
Amazon Relational Database Service
User Guide

Amazon Relational Database Service: User Guide


Copyright © 2023 Amazon Web Services, Inc. and/or its affiliates. All rights reserved.

Amazon's trademarks and trade dress may not be used in connection with any product or service that is not
Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or
discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may
or may not be affiliated with, connected to, or sponsored by Amazon.

Table of Contents
What is Amazon RDS? ........................................................................................................................ 1
Overview ................................................................................................................................... 1
Amazon EC2 and on-premises databases ............................................................................... 1
Amazon RDS and Amazon EC2 ............................................................................................ 2
Amazon RDS Custom for Oracle and Microsoft SQL Server ...................................................... 3
Amazon RDS on AWS Outposts ............................................................................................ 3
DB instances .............................................................................................................................. 3
DB engines ........................................................................................................................ 4
DB instance classes ............................................................................................................ 4
DB instance storage ............................................................................................................ 4
Amazon Virtual Private Cloud (Amazon VPC) ......................................................................... 5
AWS Regions and Availability Zones ............................................................................................. 5
Security .................................................................................................................................... 5
Amazon RDS monitoring ............................................................................................................. 5
How to work with Amazon RDS ................................................................................................... 5
AWS Management Console .................................................................................................. 6
Command line interface ...................................................................................................... 6
Amazon RDS APIs .............................................................................................................. 6
How you are charged for Amazon RDS ......................................................................................... 6
What's next? .............................................................................................................................. 6
Getting started .................................................................................................................. 6
Topics specific to database engines ...................................................................................... 6
Amazon RDS shared responsibility model ...................................................................................... 8
DB instances .............................................................................................................................. 9
DB instance classes ................................................................................................................... 11
DB instance class types ..................................................................................................... 11
Supported DB engines ...................................................................................................... 14
Determining DB instance class support in AWS Regions ......................................................... 68
Changing your DB instance class ........................................................................................ 71
Configuring the processor for RDS for Oracle ....................................................................... 71
Hardware specifications ..................................................................................................... 87
DB instance storage ................................................................................................................ 101
Storage types ................................................................................................................. 101
General Purpose SSD storage ........................................................................................... 102
Provisioned IOPS storage ................................................................................................ 104
Comparing SSD storage types .......................................................................................... 106
Magnetic storage ............................................................................................................ 107
Monitoring storage performance ...................................................................................... 107
Factors that affect storage performance ............................................................................ 108
Regions, Availability Zones, and Local Zones .............................................................................. 110
AWS Regions .................................................................................................................. 111
Availability Zones ........................................................................................................... 113
Local Zones ................................................................................................................... 114
Supported Amazon RDS features by Region and engine .............................................................. 116
Table conventions ........................................................................................................... 116
Feature quick reference ................................................................................................... 116
Blue/Green Deployments ................................................................................................. 118
Cross-Region automated backups ..................................................................................... 118
Cross-Region read replicas ............................................................................................... 119
Database activity streams ................................................................................................ 121
Dual-stack mode ............................................................................................................ 125
Export snapshots to S3 ................................................................................................... 133
IAM database authentication ............................................................................................ 138
Kerberos authentication .................................................................................................. 141
Multi-AZ DB clusters ....................................................................................................... 147
Performance Insights ...................................................................................................... 150
RDS Custom ................................................................................................................... 151
Amazon RDS Proxy ......................................................................................................... 155
Secrets Manager integration ............................................................................................ 161
Engine-native features .................................................................................................... 162
DB instance billing for Amazon RDS ......................................................................................... 163
On-Demand DB instances ................................................................................................ 164
Reserved DB instances .................................................................................................... 165
Setting up ..................................................................................................................................... 174
Sign up for an AWS account .................................................................................................... 174
Create an administrative user ................................................................................................... 174
Grant programmatic access ...................................................................................................... 175
Determine requirements .......................................................................................................... 176
Provide access to your DB instance ........................................................................................... 177
Getting started ............................................................................................................................... 180
Creating and connecting to a MariaDB DB instance ..................................................................... 181
Prerequisites .................................................................................................................. 182
Step 1: Create an EC2 instance ......................................................................................... 182
Step 2: Create a MariaDB DB instance ............................................................................... 185
Step 3: Connect to a MariaDB DB instance ......................................................................... 190
Step 4: Delete the EC2 instance and DB instance ................................................................ 193
(Optional) Connect your DB instance to a Lambda function .................................................. 193
Creating and connecting to a Microsoft SQL Server DB instance ................................................... 194
Prerequisites .................................................................................................................. 195
Step 1: Create an EC2 instance ......................................................................................... 195
Step 2: Create a SQL Server DB instance ........................................................................... 199
Step 3: Connecting to your SQL Server DB instance ............................................................ 204
Step 4: Exploring your sample DB instance ........................................................................ 206
Step 5: Delete the EC2 instance and DB instance ................................................................ 208
(Optional) Connect your DB instance to a Lambda function .................................................. 208
Creating and connecting to a MySQL DB instance ....................................................................... 209
Prerequisites .................................................................................................................. 210
Step 1: Create an EC2 instance ......................................................................................... 210
Step 2: Create a MySQL DB instance ................................................................................. 213
Step 3: Connect to a MySQL DB instance ........................................................................... 218
Step 4: Delete the EC2 instance and DB instance ................................................................ 221
(Optional) Connect your DB instance to a Lambda function .................................................. 221
Creating and connecting to an Oracle DB instance ...................................................................... 222
Prerequisites .................................................................................................................. 223
Step 1: Create an EC2 instance ......................................................................................... 223
Step 2: Create an Oracle DB instance ................................................................................ 226
Step 3: Connect your SQL client to an Oracle DB instance .................................................... 231
Step 4: Delete the EC2 instance and DB instance ................................................................ 234
(Optional) Connect your DB instance to a Lambda function .................................................. 234
Creating and connecting to a PostgreSQL DB instance ................................................................ 235
Prerequisites .................................................................................................................. 236
Step 1: Create an EC2 instance ......................................................................................... 236
Step 2: Create a PostgreSQL DB instance ........................................................................... 240
Step 3: Connect to a PostgreSQL DB instance .................................................................... 245
Step 4: Delete the EC2 instance and DB instance ................................................................ 248
(Optional) Connect your DB instance to a Lambda function .................................................. 248
Tutorial: Create a web server and an Amazon RDS DB instance ..................................................... 249
Launch an EC2 instance ................................................................................................... 250
Create a DB instance ....................................................................................................... 255
Install a web server ........................................................................................................ 264
Tutorial: Create a Lambda function to access your Amazon RDS DB instance ................................... 273
Prerequisites .................................................................................................. 274
Create an Amazon RDS DB instance .................................................................................. 274
Create Lambda function and proxy ................................................................................... 275
Create a function execution role ....................................................................................... 275
Create a Lambda deployment package .............................................................................. 276
Update the Lambda function ........................................................................................... 278
Test your Lambda function in the console ......................................................................... 279
Create an Amazon SQS queue .......................................................................................... 280
Create an event source mapping to invoke your Lambda function ......................................... 280
Test and monitor your setup ............................................................................................ 281
Clean up your resources .................................................................................................. 282
Tutorials and sample code ............................................................................................................... 283
Tutorials in this guide ............................................................................................................. 283
Tutorials in other AWS guides .................................................................................................. 284
AWS workshop and lab content portal for Amazon RDS PostgreSQL .............................................. 284
AWS workshop and lab content portal for Amazon RDS MySQL .................................................... 284
Tutorials and sample code in GitHub ......................................................................................... 285
Working with AWS SDKs ......................................................................................................... 285
Best practices for Amazon RDS ........................................................................................................ 286
Amazon RDS basic operational guidelines .................................................................................. 286
DB instance RAM recommendations .......................................................................................... 287
Using Enhanced Monitoring to identify operating system issues .................................................... 287
Using metrics to identify performance issues ............................................................................. 287
Viewing performance metrics ........................................................................................... 287
Evaluating performance metrics ....................................................................................... 290
Tuning queries ....................................................................................................................... 291
Best practices for working with MySQL ..................................................................................... 292
Table size ...................................................................................................................... 292
Number of tables ........................................................................................................... 292
Storage engine ............................................................................................................... 293
Best practices for working with MariaDB ................................................................................... 293
Table size ...................................................................................................................... 293
Number of tables ........................................................................................................... 294
Storage engine ............................................................................................................... 294
Best practices for working with Oracle ...................................................................................... 294
Best practices for working with PostgreSQL ............................................................................... 294
Loading data into a PostgreSQL DB instance ...................................................................... 295
Working with the PostgreSQL autovacuum feature ............................................................. 295
Amazon RDS for PostgreSQL best practices video ............................................................... 296
Best practices for working with SQL Server ................................................................................ 296
Amazon RDS for SQL Server best practices video ................................................................ 297
Working with DB parameter groups .......................................................................................... 297
Best practices for automating DB instance creation ..................................................................... 297
Amazon RDS new features and best practices presentation video .................................................. 298
Configuring a DB instance ............................................................................................................... 299
Creating a DB instance ............................................................................................................ 300
Prerequisites .................................................................................................................. 300
Creating a DB instance .................................................................................................... 303
Available settings ........................................................................................................... 308
Creating resources with AWS CloudFormation ............................................................................ 324
RDS and AWS CloudFormation templates .......................................................................... 324
Learn more about AWS CloudFormation ............................................................................ 324
Connecting to a DB instance .................................................................................................... 325
Finding the connection information .................................................................................. 325
Database authentication options ...................................................................................... 328
Encrypted connections .................................................................................................... 329
Scenarios for accessing a DB instance ................................................................................ 329
Connecting to a DB instance running a specific DB engine ................................................... 329
Managing connections with RDS Proxy .............................................................................. 330
Working with option groups .................................................................................................... 331
Option groups overview .................................................................................................. 331
Creating an option group ................................................................................................ 332
Copying an option group ................................................................................................. 334
Adding an option to an option group ................................................................................ 335
Listing the options and option settings for an option group ................................................. 339
Modifying an option setting ............................................................................................ 340
Removing an option from an option group ........................................................................ 343
Deleting an option group ................................................................................................ 344
Working with parameter groups ............................................................................................... 347
Overview of parameter groups ......................................................................................... 347
Working with DB parameter groups .................................................................................. 349
Working with DB cluster parameter groups ........................................................................ 360
Comparing parameter groups ........................................................................................... 368
Specifying DB parameters ................................................................................................ 369
Creating an ElastiCache cluster from Amazon RDS ...................................................................... 374
Overview of ElastiCache cluster creation with RDS DB instance settings ................................. 374
Creating an ElastiCache cluster with settings from a new RDS DB instance .............................. 375
Creating an ElastiCache cluster with settings from an existing RDS DB instance ....................... 377
Managing a DB instance .................................................................................................................. 380
Stopping a DB instance ........................................................................................................... 381
Supported engines, classes, and Regions ........................................................................... 381
Support for Multi-AZ ...................................................................................................... 381
How it works ................................................................................................................. 381
Benefits ......................................................................................................................... 382
Limitations ..................................................................................................................... 382
Option and parameter group considerations ...................................................................... 382
Public IP address ............................................................................................................ 383
Stopping a DB instance ................................................................................................... 383
Starting a DB instance ............................................................................................................ 384
Connecting an AWS compute resource ...................................................................................... 385
Connecting an EC2 instance ............................................................................................. 385
Connecting a Lambda function ......................................................................................... 392
Modifying a DB instance .......................................................................................................... 401
Apply Immediately setting ............................................................................................... 402
Available settings ........................................................................................................... 402
Maintaining a DB instance ....................................................................................................... 418
Viewing pending maintenance .......................................................................................... 418
Applying updates ........................................................................................................... 421
Maintenance for Multi-AZ deployments ............................................................................. 422
The maintenance window ................................................................................................ 423
Adjusting the maintenance window for a DB instance .......................................................... 424
Working with operating system updates ............................................................................ 426
Upgrading the engine version .................................................................................................. 429
Manually upgrading the engine version ............................................................................. 429
Automatically upgrading the minor engine version ............................................................. 431
Renaming a DB instance .......................................................................................................... 434
Renaming to replace an existing DB instance ..................................................................... 434
Rebooting a DB instance ......................................................................................................... 436
Working with DB instance read replicas ..................................................................................... 438
Overview ....................................................................................................................... 439
Creating a read replica .................................................................................................... 445
Promoting a read replica ................................................................................................. 447
Monitoring read replication .............................................................................................. 449
Cross-Region read replicas ............................................................................................... 452
Tagging RDS resources ............................................................................................................ 461
Overview ....................................................................................................................... 461
Using tags for access control with IAM .............................................................................. 462
Using tags to produce detailed billing reports .................................................................... 462
Adding, listing, and removing tags .................................................................................... 463
Using the AWS Tag Editor ............................................................................................... 465
Copying tags to DB instance snapshots ............................................................................. 465
Tutorial: Use tags to specify which DB instances to stop ...................................................... 466
Enabling backups ........................................................................................................... 468
Working with ARNs ................................................................................................................. 471
Constructing an ARN ....................................................................................................... 471
Getting an existing ARN .................................................................................................. 475
Working with storage .............................................................................................................. 478
Increasing DB instance storage capacity ............................................................................ 478
Managing capacity automatically with storage autoscaling ................................................... 480
Modifying Provisioned IOPS settings ................................................................................. 484
I/O-intensive storage modifications .................................................................................. 486
Modifying General Purpose (gp3) settings .......................................................................... 486
Deleting a DB instance ............................................................................................................ 489
Prerequisites for deleting a DB instance ............................................................................ 489
Considerations when deleting a DB instance ...................................................................... 489
Deleting a DB instance .................................................................................................... 490
Configuring and managing a Multi-AZ deployment ............................................................................. 492
Multi-AZ DB instance deployments ........................................................................................... 493
Modifying a DB instance to be a Multi-AZ DB instance deployment ....................................... 494
Failover process for Amazon RDS ...................................................................................... 495
Multi-AZ DB cluster deployments ............................................................................................. 499
Region and version availability ......................................................................................... 499
Instance class availability ................................................................................................. 499
Overview of Multi-AZ DB clusters ..................................................................................... 500
Limitations for Multi-AZ DB clusters .................................................................................. 501
Managing a Multi-AZ DB cluster with the AWS Management Console ..................................... 502
Working with parameter groups for Multi-AZ DB clusters ..................................................... 503
Upgrading the engine version of a Multi-AZ DB cluster ........................................................ 503
Replica lag and Multi-AZ DB clusters ................................................................................. 504
Failover process for Multi-AZ DB clusters ........................................................................... 505
Creating a Multi-AZ DB cluster ......................................................................................... 508
Connecting to a Multi-AZ DB cluster ................................................................................. 522
Connecting an AWS compute resource and a Multi-AZ DB cluster .......................................... 525
Modifying a Multi-AZ DB cluster ....................................................................................... 539
Renaming a Multi-AZ DB cluster ....................................................................................... 550
Rebooting a Multi-AZ DB cluster ...................................................................................... 552
Working with Multi-AZ DB cluster read replicas .................................................................. 554
Using PostgreSQL logical replication with Multi-AZ DB clusters ............................................. 561
Deleting a Multi-AZ DB cluster ......................................................................................... 563
Using Extended Support .................................................................................................................. 565
Using Blue/Green Deployments for database updates ......................................................................... 566
Overview of Amazon RDS Blue/Green Deployments .................................................................... 567
Benefits ......................................................................................................................... 567
Workflow ....................................................................................................................... 568
Authorizing access .......................................................................................................... 572
Considerations ................................................................................................................ 572
Best practices ................................................................................................................. 574
Region and version availability ......................................................................................... 575
Limitations ..................................................................................................................... 575
Creating a blue/green deployment ........................................................................................... 575
Making changes in the green environment ......................................................................... 576
Handling lazy loading ..................................................................................................... 576
Creating the blue/green deployment ................................................................................ 577
Viewing a blue/green deployment ............................................................................................ 579
Switching a blue/green deployment ......................................................................................... 582
Switchover timeout ......................................................................................................... 582
Switchover guardrails ...................................................................................................... 583
Switchover actions .......................................................................................................... 583
Switchover best practices ................................................................................................ 584
Verifying CloudWatch metrics before switchover ................................................................ 584
Switching over a blue/green deployment .......................................................................... 585
After switchover ............................................................................................................. 587
Deleting a blue/green deployment ........................................................................................... 587
Backing up and restoring ................................................................................................................. 590
Working with backups ............................................................................................................. 591
Backup storage .............................................................................................................. 591
Backup window .............................................................................................................. 591
Backup retention period .................................................................................................. 593
Enabling automated backups ........................................................................................... 593
Retaining automated backups .......................................................................................... 595
Deleting retained automated backups ............................................................................... 596
Disabling automated backups .......................................................................................... 597
Using AWS Backup ......................................................................................................... 599
Unsupported MySQL storage engines ................................................................................ 599
Unsupported MariaDB storage engines .............................................................................. 600
Backing up and restoring a DB instance ..................................................................................... 600
Cross-Region automated backups ..................................................................................... 602
Creating a DB snapshot ................................................................................................... 613
Restoring from a DB snapshot .......................................................................................... 615
Copying a DB snapshot ................................................................................................... 619
Sharing a DB snapshot .................................................................................................... 633
Exporting DB snapshot data to Amazon S3 ........................................................................ 642
Restoring a DB instance to a specified time ....................................................................... 660
Deleting a DB snapshot ................................................................................................... 663
Tutorial: Restore a DB instance from a DB snapshot ............................................................ 665
Backing up and restoring a Multi-AZ DB cluster .......................................................................... 668
Creating a Multi-AZ DB cluster snapshot ............................................................................ 669
Restoring from a snapshot to a Multi-AZ DB cluster ............................................................ 671
Restoring from a Multi-AZ DB cluster snapshot to a DB instance ........................................... 673
Restoring a Multi-AZ DB cluster to a specified time ............................................................. 675
Monitoring metrics in a DB instance ................................................................................................. 678
Overview of monitoring .......................................................................................................... 679
Monitoring plan ............................................................................................................. 679
Performance baseline ...................................................................................................... 679
Performance guidelines ................................................................................................... 679
Monitoring tools ............................................................................................................. 680
Viewing instance status and recommendations ........................................................................... 683
Viewing Amazon RDS DB instance status ........................................................................... 684
Viewing Amazon RDS recommendations ............................................................................ 688
Viewing metrics in the Amazon RDS console .............................................................................. 696
Viewing combined metrics in the Amazon RDS console ............................................................... 699
Choosing the new monitoring view in the Monitoring tab ................................................... 699
Choosing the new monitoring view with Performance Insights in the navigation pane ............. 700
Choosing the legacy view with Performance Insights in the navigation pane .......................... 701
Creating a custom dashboard with Performance Insights in the navigation pane .................... 702
Choosing the preconfigured dashboard with Performance Insights in the navigation pane ....... 705
Monitoring RDS with CloudWatch ............................................................................................. 706
Overview of Amazon RDS and Amazon CloudWatch ............................................................ 707
Viewing CloudWatch metrics ............................................................................................ 708
Creating CloudWatch alarms ............................................................................................ 713
Tutorial: Creating a CloudWatch alarm for DB cluster replica lag ........................................... 713
Monitoring DB load with Performance Insights ........................................................................... 720
Overview of Performance Insights .................................................................................... 720
Turning Performance Insights on and off ........................................................................... 727
Turning on the Performance Schema for MariaDB or MySQL ................................................ 731
Performance Insights policies ........................................................................................... 734
Analyzing metrics with the Performance Insights dashboard ................................................ 738
Retrieving metrics with the Performance Insights API .......................................................... 769
Logging Performance Insights calls using AWS CloudTrail .................................................... 786
Analyzing performance with DevOps Guru for RDS ..................................................................... 789
Benefits of DevOps Guru for RDS ..................................................................................... 789
How DevOps Guru for RDS works ..................................................................................... 790
Setting up DevOps Guru for RDS ...................................................................................... 791
Monitoring the OS with Enhanced Monitoring ............................................................................ 797
Overview of Enhanced Monitoring .................................................................................... 797
Setting up and enabling Enhanced Monitoring ................................................................... 798
Viewing OS metrics in the RDS console ............................................................................. 802
Viewing OS metrics using CloudWatch Logs ....................................................................... 805
RDS metrics reference ............................................................................................................. 806
CloudWatch metrics for RDS ............................................................................................ 806
CloudWatch dimensions for RDS ...................................................................................... 813
CloudWatch metrics for Performance Insights .................................................................... 813
Counter metrics for Performance Insights .......................................................................... 814
SQL statistics for Performance Insights ............................................................................. 830
OS metrics in Enhanced Monitoring .................................................................................. 837
Monitoring events, logs, and database activity streams ....................................................................... 846
Viewing logs, events, and streams in the Amazon RDS console ..................................................... 846
Monitoring RDS events ............................................................................................................ 850
Overview of events for Amazon RDS ................................................................................. 850
Viewing Amazon RDS events ............................................................................................ 852
Working with Amazon RDS event notification .................................................................... 855
Creating a rule that triggers on an Amazon RDS event ........................................................ 870
Amazon RDS event categories and event messages ............................................................. 874
Monitoring RDS logs ............................................................................................................... 895
Viewing and listing database log files ............................................................................... 895
Downloading a database log file ...................................................................................... 896
Watching a database log file ............................................................................................ 897
Publishing to CloudWatch Logs ........................................................................................ 898
Reading log file contents using REST ................................................................................ 900
MariaDB database log files .............................................................................................. 902
Microsoft SQL Server database log files ............................................................................ 911
MySQL database log files ................................................................................................ 915
Oracle database log files ................................................................................................. 924
PostgreSQL database log files .......................................................................................... 931
Monitoring RDS API calls in CloudTrail ...................................................................................... 940
CloudTrail integration with Amazon RDS ........................................................................... 940
Amazon RDS log file entries ............................................................................................ 940
Monitoring RDS with Database Activity Streams ......................................................................... 944
Overview ....................................................................................................................... 944
Configuring Oracle unified auditing .................................................................................. 948
Configuring SQL Server auditing ...................................................................................... 949
Starting a database activity stream ................................................................................... 950
Modifying a database activity stream ................................................................................ 951
Getting the activity stream status ..................................................................................... 953
Stopping a database activity stream ................................................................................. 954
Monitoring activity streams ............................................................................................. 955
Managing access to activity streams .................................................................................. 975
Working with Amazon RDS Custom .................................................................................................. 978
Database customization challenge ............................................................................................ 978
RDS Custom management model and benefits ........................................................................... 979
Shared responsibility model in RDS Custom ....................................................................... 979
Support perimeter and unsupported configurations in RDS Custom ....................................... 981
Key benefits of RDS Custom ............................................................................................ 981
RDS Custom architecture ......................................................................................................... 981
VPC .............................................................................................................................. 982
RDS Custom automation and monitoring ........................................................................... 983
Amazon S3 .................................................................................................................... 986
AWS CloudTrail .............................................................................................................. 986
RDS Custom security ............................................................................................................... 988
How RDS Custom securely manages tasks on your behalf .................................................... 988
SSL certificates ............................................................................................................... 989
Securing your Amazon S3 bucket against the confused deputy problem ................................. 989
Rotating RDS Custom for Oracle credentials for compliance programs ................................... 990
Working with RDS Custom for Oracle ........................................................................................ 993
RDS Custom for Oracle workflow ..................................................................................... 993
Database architecture for Amazon RDS Custom for Oracle ................................................... 997
RDS Custom for Oracle requirements and limitations .......................................................... 999
Setting up your RDS Custom for Oracle environment ........................................................ 1002
Working with CEVs for RDS Custom for Oracle ................................................................. 1015
Configuring an RDS Custom for Oracle DB instance ........................................................... 1035
Managing an RDS Custom for Oracle DB instance ............................................................. 1047
Working with RDS Custom for Oracle replicas .................................................................. 1060
Backing up and restoring an RDS Custom for Oracle DB instance ........................................ 1065
Migrating to RDS Custom for Oracle ............................................................................... 1072
Upgrading a DB instance for RDS Custom for Oracle ......................................................... 1073
Troubleshooting RDS Custom for Oracle .......................................................................... 1078
Working with RDS Custom for SQL Server ............................................................................... 1087
RDS Custom for SQL Server workflow ............................................................................. 1087
RDS Custom for SQL Server requirements and limitations .................................................. 1089
Setting up your RDS Custom for SQL Server environment .................................................. 1099
Bring Your Own Media with RDS Custom for SQL Server .................................................... 1113
Working with CEVs for RDS Custom for SQL Server ........................................................... 1115
Creating and connecting to an RDS Custom for SQL Server DB instance ............................... 1130
Managing an RDS Custom for SQL Server DB instance ....................................................... 1138
Managing a Multi-AZ deployment for RDS Custom for SQL Server ....................................... 1147
Backing up and restoring an RDS Custom for SQL Server DB instance .................................. 1157
Migrating an on-premises database to RDS Custom for SQL Server ..................................... 1165
Upgrading a DB instance for RDS Custom for SQL Server ................................................... 1168
Troubleshooting Amazon RDS Custom for SQL Server ....................................................... 1169
Working with RDS on AWS Outposts ............................................................................................... 1177
Prerequisites ........................................................................................................................ 1178
Support for Amazon RDS features .......................................................................................... 1179
Supported DB instance classes ............................................................................................... 1182
Customer-owned IP addresses ................................................................................................ 1184
Using CoIPs .................................................................................................................. 1184
Limitations ................................................................................................................... 1185
Multi-AZ deployments ........................................................................................................... 1186
Working with the shared responsibility model .................................................................. 1186
Improving availability .................................................................................................... 1186
Prerequisites ................................................................................................................ 1187
Working with API operations for Amazon EC2 permissions ................................................. 1188
Creating DB instances for RDS on Outposts ............................................................. 1189
Creating read replicas for RDS on Outposts .............................................................. 1196
Considerations for restoring DB instances ................................................................................ 1198
Using RDS Proxy ........................................................................................................................... 1199
Region and version availability ............................................................................................... 1199
Quotas and limitations .......................................................................................................... 1199
RDS for MariaDB limitations ........................................................................................... 1200
RDS for SQL Server limitations ....................................................................................... 1201
MySQL limitations ......................................................................................................... 1201
PostgreSQL limitations .................................................................................................. 1201
Planning where to use RDS Proxy ........................................................................................... 1202
RDS Proxy concepts and terminology ...................................................................................... 1203
Overview of RDS Proxy concepts .................................................................................... 1203
Connection pooling ....................................................................................................... 1204
Security ....................................................................................................................... 1204
Failover ....................................................................................................................... 1206
Transactions ................................................................................................................. 1206
Getting started with RDS Proxy .............................................................................................. 1207
Setting up network prerequisites .................................................................................... 1207
Setting up database credentials in Secrets Manager .......................................................... 1209
Setting up IAM policies .................................................................................................. 1210
Creating an RDS Proxy .................................................................................................. 1212
Viewing an RDS Proxy ................................................................................................... 1217
Connecting through RDS Proxy ....................................................................................... 1218
Managing an RDS Proxy ........................................................................................................ 1220
Modifying an RDS Proxy ................................................................................................ 1221
Adding a database user ................................................................................................. 1225
Changing database passwords ........................................................................................ 1226
Configuring connection settings ..................................................................................... 1226
Avoiding pinning .......................................................................................................... 1228
Deleting an RDS Proxy .................................................................................................. 1232
Working with RDS Proxy endpoints ......................................................................................... 1232
Overview of proxy endpoints ......................................................................................... 1233
Reader endpoints .......................................................................................................... 1233
Accessing Aurora and RDS databases across VPCs ............................................................. 1233
Creating a proxy endpoint ............................................................................................. 1234
Viewing proxy endpoints ............................................................................................... 1236
Modifying a proxy endpoint ........................................................................................... 1237
Deleting a proxy endpoint ............................................................................................. 1238
Limitations for proxy endpoints ...................................................................................... 1239
Monitoring RDS Proxy with CloudWatch .................................................................................. 1239
Working with RDS Proxy events .............................................................................................. 1244
RDS Proxy events ......................................................................................................... 1244
RDS Proxy examples ............................................................................................................. 1245
Troubleshooting RDS Proxy .................................................................................................... 1247
Verifying connectivity for a proxy ................................................................................... 1248
Common issues and solutions ........................................................................................ 1249
Using RDS Proxy with AWS CloudFormation ............................................................................. 1253
MariaDB on Amazon RDS ............................................................................................................... 1255
MariaDB feature support ....................................................................................................... 1256
MariaDB major versions ................................................................................................. 1256
Supported storage engines ............................................................................................ 1261
Cache warming ............................................................................................................. 1262
Features not supported ................................................................................................. 1263
MariaDB versions .................................................................................................................. 1265
Supported MariaDB minor versions ................................................................................. 1265
Supported MariaDB major versions ................................................................................. 1267
MariaDB 10.3 RDS end of standard support ..................................................... 1267
MariaDB 10.2 RDS end of standard support ..................................................... 1268
Deprecated MariaDB versions ......................................................................................... 1268
Connecting to a DB instance running MariaDB .......................................................................... 1269
Finding the connection information ................................................................................ 1270
Connecting from the MySQL command-line client (unencrypted) ........................................ 1272
Troubleshooting ............................................................................................................ 1273
Securing MariaDB connections ................................................................................................ 1274
MariaDB security ........................................................................................................... 1274
Encrypting with SSL/TLS ............................................................................................... 1275
Using new SSL/TLS certificates ...................................................................................... 1277
Improving query performance with RDS Optimized Reads .......................................................... 1281
Overview ..................................................................................................................... 1281
Use cases ..................................................................................................................... 1281
Best practices ............................................................................................................... 1282
Using .......................................................................................................................... 1282
Monitoring ................................................................................................................... 1283
Limitations ................................................................................................................... 1283
Improving write performance with RDS Optimized Writes for MariaDB ......................................... 1284
Overview ..................................................................................................................... 1284
Using .......................................................................................................................... 1285
Limitations ................................................................................................................... 1288
Upgrading the MariaDB DB engine .......................................................................................... 1289
Overview ..................................................................................................................... 1289
Major version upgrades ................................................................................................. 1290
Upgrading a MariaDB DB instance ................................................................................... 1291
Automatic minor version upgrades .................................................................................. 1291
Upgrading with reduced downtime ................................................................................. 1293
Importing data into a MariaDB DB instance .............................................................................. 1296
Importing data from an external database ....................................................................... 1297
Importing data to a DB instance with reduced downtime ................................................... 1299
Importing data from any source ..................................................................................... 1313
Working with MariaDB replication ........................................................................................... 1318
Working with MariaDB read replicas ................................................................................ 1318
Configuring GTID-based replication with an external source instance ................................... 1328
Configuring binary log file position replication with an external source instance .................... 1331
Options for MariaDB ............................................................................................................. 1334
MariaDB Audit Plugin support ........................................................................................ 1334
Parameters for MariaDB ........................................................................................................ 1338
Viewing MariaDB parameters .......................................................................................... 1338
MySQL parameters that aren't available .......................................................................... 1339
Migrating data from a MySQL DB snapshot to a MariaDB DB instance .......................................... 1341
Performing the migration .............................................................................................. 1341
Incompatibilities between MariaDB and MySQL ................................................................ 1343
MariaDB on Amazon RDS SQL reference .................................................................................. 1344
mysql.rds_replica_status ................................................................................................ 1344
mysql.rds_set_external_master_gtid ................................................................................ 1345
mysql.rds_kill_query_id .................................................................................................. 1347
Local time zone .................................................................................................................... 1349
Known issues and limitations for MariaDB ................................................................................ 1352
File size limits .............................................................................................................. 1352
InnoDB reserved word ................................................................................................... 1353
Custom ports ............................................................................................................... 1353
Performance Insights .................................................................................................... 1353
Microsoft SQL Server on Amazon RDS ............................................................................................. 1354
Common management tasks .................................................................................................. 1355
Limitations ........................................................................................................................... 1357
DB instance class support ...................................................................................... 1358
Security ............................................................................................................... 1360
Compliance programs ............................................................................................................ 1361
HIPAA .......................................................................................................................... 1361
SSL support ......................................................................................................................... 1362
Version support .................................................................................................................... 1362
Version management ............................................................................................................ 1363
Database engine patches and versions ............................................................................. 1363
Deprecation schedule .................................................................................................... 1364
Feature support .................................................................................................................... 1364
SQL Server 2019 features .............................................................................................. 1365
SQL Server 2017 features .............................................................................................. 1365
SQL Server 2016 features .............................................................................................. 1366
SQL Server 2014 features .............................................................................................. 1366
SQL Server 2012 end of support on Amazon RDS ............................................................. 1366
SQL Server 2008 R2 end of support on Amazon RDS ........................................................ 1366
CDC support ........................................................................................................................ 1366
Features not supported and features with limited support ......................................................... 1367
Multi-AZ deployments ........................................................................................................... 1368
Using TDE ............................................................................................................................ 1368
Functions and stored procedures ............................................................................................ 1368
Local time zone .................................................................................................................... 1371
Supported time zones ................................................................................................... 1371
Licensing SQL Server on Amazon RDS ..................................................................................... 1379
Restoring license-terminated DB instances ....................................................................... 1379
SQL Server Developer Edition ......................................................................................... 1379
Connecting to a DB instance running SQL Server ...................................................................... 1380
Before you connect ....................................................................................................... 1380
Finding the DB instance endpoint and port number .......................................................... 1380
Connecting to your DB instance with SSMS ...................................................................... 1381
Connecting to your DB instance with SQL Workbench/J ..................................................... 1384
Security group considerations ......................................................................................... 1385
Troubleshooting ............................................................................................................ 1385
Working with Active Directory with RDS for SQL Server ............................................................. 1387
Working with Self Managed Active Directory with a SQL Server DB instance ......................... 1388
Working with AWS Managed Active Directory with RDS for SQL Server ................................ 1401
Updating applications for new SSL/TLS certificates ................................................................... 1411
Determining whether any applications are connecting to your Microsoft SQL Server DB instance using SSL ........................................................................ 1411
Determining whether a client requires certificate verification in order to connect ................... 1412
Updating your application trust store .............................................................................. 1413
Upgrading the SQL Server DB engine ...................................................................................... 1414
Overview ..................................................................................................................... 1415
Major version upgrades ................................................................................................. 1415
Multi-AZ and in-memory optimization considerations ........................................................ 1417
Read replica considerations ............................................................................................ 1417
Option group considerations .......................................................................................... 1417
Parameter group considerations ..................................................................................... 1417
Testing an upgrade ....................................................................................................... 1417
Upgrading a SQL Server DB instance ........................................................... 1418
Upgrading deprecated DB instances before support ends ................................................... 1418
Importing and exporting SQL Server databases ........................................................................ 1419
Limitations and recommendations .................................................................................. 1420
Setting up ................................................................................................................... 1421
Using native backup and restore ..................................................................................... 1425
Compressing backup files .............................................................................................. 1435
Troubleshooting ............................................................................................................ 1435
Importing and exporting SQL Server data using other methods .......................... 1437
Working with SQL Server read replicas .................................................................... 1446
Configuring read replicas for SQL Server ......................................................................... 1446
Read replica limitations with SQL Server ......................................................................... 1446
Option considerations ................................................................................................... 1447
Synchronizing database users and objects with a SQL Server read replica ............................. 1448
Troubleshooting a SQL Server read replica problem .......................................................... 1449
Multi-AZ for RDS for SQL Server ............................................................................................ 1450
Adding Multi-AZ to a SQL Server DB instance ................................................................... 1451
Removing Multi-AZ from a SQL Server DB instance ........................................................... 1451
Limitations, notes, and recommendations ........................................................................ 1451
Determining the location of the secondary ...................................................................... 1453
Migrating to Always On AGs .......................................................................................... 1454
Additional features for SQL Server .......................................................................................... 1455
Using SSL with a SQL Server DB instance ........................................................................ 1456
Configuring security protocols and ciphers ....................................................................... 1459
Amazon S3 integration .................................................................................................. 1464
Using Database Mail ..................................................................................................... 1478
Instance store support for tempdb .................................................................................. 1489
Using extended events .................................................................................................. 1491
Access to transaction log backups ................................................................................... 1494
Options for SQL Server ......................................................................................................... 1514
Listing the available options for SQL Server versions and editions ....................................... 1515
Linked Servers with Oracle OLEDB .................................................................................. 1517
Native backup and restore ............................................................................................. 1525
Transparent Data Encryption .......................................................................................... 1528
SQL Server Audit .......................................................................................................... 1536
SQL Server Analysis Services .......................................................................................... 1543
SQL Server Integration Services ...................................................................................... 1562
SQL Server Reporting Services ....................................................................................... 1577
Microsoft Distributed Transaction Coordinator .................................................................. 1590
Common DBA tasks for SQL Server ......................................................................................... 1602
Accessing the tempdb database ...................................................................................... 1603
Analyzing database workload with Database Engine Tuning Advisor .................................... 1605
Collations and character sets .......................................................................................... 1607
Creating a database user ............................................................................................... 1611
Determining a recovery model ....................................................................................... 1611
Determining the last failover time .................................................................................. 1612
Disabling fast inserts ..................................................................................................... 1612
Dropping a SQL Server database .................................................................................... 1613
Renaming a Multi-AZ database ....................................................................................... 1613
Resetting the db_owner role password ........................................................................... 1613
Restoring license-terminated DB instances ....................................................................... 1614
Transitioning a database from OFFLINE to ONLINE ........................................................... 1614
Using CDC ................................................................................................................... 1614
Using SQL Server Agent ................................................................................................ 1617
Working with SQL Server logs ........................................................................................ 1619
Working with trace and dump files ................................................................................. 1620
MySQL on Amazon RDS ................................................................................................................. 1622
MySQL feature support ......................................................................................................... 1624
Supported storage engines ............................................................................................ 1624
Using memcached and other options .............................................................................. 1624
InnoDB cache warming .................................................................................................. 1625
Features not supported ................................................................................................. 1625
MySQL versions .................................................................................................................... 1627
Supported MySQL minor versions ................................................................................... 1627
Supported MySQL major versions ................................................................................... 1629
Deprecated MySQL versions ........................................................................... 1629
Connecting to a DB instance running MySQL ............................................................ 1630
Finding the connection information ................................................................................ 1631
Connecting from the MySQL command-line client (unencrypted) ........................................ 1633
Connecting from MySQL Workbench ............................................................................... 1634
Connecting with the AWS JDBC Driver for MySQL ............................................................. 1635
Troubleshooting ............................................................................................................ 1636
Securing MySQL connections .................................................................................................. 1637
MySQL security ............................................................................................................ 1637
Password Validation Plugin ............................................................................................ 1638
Encrypting with SSL/TLS ............................................................................................... 1639
Using new SSL/TLS certificates ...................................................................................... 1642
Using Kerberos authentication for MySQL ........................................................................ 1645
Improving query performance with RDS Optimized Reads .......................................................... 1656
Overview ..................................................................................................................... 1656
Use cases ..................................................................................................................... 1656
Best practices ............................................................................................................... 1657
Using .......................................................................................................................... 1657
Monitoring ................................................................................................................... 1658
Limitations ................................................................................................................... 1658
Improving write performance with RDS Optimized Writes for MySQL ........................................... 1659
Overview ..................................................................................................................... 1659
Using .......................................................................................................................... 1660
Limitations ................................................................................................................... 1663
Upgrading the MySQL DB engine ........................................................................................... 1664
Overview ..................................................................................................................... 1664
Major version upgrades ................................................................................................. 1665
Testing an upgrade ....................................................................................................... 1669
Upgrading a MySQL DB instance .................................................................................... 1669
Automatic minor version upgrades .................................................................................. 1669
Upgrading with reduced downtime ................................................................................. 1671
Importing data into a MySQL DB instance ............................................................................... 1674
Overview ..................................................................................................................... 1674
Importing data considerations ........................................................................................ 1676
Restoring a backup into a MySQL DB instance .................................................................. 1680
Importing data from an external database ....................................................................... 1688
Importing data with reduced downtime ........................................................................... 1690
Importing data from any source ..................................................................................... 1703
Working with MySQL replication ............................................................................................. 1708
Working with MySQL read replicas .................................................................................. 1708
Using GTID-based replication ......................................................................................... 1719
Configuring binary log file position replication with an external source instance .................... 1724
Exporting data from a MySQL DB instance ............................................................................... 1728
Prepare an external MySQL database .............................................................................. 1728
Prepare the source MySQL DB instance ........................................................................... 1729
Copy the database ........................................................................................................ 1730
Complete the export ..................................................................................................... 1730
Options for MySQL ............................................................................................................... 1732
MariaDB Audit Plugin .................................................................................................... 1733
memcached .................................................................................................................. 1738
Parameters for MySQL .......................................................................................................... 1742
Common DBA tasks for MySQL .............................................................................................. 1744
Ending a session or query .............................................................................................. 1744
Skipping the current replication error .............................................................................. 1744
Working with InnoDB tablespaces to improve crash recovery times ...................................... 1745
Managing the Global Status History ................................................................................ 1747
Local time zone .................................................................................................................... 1749
Known issues and limitations ................................................................................. 1752
InnoDB reserved word ................................................................................... 1752
Storage-full behavior .................................................................................................... 1752
Inconsistent InnoDB buffer pool size ............................................................................... 1753
Index merge optimization returns incorrect results ............................................................ 1753
Log file size ................................................................................................................. 1754
MySQL parameter exceptions for Amazon RDS DB instances ............................................... 1754
MySQL file size limits in Amazon RDS ............................................................................. 1754
MySQL Keyring Plugin not supported .............................................................................. 1756
Custom ports ............................................................................................................... 1756
MySQL stored procedure limitations ................................................................................ 1756
GTID-based replication with an external source instance .................................................... 1756
RDS for MySQL stored procedures .......................................................................................... 1757
Configuring .................................................................................................................. 1758
Ending a session or query .............................................................................................. 1761
Logging ....................................................................................................................... 1763
Managing the Global Status History ................................................................................ 1764
Replicating ................................................................................................................... 1767
Warming the InnoDB cache ............................................................................................ 1784
Oracle on Amazon RDS ................................................................................................................. 1785
Oracle overview .................................................................................................................... 1786
Oracle features ............................................................................................................. 1786
Oracle versions ............................................................................................................. 1789
Oracle licensing ............................................................................................................ 1793
Oracle users and privileges ............................................................................................ 1796
Oracle instance classes .................................................................................................. 1796
Oracle database architecture .......................................................................................... 1800
Oracle parameters ........................................................................................................ 1801
Oracle character sets ..................................................................................................... 1801
Oracle limitations ......................................................................................................... 1804
Connecting to your Oracle DB instance .................................................................................... 1806
Finding the endpoint ..................................................................................................... 1806
SQL Developer .............................................................................................. 1808
SQL*Plus ...................................................................................................................... 1810
Security group considerations ......................................................................................... 1811
Dedicated and shared server processes ............................................................................ 1811
Troubleshooting ............................................................................................................ 1811
Modifying Oracle sqlnet.ora parameters .......................................................................... 1812
Securing Oracle connections .................................................................................................. 1816
Encrypting with SSL ...................................................................................................... 1816
Using new SSL/TLS certificates ...................................................................................... 1816
Encrypting with NNE ..................................................................................................... 1819
Configuring Kerberos authentication ............................................................................... 1819
Configuring UTL_HTTP access ........................................................................................ 1832
Working with CDBs ............................................................................................................... 1840
Overview of CDBs ......................................................................................................... 1840
Configuring a CDB ........................................................................................................ 1841
Backing up and restoring a CDB ..................................................................................... 1844
Converting a non-CDB to a CDB ..................................................................................... 1844
Upgrading your CDB ..................................................................................................... 1846
Administering your Oracle DB ................................................................................................ 1847
System tasks ................................................................................................................ 1855
Database tasks ............................................................................................................. 1869
Log tasks ..................................................................................................................... 1888
RMAN tasks ................................................................................................................. 1897
Oracle Scheduler tasks .................................................................................................. 1914
Diagnostic tasks ............................................................................................................ 1919
Other tasks .................................................................................................. 1926
Configuring advanced RDS for Oracle features ......................................................... 1936
Configuring the instance store ........................................................................................ 1936
Turning on HugePages .................................................................................................. 1942
Turning on extended data types ..................................................................................... 1945
Importing data into Oracle .................................................................................................... 1947
Importing using Oracle SQL Developer ............................................................................ 1947
Importing using Oracle Data Pump ................................................................................. 1948
Importing using Oracle Export/Import ............................................................................ 1959
Importing using Oracle SQL*Loader ................................................................................ 1959
Migrating with Oracle materialized views ......................................................................... 1960
Migrating using Oracle transportable tablespaces ............................................................. 1962
Working with Oracle replicas .................................................................................................. 1973
Overview of Oracle replicas ........................................................................................... 1973
Requirements and considerations for Oracle replicas ......................................................... 1974
Preparing to create an Oracle replica .............................................................................. 1977
Creating a mounted Oracle replica .................................................................................. 1978
Modifying the replica mode ........................................................................................... 1979
Working with Oracle replica backups ............................................................................... 1980
Performing an Oracle Data Guard switchover ................................................................... 1982
Troubleshooting Oracle replicas ...................................................................................... 1988
Options for Oracle ................................................................................................................ 1990
Overview of Oracle DB options ...................................................................................... 1990
Amazon S3 integration .................................................................................................. 1992
Application Express (APEX) ............................................................................................. 2009
Amazon EFS integration ................................................................................................ 2020
Java virtual machine (JVM) ............................................................................................ 2031
Enterprise Manager ....................................................................................................... 2034
Label security ............................................................................................................... 2049
Locator ........................................................................................................................ 2052
Multimedia ................................................................................................................... 2055
Native network encryption (NNE) .................................................................................... 2057
OLAP .......................................................................................................................... 2065
Secure Sockets Layer (SSL) ............................................................................................. 2068
Spatial ......................................................................................................................... 2075
SQLT ........................................................................................................................... 2078
Statspack ..................................................................................................................... 2084
Time zone .................................................................................................................... 2087
Time zone file autoupgrade ........................................................................................... 2091
Transparent Data Encryption (TDE) ................................................................................. 2097
UTL_MAIL .................................................................................................................... 2099
XML DB ....................................................................................................................... 2102
Upgrading the Oracle DB engine ............................................................................................ 2103
Overview of Oracle upgrades ......................................................................................... 2103
Major version upgrades ................................................................................................. 2106
Minor version upgrades ................................................................................................. 2107
Upgrade considerations ................................................................................................. 2108
Testing an upgrade ....................................................................................................... 2110
Upgrading an Oracle DB instance ................................................................................... 2111
Upgrading an Oracle DB snapshot .................................................................................. 2111
Tools and third-party software for Oracle ................................................................................ 2114
Setting up ................................................................................................................... 2115
Using Oracle GoldenGate ............................................................................................... 2121
Using the Oracle Repository Creation Utility .................................................................... 2135
Configuring CMAN ........................................................................................................ 2141
Installing a Siebel database on Oracle on Amazon RDS ...................................................... 2143
Oracle Database engine releases ............................................................................................. 2146
PostgreSQL on Amazon RDS .......................................................................................................... 2147
Common management tasks .................................................................................................. 2148
The database preview environment ........................................................................................ 2151
Features not supported in the preview environment .......................................................... 2151
Creating a new DB instance in the preview environment .................................................... 2151
PostgreSQL version 16 in the database preview environment ..................................................... 2153
PostgreSQL versions .............................................................................................................. 2154
Deprecation of PostgreSQL version 10 ............................................................................ 2154
Deprecation of PostgreSQL version 9.6 ............................................................................ 2155
Deprecated PostgreSQL versions ..................................................................................... 2155
PostgreSQL extension versions ............................................................................................... 2156
Restricting installation of PostgreSQL extensions .............................................................. 2156
PostgreSQL trusted extensions ....................................................................................... 2157
PostgreSQL features .............................................................................................................. 2158
Custom data types and enumerations ............................................................................. 2158
Event triggers for RDS for PostgreSQL ............................................................................ 2159
Huge pages for RDS for PostgreSQL ............................................................................... 2159
Logical replication ......................................................................................................... 2160
RAM disk for the stats_temp_directory ............................................................................ 2162
Tablespaces for RDS for PostgreSQL .............................................................................. 2162
RDS for PostgreSQL collations for EBCDIC and other mainframe migrations .......................... 2163
Connecting to a PostgreSQL instance ...................................................................................... 2167
Using pgAdmin to connect to a RDS for PostgreSQL DB instance ........................................ 2169
Using psql to connect to your RDS for PostgreSQL DB instance ........................................... 2171
Connecting with the AWS JDBC Driver for PostgreSQL ....................................................... 2171
Troubleshooting connections to your RDS for PostgreSQL instance ...................................... 2172
Securing connections with SSL/TLS ......................................................................................... 2174
Using SSL with a PostgreSQL DB instance ........................................................................ 2174
Updating applications to use new SSL/TLS certificates ...................................................... 2177
Using Kerberos authentication ................................................................................................ 2181
Region and version availability ....................................................................................... 2181
Overview of Kerberos authentication ............................................................................... 2181
Setting up ................................................................................................................... 2182
Managing a DB instance in a Domain .............................................................................. 2191
Connecting with Kerberos authentication ......................................................................... 2192
Using a custom DNS server for outbound network access .......................................................... 2195
Turning on custom DNS resolution .................................................................................. 2195
Turning off custom DNS resolution ................................................................................. 2195
Setting up a custom DNS server ..................................................................................... 2195
Upgrading the PostgreSQL DB engine ..................................................................................... 2197
Overview of upgrading .................................................................................................. 2198
PostgreSQL version numbers .......................................................................................... 2199
RDS version number ..................................................................................................... 2199
Choosing a major version upgrade .................................................................................. 2200
How to perform a major version upgrade ........................................................................ 2203
Automatic minor version upgrades .................................................................................. 2207
Upgrading PostgreSQL extensions .................................................................................. 2209
Upgrading a PostgreSQL DB snapshot engine version ................................................................ 2210
Working with read replicas for RDS for PostgreSQL ................................................................... 2212
Read replica limitations with PostgreSQL ......................................................................... 2212
Read replica configuration with PostgreSQL ..................................................................... 2213
How replication works for different RDS for PostgreSQL versions ........................................ 2215
Monitoring and tuning the replication process .................................................................. 2218
Improving query performance with RDS Optimized Reads .......................................................... 2220
Overview of RDS Optimized Reads in PostgreSQL ............................................................. 2220
Use cases ..................................................................................................................... 2221
Best practices ............................................................................................................... 2221
Using .......................................................................................................................... 2221

Monitoring ................................................................................................................... 2222


Limitations ................................................................................................................... 2222
Importing data into PostgreSQL ............................................................................................. 2223
Importing a PostgreSQL database from an Amazon EC2 instance ........................................ 2224
Using the \copy command to import data to a table on a PostgreSQL DB instance ................. 2226
Importing data from Amazon S3 into RDS for PostgreSQL ................................................. 2227
Transporting PostgreSQL databases between DB instances ................................................. 2240
Exporting PostgreSQL data to Amazon S3 ............................................................................... 2247
Installing the extension ................................................................................................. 2247
Overview of exporting to S3 .......................................................................................... 2248
Specifying the Amazon S3 file path to export to ............................................................... 2249
Setting up access to an Amazon S3 bucket ...................................................................... 2250
Exporting query data using the aws_s3.query_export_to_s3 function ................................... 2253
Troubleshooting access to Amazon S3 ............................................................................. 2255
Function reference ........................................................................................................ 2255
Invoking a Lambda function from RDS for PostgreSQL .............................................................. 2259
Step 1: Configure outbound connections ......................................................................... 2259
Step 2: Configure IAM for your instance and Lambda ........................................................ 2260
Step 3: Install the extension ........................................................................................... 2261
Step 4: Use Lambda helper functions .............................................................................. 2262
Step 5: Invoke a Lambda function ................................................................................... 2262
Step 6: Grant users permissions ...................................................................................... 2263
Examples: Invoking Lambda functions ............................................................................. 2264
Lambda function error messages .................................................................................... 2266
Lambda function reference ............................................................................................ 2267
Common DBA tasks for RDS for PostgreSQL ............................................................................ 2270
Collations supported in RDS for PostgreSQL .................................................................... 2270
Understanding PostgreSQL roles and permissions ............................................................. 2271
Working with the PostgreSQL autovacuum ...................................................................... 2280
Logging mechanisms ..................................................................................................... 2290
Managing temporary files with PostgreSQL ...................................................................... 2291
Using pgBadger for log analysis with PostgreSQL ............................................................. 2295
Using PGSnapper for snapping with PostgreSQL ............................................................... 2295
Working with parameters ............................................................................................... 2296
Tuning with wait events for RDS for PostgreSQL ...................................................................... 2306
Essential concepts for RDS for PostgreSQL tuning ............................................................. 2306
RDS for PostgreSQL wait events ..................................................................................... 2309
Client:ClientRead .......................................................................................................... 2311
Client:ClientWrite .......................................................................................................... 2313
CPU ............................................................................................................................ 2314
IO:BufFileRead and IO:BufFileWrite ................................................................................. 2319
IO:DataFileRead ............................................................................................................ 2324
IO:WALWrite ................................................................................................................. 2329
Lock:advisory ............................................................................................................... 2331
Lock:extend .................................................................................................................. 2333
Lock:Relation ................................................................................................................ 2335
Lock:transactionid ......................................................................................................... 2337
Lock:tuple .................................................................................................................... 2339
LWLock:BufferMapping (LWLock:buffer_mapping) ............................................................. 2342
LWLock:BufferIO (IPC:BufferIO) ....................................................................................... 2344
LWLock:buffer_content (BufferContent) ........................................................................... 2345
LWLock:lock_manager (LWLock:lockmanager) ................................................................... 2346
Timeout:PgSleep ........................................................................................................... 2350
Timeout:VacuumDelay ................................................................................................... 2351
Tuning RDS for PostgreSQL with Amazon DevOps Guru proactive insights .................................... 2353
Database has long running idle in transaction connection .................................................. 2353
Using PostgreSQL extensions ................................................................................................. 2356

Using functions from orafce ........................................................................................... 2357


Managing partitions with the pg_partman extension ......................................................... 2358
Using pgAudit to log database activity ............................................................................ 2362
Scheduling maintenance with the pg_cron extension ......................................................... 2371
Using pglogical to synchronize data ................................................................................ 2378
Reducing bloat with the pg_repack extension ................................................................... 2388
Upgrading and using PLV8 ............................................................................................. 2389
Using PL/Rust to write functions in the Rust language ...................................................... 2390
Managing spatial data with PostGIS ................................................................................ 2394
Supported foreign data wrappers ........................................................................................... 2401
Using the log_fdw extension .......................................................................................... 2401
Using postgres_fdw to access external data ..................................................................... 2402
Working with a MySQL database .................................................................................... 2403
Working with an Oracle database ................................................................................... 2406
Working with a SQL Server database .............................................................................. 2409
Working with Trusted Language Extensions for PostgreSQL ........................................................ 2412
Terminology ................................................................................................................. 2412
Requirements for using Trusted Language Extensions ........................................................ 2413
Setting up Trusted Language Extensions .......................................................................... 2415
Overview of Trusted Language Extensions ....................................................................... 2418
Creating TLE extensions ................................................................................................. 2419
Dropping your TLE extensions from a database ................................................................ 2422
Uninstalling Trusted Language Extensions ........................................................................ 2423
Using PostgreSQL hooks with your TLE extensions ............................................................ 2424
Using Custom Data Types in Trusted Language Extensions ................................................. 2428
Functions reference for Trusted Language Extensions ........................................................ 2428
Hooks reference for Trusted Language Extensions ............................................................. 2438
Code examples ............................................................................................................................. 2441
Actions ................................................................................................................................ 2445
Create a DB instance ..................................................................................................... 2446
Create a DB parameter group ......................................................................................... 2453
Create a snapshot of a DB instance ................................................................................. 2456
Create an authentication token ...................................................................................... 2460
Delete a DB instance ..................................................................................................... 2461
Delete a DB parameter group ......................................................................................... 2465
Describe DB instances ................................................................................................... 2469
Describe DB parameter groups ....................................................................................... 2473
Describe database engine versions .................................................................................. 2477
Describe options for DB instances ................................................................................... 2481
Describe parameters in a DB parameter group .................................................................. 2486
Describe snapshots of DB instances ................................................................................. 2491
Modify a DB instance .................................................................................................... 2494
Reboot a DB instance .................................................................................................... 2495
Retrieve attributes ........................................................................................................ 2496
Update parameters in a DB parameter group ................................................................... 2497
Scenarios ............................................................................................................................. 2500
Get started with DB instances ........................................................................................ 2501
Cross-service examples .......................................................................................................... 2561
Create an Aurora Serverless work item tracker .................................................................. 2562
Security ....................................................................................................................................... 2565
Database authentication ........................................................................................................ 2566
Password authentication ................................................................................................ 2566
IAM database authentication .......................................................................................... 2567
Kerberos authentication ................................................................................................. 2567
Password management with RDS and Secrets Manager .............................................................. 2568
Limitations ................................................................................................................... 2568
Overview ..................................................................................................................... 2568

Benefits ....................................................................................................................... 2569


Permissions required for Secrets Manager integration ........................................................ 2569
Enforcing RDS management ........................................................................................... 2570
Managing the master user password for a DB instance ...................................................... 2570
Managing the master user password for a Multi-AZ DB cluster ............................................ 2573
Rotating the master user password secret for a DB instance ............................................... 2576
Rotating the master user password secret for a Multi-AZ DB cluster ..................................... 2578
Viewing the details about a secret for a DB instance ......................................................... 2579
Viewing the details about a secret for a Multi-AZ DB cluster ............................................... 2582
Region and version availability ....................................................................................... 2585
Data protection .................................................................................................................... 2585
Data encryption ............................................................................................................ 2585
Internetwork traffic privacy ............................................................................................ 2605
Identity and access management ............................................................................................ 2606
Audience ...................................................................................................................... 2606
Authenticating with identities ......................................................................................... 2606
Managing access using policies ....................................................................................... 2609
How Amazon RDS works with IAM .................................................................................. 2610
Identity-based policy examples ....................................................................................... 2616
AWS managed policies .................................................................................................. 2628
Policy updates .............................................................................................................. 2632
Cross-service confused deputy prevention ........................................................................ 2640
IAM database authentication .......................................................................................... 2642
Troubleshooting ............................................................................................................ 2670
Logging and monitoring ........................................................................................................ 2672
Compliance validation ........................................................................................................... 2674
Resilience ............................................................................................................................. 2675
Backup and restore ....................................................................................................... 2675
Replication ................................................................................................................... 2675
Failover ....................................................................................................................... 2675
Infrastructure security ........................................................................................................... 2676
Security groups ............................................................................................................ 2676
Public accessibility ........................................................................................................ 2676
VPC endpoints (AWS PrivateLink) ............................................................................................ 2677
Considerations .............................................................................................................. 2677
Availability ................................................................................................................... 2677
Creating an interface VPC endpoint ................................................................................ 2678
Creating a VPC endpoint policy ...................................................................................... 2678
Security best practices ........................................................................................................... 2679
Controlling access with security groups ................................................................................... 2680
Overview of VPC security groups .................................................................................... 2680
Security group scenario ................................................................................................. 2680
Creating a VPC security group ........................................................................................ 2682
Associating with a DB instance ....................................................................................... 2682
Master user account privileges ................................................................................................ 2682
Service-linked roles ............................................................................................................... 2684
Service-linked role permissions for Amazon RDS ............................................................... 2684
Service-linked role permissions for Amazon RDS Custom ................................................... 2686
Using Amazon RDS with Amazon VPC ..................................................................................... 2688
Working with a DB instance in a VPC .............................................................................. 2688
Updating the VPC for a DB instance ................................................................................ 2700
Scenarios for accessing a DB instance in a VPC ................................................................. 2701
Tutorial: Create a VPC for use with a DB instance (IPv4 only) .............................................. 2706
Tutorial: Create a VPC for use with a DB instance (dual-stack mode) .................................... 2711
Moving a DB instance into a VPC .................................................................................... 2718
Quotas and constraints .................................................................................................................. 2720
Quotas in Amazon RDS ......................................................................................................... 2720

Naming constraints in Amazon RDS ........................................................................................ 2723


Maximum number of database connections ............................................................................. 2724
File size limits in Amazon RDS ................................................................................................ 2726
Troubleshooting ............................................................................................................................ 2727
Can't connect to DB instance ................................................................................................. 2727
Testing the DB instance connection ................................................................................. 2728
Troubleshooting connection authentication ...................................................................... 2729
Security issues ...................................................................................................................... 2729
Error message "failed to retrieve account attributes, certain console functions may be
impaired." .................................................................................................................... 2729
Troubleshooting incompatible-network state ............................................................................ 2730
Causes ......................................................................................................................... 2730
Resolution .................................................................................................................... 2730
Resetting the DB instance owner password .............................................................................. 2731
DB instance outage or reboot ................................................................................................. 2731
Parameter changes not taking effect ....................................................................................... 2732
DB instance out of storage .................................................................................................... 2732
Insufficient DB instance capacity ............................................................................................. 2733
RDS freeable memory issues .................................................................................................. 2734
MySQL and MariaDB issues .................................................................................................... 2734
Maximum MySQL and MariaDB connections ..................................................................... 2734
Diagnosing and resolving incompatible parameters status for a memory limit ....................... 2735
Diagnosing and resolving lag between read replicas .......................................................... 2736
Diagnosing and resolving a MySQL or MariaDB read replication failure ................................. 2737
Creating triggers with binary logging enabled requires SUPER privilege ............................... 2738
Diagnosing and resolving point-in-time restore failures ..................................................... 2740
Replication stopped error .............................................................................................. 2740
Read replica create fails or replication breaks with fatal error 1236 ..................................... 2741
Can't set backup retention period to 0 .................................................................................... 2741
Amazon RDS API reference ............................................................................................................ 2742
Using the Query API ............................................................................................................. 2742
Query parameters ......................................................................................................... 2742
Query request authentication ......................................................................................... 2742
Troubleshooting applications .................................................................................................. 2742
Retrieving errors ........................................................................................................... 2743
Troubleshooting tips ..................................................................................................... 2743
Document history ......................................................................................................................... 2744
Earlier updates ..................................................................................................................... 2812
AWS glossary ............................................................................................................................... 2830

What is Amazon Relational Database Service (Amazon RDS)?

Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up,
operate, and scale a relational database in the AWS Cloud. It provides cost-efficient, resizable capacity
for an industry-standard relational database and manages common database administration tasks.
Note
This guide covers Amazon RDS database engines other than Amazon Aurora. For information
about using Amazon Aurora, see the Amazon Aurora User Guide.

If you are new to AWS products and services, begin learning more with the following resources:

• For an overview of all AWS products, see What is cloud computing?
• Amazon Web Services provides a number of database services. To learn more about the variety of
database options available on AWS, see Choosing an AWS database service and Running databases on
AWS.

Overview of Amazon RDS


Why do you want to run a relational database in the AWS Cloud? Because AWS takes over many of the
difficult and tedious management tasks of a relational database.

Topics
• Amazon EC2 and on-premises databases (p. 1)
• Amazon RDS and Amazon EC2 (p. 2)
• Amazon RDS Custom for Oracle and Microsoft SQL Server (p. 3)
• Amazon RDS on AWS Outposts (p. 3)

Amazon EC2 and on-premises databases


Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud.
Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy
applications faster.

When you buy an on-premises server, you get CPU, memory, storage, and IOPS, all bundled together.
With Amazon EC2, these are split apart so that you can scale them independently. If you need more CPU,
less IOPS, or more storage, you can easily allocate them.

For a relational database in an on-premises server, you assume full responsibility for the server,
operating system, and software. For a database on an Amazon EC2 instance, AWS manages the layers
below the operating system. In this way, Amazon EC2 eliminates some of the burden of managing an on-
premises database server.

In the following table, you can find a comparison of the management models for on-premises databases
and Amazon EC2.


Feature                            On-premises management     Amazon EC2 management

Application optimization           Customer                   Customer
Scaling                            Customer                   Customer
High availability                  Customer                   Customer
Database backups                   Customer                   Customer
Database software patching         Customer                   Customer
Database software install          Customer                   Customer
Operating system (OS) patching     Customer                   Customer
OS installation                    Customer                   Customer
Server maintenance                 Customer                   AWS
Hardware lifecycle                 Customer                   AWS
Power, network, and cooling        Customer                   AWS

Amazon EC2 isn't a fully managed service. Thus, when you run a database on Amazon EC2, you're
more prone to user errors. For example, when you update the operating system or database software
manually, you might accidentally cause application downtime. You might spend hours checking every
change to identify and fix an issue.

Amazon RDS and Amazon EC2


Amazon RDS is a managed database service. It's responsible for most management tasks. By eliminating
tedious manual tasks, Amazon RDS frees you to focus on your application and your users. We
recommend Amazon RDS over Amazon EC2 as your default choice for most database deployments.

In the following table, you can find a comparison of the management models in Amazon EC2 and
Amazon RDS.

Feature                            Amazon EC2 management      Amazon RDS management

Application optimization           Customer                   Customer
Scaling                            Customer                   AWS
High availability                  Customer                   AWS
Database backups                   Customer                   AWS
Database software patching         Customer                   AWS
Database software install          Customer                   AWS
OS patching                        Customer                   AWS
OS installation                    Customer                   AWS
Server maintenance                 AWS                        AWS
Hardware lifecycle                 AWS                        AWS
Power, network, and cooling        AWS                        AWS

Amazon RDS provides the following specific advantages over database deployments that aren't fully
managed:

• You can use the database products you are already familiar with: MariaDB, Microsoft SQL Server,
MySQL, Oracle, and PostgreSQL.
• Amazon RDS manages backups, software patching, automatic failure detection, and recovery.
• You can turn on automated backups, or manually create your own backup snapshots. You can use
these backups to restore a database. The Amazon RDS restore process works reliably and efficiently.
• You can get high availability with a primary instance and a synchronous secondary instance that you
can fail over to when problems occur. You can also use read replicas to increase read scaling.
• In addition to the security in your database package, you can help control who can access your RDS
databases. To do so, you can use AWS Identity and Access Management (IAM) to define users and
permissions. You can also help protect your databases by putting them in a virtual private cloud (VPC).

Amazon RDS Custom for Oracle and Microsoft SQL Server

Amazon RDS Custom is an RDS management type that gives you full access to your database and
operating system.

You can use the control capabilities of RDS Custom to access and customize the database environment
and operating system for legacy and packaged business applications. Meanwhile, Amazon RDS
automates database administration tasks and operations.

In this deployment model, you can install applications and change configuration settings to suit your
applications. At the same time, you can offload database administration tasks such as provisioning,
scaling, upgrading, and backup to AWS. You can take advantage of the database management benefits
of Amazon RDS, with more control and flexibility.

For Oracle Database and Microsoft SQL Server, RDS Custom combines the automation of Amazon RDS
with the flexibility of Amazon EC2. For more information on RDS Custom, see Working with Amazon RDS
Custom (p. 978).

With the shared responsibility model of RDS Custom, you get more control than in Amazon RDS, but also
more responsibility. For more information, see Shared responsibility model in RDS Custom (p. 979).

Amazon RDS on AWS Outposts


Amazon RDS on AWS Outposts extends RDS for SQL Server, RDS for MySQL, and RDS for PostgreSQL
databases to AWS Outposts environments. AWS Outposts uses the same hardware as in public AWS
Regions to bring AWS services, infrastructure, and operation models on-premises. With RDS on Outposts,
you can provision managed DB instances close to the business applications that must run on-premises.
For more information, see Working with Amazon RDS on AWS Outposts (p. 1177).

DB instances
A DB instance is an isolated database environment in the AWS Cloud. The basic building block of Amazon
RDS is the DB instance.


Your DB instance can contain one or more user-created databases. You can access your DB instance by
using the same tools and applications that you use with a standalone database instance. You can create
and modify a DB instance by using the AWS Command Line Interface (AWS CLI), the Amazon RDS API, or
the AWS Management Console.
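
For example, the following AWS CLI command sketches how you might create a PostgreSQL DB instance. The
identifier, credentials, and sizing values are illustrative placeholders only; adjust them to your own
requirements and AWS Region.

    # Create a small PostgreSQL DB instance (placeholder values)
    aws rds create-db-instance \
        --db-instance-identifier mydbinstance \
        --db-instance-class db.m6g.large \
        --engine postgres \
        --master-username postgres \
        --master-user-password 'choose-a-strong-password' \
        --allocated-storage 20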

DB engines
A DB engine is the specific relational database software that runs on your DB instance. Amazon RDS
currently supports the following engines:

• MariaDB
• Microsoft SQL Server
• MySQL
• Oracle
• PostgreSQL

Each DB engine has its own supported features, and each version of a DB engine can include specific
features. Support for Amazon RDS features varies across AWS Regions and specific versions of each DB
engine. To check feature support in different engine versions and Regions, see Supported features in
Amazon RDS by AWS Region and DB engine (p. 116).
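
As a quick illustration, you can list the engine versions currently offered for a DB engine with the AWS
CLI. The engine name shown here (postgres) is only an example; the versions returned vary by AWS Region.

    # List available PostgreSQL engine versions in the current Region
    aws rds describe-db-engine-versions \
        --engine postgres \
        --query "DBEngineVersions[].EngineVersion" \
        --output text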

Additionally, each DB engine has a set of parameters in a DB parameter group that control the behavior
of the databases that it manages.

DB instance classes
A DB instance class determines the computation and memory capacity of a DB instance. A DB instance
class consists of both the DB instance type and the size. Each instance type offers different compute,
memory, and storage capabilities. For example, db.m6g is a general-purpose DB instance type powered
by AWS Graviton2 processors. Within the db.m6g instance type, db.m6g.2xlarge is a DB instance class.

You can select the DB instance that best meets your needs. If your needs change over time, you can
change DB instances. For information, see DB instance classes (p. 11).
Note
For pricing information on DB instance classes, see the Pricing section of the Amazon RDS
product page.
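
If your needs change, one way to move to a different DB instance class is with the AWS CLI, as sketched
below. The instance identifier and target class are placeholders. A class change can cause a short
outage, so review the modification behavior for your instance before applying it immediately.

    # Change the DB instance class (placeholder identifier and class)
    aws rds modify-db-instance \
        --db-instance-identifier mydbinstance \
        --db-instance-class db.r6g.large \
        --apply-immediately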

DB instance storage
Amazon EBS provides durable, block-level storage volumes that you can attach to a running instance. DB
instance storage comes in the following types:

• General Purpose (SSD)
• Provisioned IOPS (PIOPS)
• Magnetic

The storage types differ in performance characteristics and price. You can tailor your storage
performance and cost to the needs of your database.

Each DB instance has minimum and maximum storage requirements depending on the storage type and
the database engine it supports. It's important to have sufficient storage so that your databases have
room to grow. Also, sufficient storage makes sure that features for the DB engine have room to write
content or log entries. For more information, see Amazon RDS DB instance storage (p. 101).
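
As an illustration, the following AWS CLI command switches a DB instance to Provisioned IOPS storage and
increases its allocated storage. The identifier and values are placeholders; check the storage limits for
your engine and storage type before applying a change like this.

    # Move to Provisioned IOPS (io1) storage with more space (placeholder values)
    aws rds modify-db-instance \
        --db-instance-identifier mydbinstance \
        --storage-type io1 \
        --iops 3000 \
        --allocated-storage 200 \
        --apply-immediately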


Amazon Virtual Private Cloud (Amazon VPC)


You can run a DB instance on a virtual private cloud (VPC) using the Amazon Virtual Private Cloud
(Amazon VPC) service. When you use a VPC, you have control over your virtual networking environment.
You can choose your own IP address range, create subnets, and configure routing and access control lists.
The basic functionality of Amazon RDS is the same whether it's running in a VPC or not. Amazon RDS
manages backups, software patching, automatic failure detection, and recovery. There's no additional
cost to run your DB instance in a VPC. For more information on using Amazon VPC with RDS, see Using
Amazon RDS with Amazon VPC (p. 2688).

Amazon RDS uses Network Time Protocol (NTP) to synchronize the time on DB instances.

AWS Regions and Availability Zones


Amazon cloud computing resources are housed in highly available data center facilities in different areas
of the world (for example, North America, Europe, or Asia). Each data center location is called an AWS
Region.

Each AWS Region contains multiple distinct locations called Availability Zones, or AZs. Each Availability
Zone is engineered to be isolated from failures in other Availability Zones. Each is engineered to provide
inexpensive, low-latency network connectivity to other Availability Zones in the same AWS Region. By
launching instances in separate Availability Zones, you can protect your applications from the failure of a
single location. For more information, see Regions, Availability Zones, and Local Zones (p. 110).

You can run your DB instance in several Availability Zones, an option called a Multi-AZ deployment.
When you choose this option, Amazon RDS automatically provisions and maintains one or more secondary
standby DB instances in a different Availability Zone. Your primary DB instance is replicated across
Availability Zones to each secondary DB instance. This approach helps provide data redundancy and
failover support, eliminate I/O freezes, and minimize latency spikes during system backups. In a Multi-AZ
DB cluster deployment, the secondary DB instances can also serve read traffic. For more information,
see Configuring and managing a Multi-AZ deployment (p. 492).
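
For example, you can convert an existing DB instance to a Multi-AZ deployment with the AWS CLI, as in the
sketch below. The identifier is a placeholder, and the conversion can take some time to complete.

    # Convert a DB instance to a Multi-AZ deployment (placeholder identifier)
    aws rds modify-db-instance \
        --db-instance-identifier mydbinstance \
        --multi-az \
        --apply-immediately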

Security
A security group controls access to a DB instance. It does so by allowing access to IP address ranges or
Amazon EC2 instances that you specify.

For more information about security groups, see Security in Amazon RDS (p. 2565).
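
For example, if your DB instance runs in a VPC, you typically allow inbound traffic on the database port
in the instance's VPC security group. The following AWS CLI sketch uses placeholder values for the
security group ID, the port (5432 for PostgreSQL), and the client address range.

    # Allow inbound PostgreSQL traffic from a specific address range (placeholder values)
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp \
        --port 5432 \
        --cidr 203.0.113.0/24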

Amazon RDS monitoring


There are several ways that you can track the performance and health of a DB instance. You can use
the Amazon CloudWatch service to monitor DB instance metrics. CloudWatch
performance charts are shown in the Amazon RDS console. You can also subscribe to Amazon RDS
events to be notified about changes to a DB instance, DB snapshot, or DB parameter group. For more
information, see Monitoring metrics in an Amazon RDS instance (p. 678).
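
As a brief example, the following AWS CLI command retrieves a CloudWatch metric for a DB instance. The
instance identifier and time window are placeholders; Amazon RDS publishes its metrics in the AWS/RDS
namespace.

    # Retrieve average CPU utilization for a DB instance over one hour (placeholder values)
    aws cloudwatch get-metric-statistics \
        --namespace AWS/RDS \
        --metric-name CPUUtilization \
        --dimensions Name=DBInstanceIdentifier,Value=mydbinstance \
        --start-time 2023-06-01T00:00:00Z \
        --end-time 2023-06-01T01:00:00Z \
        --period 300 \
        --statistics Average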

How to work with Amazon RDS


There are several ways that you can interact with Amazon RDS.


AWS Management Console


The AWS Management Console is a simple web-based user interface. You can manage your DB instances
from the console with no programming required. To access the Amazon RDS console, sign in to the AWS
Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.

Command line interface


You can use the AWS Command Line Interface (AWS CLI) to access the Amazon RDS API interactively. To
install the AWS CLI, see Installing the AWS Command Line Interface. To begin using the AWS CLI for RDS,
see AWS Command Line Interface reference for Amazon RDS.
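
After you configure the AWS CLI, a simple way to confirm that it can reach Amazon RDS is to list your DB
instance identifiers, as in the following example.

    # List the identifiers of the DB instances in the current Region
    aws rds describe-db-instances \
        --query "DBInstances[].DBInstanceIdentifier" \
        --output table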

Amazon RDS APIs


If you are a developer, you can access Amazon RDS programmatically using its APIs. For more
information, see Amazon RDS API reference (p. 2742).

For application development, we recommend that you use one of the AWS Software Development Kits
(SDKs). The AWS SDKs handle low-level details such as authentication, retry logic, and error handling, so
that you can focus on your application logic. AWS SDKs are available for a wide variety of languages. For
more information, see Tools for Amazon Web Services.

AWS also provides libraries, sample code, tutorials, and other resources to help you get started more
easily. For more information, see Sample code & libraries.

How you are charged for Amazon RDS


When you use Amazon RDS, you can choose to use on-demand DB instances or reserved DB instances.
For more information, see DB instance billing for Amazon RDS (p. 163).

For Amazon RDS pricing information, see the Amazon RDS product page.

What's next?
The preceding section introduced you to the basic infrastructure components that RDS offers. What
should you do next?

Getting started
Create a DB instance using instructions in Getting started with Amazon RDS (p. 180).

Topics specific to database engines


You can review information specific to a particular DB engine in the following sections:

• Amazon RDS for MariaDB (p. 1255)
• Amazon RDS for Microsoft SQL Server (p. 1354)
• Amazon RDS for MySQL (p. 1622)
• Amazon RDS for Oracle (p. 1785)
• Amazon RDS for PostgreSQL (p. 2147)


Amazon RDS shared responsibility model


Amazon RDS is responsible for hosting the software components and infrastructure of DB instances and
DB clusters. You are responsible for query tuning, which is the process of adjusting SQL queries to improve
performance. Query performance is highly dependent on database design, data size, data distribution,
application workload, and query patterns, which can vary greatly. Monitoring and tuning are highly
individualized processes that you own for your RDS databases. You can use Amazon RDS Performance
Insights and other tools to identify problematic queries.


Amazon RDS DB instances


A DB instance is an isolated database environment running in the cloud. It is the basic building block of
Amazon RDS. A DB instance can contain multiple user-created databases, and can be accessed using the
same client tools and applications you might use to access a standalone database instance. DB instances
are simple to create and modify with the AWS command line tools, Amazon RDS API operations, or the
AWS Management Console.
Note
Amazon RDS supports access to databases using any standard SQL client application. Amazon
RDS does not allow direct host access.

You can have up to 40 Amazon RDS DB instances, with the following limitations:

• 10 for each SQL Server edition (Enterprise, Standard, Web, and Express) under the "license-included"
model
• 10 for Oracle under the "license-included" model
• 40 for MySQL, MariaDB, or PostgreSQL
• 40 for Oracle under the "bring-your-own-license" (BYOL) licensing model

Note
If your application requires more DB instances, you can request additional DB instances by using
this form.

Each DB instance has a DB instance identifier. This customer-supplied name uniquely identifies the DB
instance when interacting with the Amazon RDS API and AWS CLI commands. The DB instance identifier
must be unique for that customer in an AWS Region.

The DB instance identifier forms part of the DNS hostname allocated to your instance by RDS.
For example, if you specify db1 as the DB instance identifier, then RDS will automatically
allocate a DNS endpoint for your instance. An example endpoint is db1.abcdefghijkl.us-
east-1.rds.amazonaws.com, where db1 is your instance ID.

In the example endpoint db1.abcdefghijkl.us-east-1.rds.amazonaws.com, the string
abcdefghijkl is a unique identifier for a specific combination of AWS Region and AWS account. The
identifier abcdefghijkl in the example is internally generated by RDS and doesn't change for the
specified combination of Region and account. Thus, all your DB instances in this Region share the same
fixed identifier. Consider the following features of the fixed identifier:

• If you rename your DB instance, the endpoint is different but the fixed identifier is the same.
For example, if you rename db1 to renamed-db1, the new instance endpoint is renamed-
db1.abcdefghijkl.us-east-1.rds.amazonaws.com.
• If you delete and re-create a DB instance with the same DB instance identifier, the endpoint is the
same.
• If you use the same account to create a DB instance in a different Region, the internally
generated identifier is different because the Region is different, as in db2.mnopqrstuvwx.us-
west-1.rds.amazonaws.com.
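
To look up the endpoint that RDS allocated for an instance, you can query it with the AWS CLI, as shown
in this sketch. The identifier db1 matches the example above; substitute your own instance identifier.

    # Retrieve the DNS endpoint address for the DB instance db1
    aws rds describe-db-instances \
        --db-instance-identifier db1 \
        --query "DBInstances[0].Endpoint.Address" \
        --output text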

Each DB instance supports a database engine. Amazon RDS currently supports MySQL, MariaDB,
PostgreSQL, Oracle, Microsoft SQL Server, and Amazon Aurora database engines.

When creating a DB instance, some database engines require that a database name be specified. A DB
instance can host multiple databases, or a single Oracle database with multiple schemas. The database
name value depends on the database engine:


• For the MySQL and MariaDB database engines, the database name is the name of a database hosted
in your DB instance. Databases hosted by the same DB instance must have a unique name within that
instance.
• For the Oracle database engine, database name is used to set the value of ORACLE_SID, which must be
supplied when connecting to the Oracle RDS instance.
• For the Microsoft SQL Server database engine, database name is not a supported parameter.
• For the PostgreSQL database engine, the database name is the name of a database hosted in your DB
instance. A database name is not required when creating a DB instance. Databases hosted by the same
DB instance must have a unique name within that instance.

Amazon RDS creates a master user account for your DB instance as part of the creation process. This
master user has permissions to create databases and to perform create, delete, select, update, and insert
operations on tables the master user creates. You must set the master user password when you create
a DB instance, but you can change it at any time using the AWS CLI, Amazon RDS API operations, or the
AWS Management Console. You can also change the master user password and manage users using
standard SQL commands.
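
For example, the following AWS CLI sketch changes the master user password. The identifier and password
are placeholders; treat the new password as sensitive and avoid leaving it in your shell history.

    # Change the master user password for a DB instance (placeholder values)
    aws rds modify-db-instance \
        --db-instance-identifier mydbinstance \
        --master-user-password 'choose-a-new-strong-password' \
        --apply-immediately
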
Note
This guide covers non-Aurora Amazon RDS database engines. For information about using
Amazon Aurora, see the Amazon Aurora User Guide.


DB instance classes
The DB instance class determines the computation and memory capacity of an Amazon RDS DB instance.
The DB instance class that you need depends on your processing power and memory requirements.

A DB instance class consists of both the DB instance class type and the size. For example, db.r6g is a
memory-optimized DB instance class type powered by AWS Graviton2 processors. Within the db.r6g
instance class type, db.r6g.2xlarge is a DB instance class. The size of this class is 2xlarge.

For more information about instance class pricing, see Amazon RDS pricing.
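
As a quick way to check whether a particular DB instance class is offered for your engine in the current
AWS Region, you can query the orderable instance options with the AWS CLI. The engine and class shown
here are examples only; an empty result means that combination isn't available in that Region.

    # Check availability of a DB instance class for an engine in the current Region (example values)
    aws rds describe-orderable-db-instance-options \
        --engine postgres \
        --db-instance-class db.r6g.2xlarge \
        --query "OrderableDBInstanceOptions[].[EngineVersion,StorageType]" \
        --output table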

Topics
• DB instance class types (p. 11)
• Supported DB engines for DB instance classes (p. 14)
• Determining DB instance class support in AWS Regions (p. 68)
• Changing your DB instance class (p. 71)
• Configuring the processor for a DB instance class in RDS for Oracle (p. 71)
• Hardware specifications for DB instance classes (p. 87)

DB instance class types


Amazon RDS supports DB instance classes for the following use cases:

• General-purpose (p. 11)
• Memory-optimized (p. 12)
• Burstable-performance (p. 14)

For more information about Amazon EC2 instance types, see Instance types in the Amazon EC2
documentation.

General-purpose instance class types


The following general-purpose DB instance class types are available:

• db.m7g – General-purpose DB instance classes powered by AWS Graviton3 processors. These instance
classes deliver balanced compute, memory, and networking for a broad range of general-purpose
workloads.

You can modify a DB instance to use one of the DB instance classes powered by AWS Graviton3
processors. To do so, complete the same steps as with any other DB instance modification.
• db.m6g – General-purpose DB instance classes powered by AWS Graviton2 processors. These instances
deliver balanced compute, memory, and networking for a broad range of general-purpose workloads.
The db.m6gd instance classes have local NVMe-based SSD block-level storage for applications that
need high-speed, low latency local storage.

You can modify a DB instance to use one of the DB instance classes powered by AWS Graviton2
processors. To do so, complete the same steps as with any other DB instance modification.
• db.m6i – General-purpose DB instance classes powered by 3rd Generation Intel Xeon Scalable
processors. These instances are SAP Certified and ideal for workloads such as backend servers
supporting enterprise applications, gaming servers, caching fleets, and application development
environments. The db.m6id instance classes offer up to 7.6 TB of local NVMe-based SSD storage,
whereas db.m6i offers EBS-only storage.
• db.m5 – General-purpose DB instance classes that provide a balance of compute, memory, and
network resources, and are a good choice for many applications. The db.m5d instance class offers
NVMe-based SSD storage that is physically connected to the host server. The db.m5 instance classes
provide more computing capacity than the previous db.m4 instance classes. They are powered by the
AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor.
• db.m4 – General-purpose DB instance classes that provide more computing capacity than the previous
db.m3 instance classes.

For the RDS for Oracle DB engines, Amazon RDS no longer supports db.m4 DB instance classes. If you
had previously created RDS for Oracle db.m4 DB instances, Amazon RDS automatically upgrades those
DB instances to equivalent db.m5 DB instance classes.
• db.m3 – General-purpose DB instance classes that provide more computing capacity than the previous
db.m1 instance classes.

For the RDS for MariaDB, RDS for MySQL, and RDS for PostgreSQL DB engines, Amazon RDS has
started the end-of-life process for db.m3 DB instance classes using the following schedule, which
includes upgrade recommendations. For all RDS DB instances that use db.m3 DB instance classes, we
recommend that you upgrade to a db.m5 DB instance class as soon as possible.

Action or recommendation                                                           Dates

You can no longer create RDS DB instances that use db.m3 DB instance classes.      Now

Amazon RDS started automatic upgrades of RDS DB instances that use db.m3           February 1, 2023
DB instance classes to equivalent db.m5 DB instance classes.

Memory-optimized instance class types


The memory-optimized Z family supports the following instance class type:

• db.z1d – Instance classes optimized for memory-intensive applications. These instance classes offer
both high compute capacity and a high memory footprint. High frequency z1d instances deliver a
sustained all-core frequency of up to 4.0 GHz.

The memory-optimized X family supports the following instance class types:

• db.x2g – Instance classes optimized for memory-intensive applications and powered by AWS
Graviton2 processors. These instance classes offer low cost per GiB of memory.

You can modify a DB instance to use one of the DB instance classes powered by AWS Graviton2
processors. To do so, complete the same steps as with any other DB instance modification.
• db.x2i – Instance classes optimized for memory-intensive applications. The db.x2iedn and db.x2idn
classes are powered by third-generation Intel Xeon Scalable processors (Ice Lake). They include up
to 3.8 TB of local NVMe SSD storage, up to 100 Gbps of networking bandwidth, and up to 4 TiB
(db.x2iedn) or 2 TiB (db.x2idn) of memory. The db.x2iezn class is powered by second-generation Intel
Xeon Scalable processors (Cascade Lake) with an all-core turbo frequency of up to 4.5 GHz and up to
1.5 TiB of memory.
• db.x1 – Instance classes optimized for memory-intensive applications. These instance classes offer one
of the lowest price per GiB of RAM among the DB instance classes and up to 1,952 GiB of DRAM-based
instance memory. The db.x1e type offers up to 3,904 GiB of DRAM-based instance memory.


The memory-optimized R family supports the following instance class types:

• db.r7g – Instance classes powered by AWS Graviton3 processors. These instance classes are ideal for
running memory-intensive workloads in open-source databases such as MySQL and PostgreSQL.

You can modify a DB instance to use one of the DB instance classes powered by AWS Graviton3
processors. To do so, complete the same steps as with any other DB instance modification.
• db.r6g – Instance classes powered by AWS Graviton2 processors. These instance classes are ideal for
running memory-intensive workloads in open-source databases such as MySQL and PostgreSQL. The
db.r6gd type offers local NVMe-based SSD block-level storage for applications that need high-speed,
low latency local storage.

You can modify a DB instance to use one of the DB instance classes powered by AWS Graviton2
processors. To do so, complete the same steps as with any other DB instance modification.
• db.r6i – Instance classes powered by 3rd Generation Intel Xeon Scalable processors. These instances
are SAP-Certified and are an ideal fit for memory-intensive workloads in open-source databases such
as MySQL and PostgreSQL. The db.r6id instance class type has a memory-to-vCPU ratio of 8:1 and a
maximum memory of 1 TiB. The db.r6id instance class type offers up to 7.6 TB of local NVMe-based
SSD storage, whereas the db.r6i class type offers EBS-only storage.
• db.r5b – Instance classes that are memory-optimized for throughput-intensive applications. Powered
by the AWS Nitro System, db.r5b instances deliver up to 60 Gbps bandwidth and 260,000 IOPS of EBS
performance. This is the fastest block storage performance on EC2.
• db.r5d – Instance classes that are optimized for low latency, very high random I/O performance, and
high sequential read throughput.
• db.r5 – Instance classes optimized for memory-intensive applications. These instance classes offer
improved networking performance. They are powered by the AWS Nitro System, a combination of
dedicated hardware and lightweight hypervisor.
• db.r4 – Instance classes that provide improved networking over previous db.r3 instance classes.

For the RDS for Oracle DB engines, Amazon RDS has started the end-of-life process for db.r4 DB
instance classes using the following schedule, which includes upgrade recommendations. For RDS
for Oracle DB instances that use db.r4 instance classes, we recommend that you upgrade to a db.r5
instance class as soon as possible.

Action or recommendation                                                           Dates

You can no longer create RDS for Oracle DB instances that use db.r4 DB             Now
instance classes.

Amazon RDS started automatic upgrades of RDS for Oracle DB instances that use      April 17, 2023
db.r4 DB instance classes to equivalent db.r5 DB instance classes.

• db.r3 – Instance classes that provide memory optimization.

For the RDS for MariaDB, RDS for MySQL, and RDS for PostgreSQL DB engines, Amazon RDS has
started the end-of-life process for db.r3 DB instance classes using the following schedule, which
includes upgrade recommendations. For all RDS DB instances that use db.r3 DB instance classes, we
recommend that you upgrade to a db.r5 DB instance class as soon as possible.

Action or recommendation: You can no longer create RDS DB instances that use db.r3 DB instance classes.
Date: Now

Action or recommendation: Amazon RDS started automatic upgrades of RDS DB instances that use db.r3 DB instance classes to equivalent db.r5 DB instance classes.
Date: February 1, 2023

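As noted for the Graviton classes above, you change the instance class the same way as any other DB instance modification. The following minimal sketch, which assumes the AWS SDK for Python (Boto3) and a placeholder DB instance identifier, shows the ModifyDBInstance call that requests a Graviton3 instance class; the same call applies to the Graviton2 classes described in this list.

import boto3

rds = boto3.client("rds")

# Request the new DB instance class. ApplyImmediately=True applies the change now
# instead of waiting for the next maintenance window.
rds.modify_db_instance(
    DBInstanceIdentifier="mydbinstance",   # placeholder instance identifier
    DBInstanceClass="db.r7g.large",        # placeholder target Graviton3 class
    ApplyImmediately=True,
)
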
Burstable-performance instance class types


The following burstable-performance DB instance class types are available:

• db.t4g – General-purpose instance classes powered by Arm-based AWS Graviton2 processors. These
instance classes deliver better price performance than previous burstable-performance DB instance
classes for a broad set of burstable general-purpose workloads. Amazon RDS db.t4g instances are
configured for Unlimited mode. This means that they can burst beyond the baseline over a 24-hour
window for an additional charge.

You can modify a DB instance to use one of the DB instance classes powered by AWS Graviton2
processors. To do so, complete the same steps as with any other DB instance modification.
• db.t3 – Instance classes that provide a baseline performance level, with the ability to burst to full
CPU usage. The db.t3 instances are configured for Unlimited mode. These instance classes provide
more computing capacity than the previous db.t2 instance classes. They are powered by the AWS Nitro
System, a combination of dedicated hardware and lightweight hypervisor.
• db.t2 – Instance classes that provide a baseline performance level, with the ability to burst to full CPU
usage. We recommend using these instance classes only for development and test servers, or other
non-production servers.

Note
The DB instance classes that use the AWS Nitro System (db.m5, db.r5, db.t3) are throttled on
combined read plus write workload.

For DB instance class hardware specifications, see Hardware specifications for DB instance
classes (p. 87).

Supported DB engines for DB instance classes


The following are DB engine–specific considerations for DB instance classes:

Microsoft SQL Server

DB instance class support varies according to the version and edition of SQL Server. For
instance class support by version and edition, see DB instance class support for Microsoft SQL
Server (p. 1358).
Oracle

DB instance class support varies according to the Oracle Database version and edition. RDS for
Oracle supports additional memory-optimized instance classes. These classes have names of the
form db.r5.instance_size.tpcthreads_per_core.memratio, for example db.r5.4xlarge.tpc2.mem3x. For the vCPU count and memory
allocation for each optimized class, see Supported RDS for Oracle instance classes (p. 1797).
RDS Custom

For information about the DB instance classes supported in RDS Custom, see DB instance class
support for RDS Custom for Oracle (p. 999) and DB instance class support for RDS Custom for SQL
Server (p. 1089).


In the following list, you can find details about supported Amazon RDS DB instance classes for each
Amazon RDS DB engine. The entry for each engine contains one of the following values:

Yes

The instance class is supported for all versions of the DB engine.

No

The instance class isn't supported for the DB engine.

specific-versions

The instance class is supported only for the specified database versions of the DB engine.

Amazon RDS periodically deprecates major and minor versions. For information about current
supported versions, see topics for the individual DB engines: MariaDB versions (p. 1265), Microsoft
SQL Server versions (p. 1362), MySQL versions (p. 1627), Oracle versions (p. 1789), and PostgreSQL
versions (p. 2154).
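
Instance class availability also varies by AWS Region. The following minimal sketch, which assumes the AWS SDK for Python (Boto3) and placeholder engine, version, and instance class values, calls the DescribeOrderableDBInstanceOptions operation to confirm whether a combination can be ordered in the current Region.

import boto3

rds = boto3.client("rds")

# List orderable options for one engine, version, and instance class combination.
response = rds.describe_orderable_db_instance_options(
    Engine="postgres",               # engine name as the RDS API expects it
    EngineVersion="15.2",            # placeholder engine version
    DBInstanceClass="db.m7g.large",  # placeholder instance class
    MaxRecords=20,
)

# An empty list means the combination isn't offered in this Region.
for option in response["OrderableDBInstanceOptions"]:
    print(option["DBInstanceClass"], option["EngineVersion"], option["StorageType"])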

db.m7g – general-purpose instance classes powered by AWS Graviton3 processors

Instance classes: db.m7g.16xlarge, db.m7g.12xlarge, db.m7g.8xlarge, db.m7g.4xlarge, db.m7g.2xlarge, db.m7g.xlarge, and db.m7g.large
• MariaDB: All MariaDB 10.11 versions, MariaDB 10.6.10 and higher 10.6 versions, MariaDB 10.5.17 and higher 10.5 versions, and MariaDB 10.4.26 and higher 10.4 versions
• Microsoft SQL Server: No
• MySQL: MySQL 8.0.28 and higher
• Oracle: No
• PostgreSQL: PostgreSQL 15.2 and higher 15 versions, PostgreSQL 14.5 and higher 14 versions, and PostgreSQL 13.4 and higher 13 versions

db.m6g – general-purpose instance classes powered by AWS Graviton2 processors

Instance classes: db.m6g.16xlarge, db.m6g.12xlarge, db.m6g.8xlarge, db.m6g.4xlarge, db.m6g.2xlarge, db.m6g.xlarge, and db.m6g.large
• MariaDB: All MariaDB 10.11 versions, all MariaDB 10.6 versions, all MariaDB 10.5 versions, and all MariaDB 10.4 versions
• Microsoft SQL Server: No
• MySQL: MySQL 8.0.23 and higher
• Oracle: No
• PostgreSQL: All PostgreSQL 15 versions, all PostgreSQL 14 versions, all PostgreSQL 13 versions, and PostgreSQL 12.7 and higher

db.m6gd – general-purpose instance classes powered by AWS Graviton2 processors

Instance classes: db.m6gd.16xlarge, db.m6gd.12xlarge, db.m6gd.8xlarge, db.m6gd.4xlarge, db.m6gd.2xlarge, db.m6gd.xlarge, and db.m6gd.large
• MariaDB: All MariaDB 10.11 versions, MariaDB 10.6.7 and higher 10.6 versions, MariaDB 10.5.16 and higher 10.5 versions, and MariaDB 10.4.25 and higher 10.4 versions
• Microsoft SQL Server: No
• MySQL: MySQL 8.0.28 and higher
• Oracle: No
• PostgreSQL: All PostgreSQL 15 versions, all PostgreSQL 14 versions, PostgreSQL 13.4, and PostgreSQL 13.7 and higher 13 versions

db.m6id – general-purpose instance classes powered by 3rd generation Intel Xeon Scalable processors

Instance classes: db.m6id.32xlarge, db.m6id.24xlarge, db.m6id.16xlarge, db.m6id.12xlarge, db.m6id.8xlarge, db.m6id.4xlarge, db.m6id.2xlarge, db.m6id.xlarge, and db.m6id.large
• MariaDB: MariaDB version 10.6.10 and higher 10.6 versions, MariaDB version 10.5.16 and higher 10.5 versions, and MariaDB version 10.4.25 and higher 10.4 versions
• Microsoft SQL Server: No
• MySQL: MySQL version 8.0.28 and higher
• Oracle: No
• PostgreSQL: PostgreSQL 15.2 and higher 15 versions, PostgreSQL 14.5 and higher 14 versions, and PostgreSQL 13.7 and higher 13 versions

db.m6i – general-purpose instance classes

Instance classes: db.m6i.32xlarge, db.m6i.24xlarge, db.m6i.16xlarge, db.m6i.12xlarge, db.m6i.8xlarge, db.m6i.4xlarge, db.m6i.2xlarge, db.m6i.xlarge, and db.m6i.large
• MariaDB: All MariaDB 10.11 versions, MariaDB 10.6.7 and higher 10.6 versions, MariaDB 10.5.15 and higher 10.5 versions, and MariaDB 10.4.24 and higher 10.4 versions
• Microsoft SQL Server: Yes
• MySQL: MySQL version 8.0.28 and higher
• Oracle: Oracle Database 19c
• PostgreSQL: All PostgreSQL 15 versions, all PostgreSQL 14 versions; PostgreSQL 13.4, 12.8, 11.13 and higher

db.m5d – general-purpose instance classes

Instance classes: db.m5d.24xlarge, db.m5d.16xlarge, db.m5d.12xlarge, db.m5d.8xlarge, db.m5d.4xlarge, db.m5d.2xlarge, db.m5d.xlarge, and db.m5d.large
• MariaDB: All MariaDB 10.11 versions, MariaDB 10.6.7 and higher 10.6 versions, MariaDB 10.5.16 and higher 10.5 versions, and MariaDB 10.4.25 and higher 10.4 versions
• Microsoft SQL Server: Yes
• MySQL: MySQL 8.0.28 and higher
• Oracle: Yes
• PostgreSQL: All PostgreSQL 15 versions, PostgreSQL 14.5 and higher 14 versions, PostgreSQL 13.4, and PostgreSQL 13.7 and higher 13 versions

db.m5 – general-purpose instance classes

Instance classes: db.m5.24xlarge, db.m5.16xlarge, db.m5.12xlarge, db.m5.8xlarge, db.m5.4xlarge, db.m5.2xlarge, db.m5.xlarge, and db.m5.large
• MariaDB: Yes
• Microsoft SQL Server: Yes
• MySQL: Yes
• Oracle: Yes
• PostgreSQL: All PostgreSQL 15, 14, 13, 12, and 11 versions; 10.17 and higher; 9.6.22 and higher

db.m4 – general-purpose instance classes

Instance class: db.m4.16xlarge
• MariaDB: All MariaDB 10.6 versions, all MariaDB 10.5 versions, all MariaDB 10.4 versions, and all MariaDB 10.3 versions
• Microsoft SQL Server: Yes
• MySQL: MySQL 8.0, 5.7
• Oracle: Deprecated
• PostgreSQL: Lower than PostgreSQL 13

Instance classes: db.m4.10xlarge, db.m4.4xlarge, db.m4.2xlarge, db.m4.xlarge, and db.m4.large
• MariaDB: All MariaDB 10.6 versions, all MariaDB 10.5 versions, all MariaDB 10.4 versions, and all MariaDB 10.3 versions
• Microsoft SQL Server: Yes
• MySQL: Yes
• Oracle: Deprecated
• PostgreSQL: Lower than PostgreSQL 13

db.m3 – general-purpose instance classes

Instance classes: db.m3.2xlarge, db.m3.xlarge, db.m3.large, and db.m3.medium
• MariaDB: No
• Microsoft SQL Server: Yes
• MySQL: Yes
• Oracle: Deprecated
• PostgreSQL: Deprecated

db.x2g – memory-optimized instance classes powered by AWS Graviton2 processors

Instance classes: db.x2g.16xlarge, db.x2g.12xlarge, db.x2g.8xlarge, db.x2g.4xlarge, db.x2g.2xlarge, db.x2g.xlarge, and db.x2g.large
• MariaDB: All MariaDB 10.11 versions, all MariaDB 10.6 versions, all MariaDB 10.5 versions, and all MariaDB 10.4 versions
• Microsoft SQL Server: No
• MySQL: MySQL 8.0.25 and higher
• Oracle: No
• PostgreSQL: All PostgreSQL 15 versions, all PostgreSQL 14 versions, all PostgreSQL 13 versions; PostgreSQL 12.7 and higher

db.x2idn – memory-optimized instance classes powered by 3rd generation Intel Xeon Scalable processors

Instance classes: db.x2idn.32xlarge, db.x2idn.24xlarge, and db.x2idn.16xlarge
• MariaDB: All MariaDB 10.11 versions, MariaDB 10.6.7 and higher 10.6 versions, MariaDB 10.5.16 and higher 10.5 versions, and MariaDB 10.4.25 and higher 10.4 versions
• Microsoft SQL Server: No
• MySQL: MySQL 8.0.28 and higher
• Oracle: Enterprise Edition only
• PostgreSQL: All PostgreSQL 15 versions, PostgreSQL 14.6 and 13.9 versions

db.x2iedn – memory-optimized instance classes with local NVMe-based SSDs, powered by 3rd generation Intel Xeon Scalable processors

Instance classes: db.x2iedn.32xlarge, db.x2iedn.24xlarge, db.x2iedn.16xlarge, and db.x2iedn.8xlarge
• MariaDB: All MariaDB 10.11 versions, MariaDB 10.6.7 and higher 10.6 versions, MariaDB 10.5.16 and higher 10.5 versions, and MariaDB 10.4.25 and higher 10.4 versions
• Microsoft SQL Server: No
• MySQL: MySQL 8.0.28 and higher
• Oracle: Enterprise Edition only
• PostgreSQL: All PostgreSQL 15 versions, PostgreSQL 14.5 and higher 14 versions, PostgreSQL 13.4, and PostgreSQL 13.7 and higher 13 versions

Instance classes: db.x2iedn.4xlarge, db.x2iedn.2xlarge, and db.x2iedn.xlarge
• MariaDB: All MariaDB 10.11 versions, MariaDB 10.6.7 and higher 10.6 versions, MariaDB 10.5.16 and higher 10.5 versions, and MariaDB 10.4.25 and higher 10.4 versions
• Microsoft SQL Server: No
• MySQL: MySQL 8.0.28 and higher
• Oracle: Enterprise Edition and Standard Edition 2 (SE2)
• PostgreSQL: All PostgreSQL 15 versions, PostgreSQL 14.5 and higher 14 versions, PostgreSQL 13.4, and PostgreSQL 13.7 and higher 13 versions

db.x2iezn – memory-optimized instance classes powered by 2nd generation Intel Xeon Scalable processors

Instance classes: db.x2iezn.12xlarge, db.x2iezn.8xlarge, and db.x2iezn.6xlarge
• MariaDB: No
• Microsoft SQL Server: No
• MySQL: No
• Oracle: Enterprise Edition only
• PostgreSQL: No

Instance classes: db.x2iezn.4xlarge and db.x2iezn.2xlarge
• MariaDB: No
• Microsoft SQL Server: No
• MySQL: No
• Oracle: Enterprise Edition and Standard Edition 2 (SE2)
• PostgreSQL: No

db.z1d – memory-optimized instance classes

Instance classes: db.z1d.12xlarge, db.z1d.6xlarge, db.z1d.3xlarge, db.z1d.2xlarge, db.z1d.xlarge, and db.z1d.large
• MariaDB: No
• Microsoft SQL Server: Yes
• MySQL: No
• Oracle: Yes
• PostgreSQL: No

db.x1e – memory-optimized instance classes

Instance classes: db.x1e.32xlarge, db.x1e.16xlarge, db.x1e.8xlarge, db.x1e.4xlarge, db.x1e.2xlarge, and db.x1e.xlarge
• MariaDB: No
• Microsoft SQL Server: Yes
• MySQL: No
• Oracle: Yes
• PostgreSQL: No

db.x1 – memory-optimized instance classes

Instance classes: db.x1.32xlarge and db.x1.16xlarge
• MariaDB: No
• Microsoft SQL Server: Yes
• MySQL: No
• Oracle: Yes
• PostgreSQL: No

db.r7g – memory-optimized instance classes powered by AWS Graviton3 processors

Instance classes: db.r7g.16xlarge, db.r7g.12xlarge, db.r7g.8xlarge, db.r7g.4xlarge, db.r7g.2xlarge, db.r7g.xlarge, and db.r7g.large
• MariaDB: All MariaDB 10.11 versions, MariaDB 10.6.10 and higher 10.6 versions, MariaDB 10.5.17 and higher 10.5 versions, and MariaDB 10.4.26 and higher 10.4 versions
• Microsoft SQL Server: No
• MySQL: MySQL 8.0.28 and higher
• Oracle: No
• PostgreSQL: PostgreSQL 15.2 and higher 15 versions, PostgreSQL 14.5 and higher 14 versions, and PostgreSQL 13.4 and higher 13 versions

db.r6g – memory-optimized instance classes powered by AWS Graviton2 processors

Instance classes: db.r6g.16xlarge, db.r6g.12xlarge, db.r6g.8xlarge, db.r6g.4xlarge, db.r6g.2xlarge, db.r6g.xlarge, and db.r6g.large
• MariaDB: All MariaDB 10.11 versions, all MariaDB 10.6 versions, all MariaDB 10.5 versions, and all MariaDB 10.4 versions
• Microsoft SQL Server: No
• MySQL: MySQL 8.0.23 and higher
• Oracle: No
• PostgreSQL: All PostgreSQL 15 versions, all PostgreSQL 14 versions, all PostgreSQL 13 versions; PostgreSQL 12.7 and higher

db.r6gd – memory-optimized instance classes powered by AWS Graviton2 processors

Instance classes: db.r6gd.16xlarge, db.r6gd.12xlarge, db.r6gd.8xlarge, db.r6gd.4xlarge, db.r6gd.2xlarge, db.r6gd.xlarge, and db.r6gd.large
• MariaDB: All MariaDB 10.11 versions, MariaDB 10.6.7 and higher 10.6 versions, MariaDB 10.5.16 and higher 10.5 versions, and MariaDB 10.4.25 and higher 10.4 versions
• Microsoft SQL Server: No
• MySQL: MySQL 8.0.28 and higher
• Oracle: No
• PostgreSQL: All PostgreSQL 15 versions, PostgreSQL 14.5 and higher 14 versions, PostgreSQL 13.4, and PostgreSQL 13.7 and higher 13 versions

db.r6i – memory-optimized instance classes

Instance classes: db.r6i.32xlarge, db.r6i.24xlarge, db.r6i.16xlarge, db.r6i.12xlarge, db.r6i.8xlarge, db.r6i.4xlarge, db.r6i.2xlarge, db.r6i.xlarge, and db.r6i.large
• MariaDB: All MariaDB 10.11 versions, MariaDB 10.6.7 and higher 10.6 versions, MariaDB 10.5.15 and higher 10.5 versions, and MariaDB 10.4.24 and higher 10.4 versions
• Microsoft SQL Server: Yes
• MySQL: MySQL version 8.0.28 and higher
• Oracle: Yes
• PostgreSQL: All PostgreSQL 15 versions, all PostgreSQL 14 versions; PostgreSQL 13.4 and higher 13 versions, PostgreSQL 12.8 and higher 12 versions, PostgreSQL 11.13 and higher 11 versions, and PostgreSQL 10.21 and higher 10 versions

db.r6id – memory-optimized instance classes powered by 3rd generation Intel Xeon Scalable processors

Instance classes: db.r6id.32xlarge, db.r6id.24xlarge, db.r6id.16xlarge, db.r6id.12xlarge, db.r6id.8xlarge, db.r6id.4xlarge, db.r6id.2xlarge, db.r6id.xlarge, and db.r6id.large
• MariaDB: MariaDB version 10.6.10 and higher 10.6 versions, MariaDB version 10.5.16 and higher 10.5 versions, and MariaDB version 10.4.25 and higher 10.4 versions
• Microsoft SQL Server: No
• MySQL: MySQL version 8.0.28 and higher
• Oracle: No
• PostgreSQL: PostgreSQL 15.2 and higher 15 versions, PostgreSQL 14.5 and higher 14 versions, and PostgreSQL 13.7 and higher 13 versions

db.r5d – memory-optimized instance classes

Instance classes: db.r5d.24xlarge, db.r5d.16xlarge, db.r5d.12xlarge, db.r5d.8xlarge, db.r5d.4xlarge, db.r5d.2xlarge, db.r5d.xlarge, and db.r5d.large
• MariaDB: All MariaDB 10.11 versions, MariaDB 10.6.7 and higher 10.6 versions, MariaDB 10.5.16 and higher 10.5 versions, and MariaDB 10.4.25 and higher 10.4 versions
• Microsoft SQL Server: Yes
• MySQL: MySQL 8.0.28 and higher
• Oracle: Yes
• PostgreSQL: All PostgreSQL 15 versions, PostgreSQL 14.5 and higher 14 versions, PostgreSQL 13.4, and PostgreSQL 13.7 and higher 13 versions

db.r5b – memory-optimized instance classes preconfigured for high memory, storage, and I/O

Instance classes: db.r5b.8xlarge.tpc2.mem3x, db.r5b.6xlarge.tpc2.mem4x, db.r5b.4xlarge.tpc2.mem4x, db.r5b.4xlarge.tpc2.mem3x, db.r5b.4xlarge.tpc2.mem2x, db.r5b.2xlarge.tpc2.mem8x, db.r5b.2xlarge.tpc2.mem4x, db.r5b.2xlarge.tpc1.mem2x, db.r5b.xlarge.tpc2.mem4x, db.r5b.xlarge.tpc2.mem2x, and db.r5b.large.tpc1.mem2x
• MariaDB: No
• Microsoft SQL Server: No
• MySQL: No
• Oracle: Yes
• PostgreSQL: No

db.r5b – memory-optimized instance classes

Instance classes: db.r5b.24xlarge, db.r5b.16xlarge, db.r5b.12xlarge, db.r5b.8xlarge, db.r5b.4xlarge, db.r5b.2xlarge, db.r5b.xlarge, and db.r5b.large
• MariaDB: All MariaDB 10.11 versions, MariaDB 10.6.5 and higher 10.6 versions, MariaDB 10.5.12 and higher 10.5 versions, MariaDB 10.4.24 and higher 10.4 versions, and MariaDB 10.3.34 and higher 10.3 versions
• Microsoft SQL Server: Yes
• MySQL: MySQL 8.0.25 and higher
• Oracle: Yes
• PostgreSQL: All PostgreSQL 15 versions, all PostgreSQL 14 versions, all PostgreSQL 13 versions; PostgreSQL 12.7 and higher

db.r5 – memory-optimized instance classes preconfigured for high memory, storage, and I/O

Instance classes: db.r5.12xlarge.tpc2.mem2x, db.r5.8xlarge.tpc2.mem3x, db.r5.6xlarge.tpc2.mem4x, db.r5.4xlarge.tpc2.mem4x, db.r5.4xlarge.tpc2.mem3x, db.r5.4xlarge.tpc2.mem2x, db.r5.2xlarge.tpc2.mem8x, db.r5.2xlarge.tpc2.mem4x, db.r5.2xlarge.tpc1.mem2x, db.r5.xlarge.tpc2.mem4x, db.r5.xlarge.tpc2.mem2x, and db.r5.large.tpc1.mem2x
• MariaDB: No
• Microsoft SQL Server: No
• MySQL: No
• Oracle: Yes
• PostgreSQL: No

db.r5 – memory-optimized instance classes

Instance classes: db.r5.24xlarge, db.r5.16xlarge, db.r5.12xlarge, db.r5.8xlarge, db.r5.4xlarge, db.r5.2xlarge, db.r5.xlarge, and db.r5.large
• MariaDB: Yes
• Microsoft SQL Server: Yes
• MySQL: Yes
• Oracle: Yes
• PostgreSQL: All PostgreSQL 15, 14, 13, 12, and 11 versions; 10.17 and higher; 9.6.22 and higher

db.r4 – memory-optimized instance classes

Instance classes: db.r4.16xlarge, db.r4.8xlarge, db.r4.4xlarge, db.r4.2xlarge, db.r4.xlarge, and db.r4.large
• MariaDB: All MariaDB 10.6 versions, all MariaDB 10.5 versions, all MariaDB 10.4 versions, and all MariaDB 10.3 versions
• Microsoft SQL Server: Yes
• MySQL: All MySQL 8.0, 5.7
• Oracle: Deprecated
• PostgreSQL: Lower than PostgreSQL 13

db.r3 – memory-optimized instance classes

Instance classes: db.r3.8xlarge**, db.r3.4xlarge, db.r3.2xlarge, db.r3.xlarge, and db.r3.large
• MariaDB: All MariaDB 10.6 versions, all MariaDB 10.5 versions, all MariaDB 10.4 versions, and all MariaDB 10.3 versions
• Microsoft SQL Server: Yes
• MySQL: Yes
• Oracle: Deprecated
• PostgreSQL: Deprecated

db.t4g – burstable-performance instance classes powered by AWS Graviton2 processors

Instance classes: db.t4g.2xlarge, db.t4g.xlarge, db.t4g.large, db.t4g.medium, db.t4g.small, and db.t4g.micro
• MariaDB: All MariaDB 10.11 versions, all MariaDB 10.6 versions, all MariaDB 10.5 versions, and all MariaDB 10.4 versions
• Microsoft SQL Server: No
• MySQL: MySQL 8.0.25 and higher
• Oracle: No
• PostgreSQL: All PostgreSQL 15 versions, all PostgreSQL 14 versions, all PostgreSQL 13 versions, and PostgreSQL 12.7 and higher 12 versions
versions, and versions, all
all MariaDB PostgreSQL
10.4 versions 13 versions,
PostgreSQL
12.7 and
higher

db.t3 – burstable-performance instance classes

65
Amazon Relational Database Service User Guide
Supported DB engines

Instance class MariaDB Microsoft MySQL Oracle PostgreSQL


SQL Server

db.t3.2xlarge Yes Yes Yes Yes All


PostgreSQL
15, 14, 13,
12, 11, and
10 versions;
PostgreSQL
9.6.22 and
higher
versions

db.t3.xlarge Yes Yes Yes Yes All


PostgreSQL
15, 14, 13,
12, 11, and
10 versions;
PostgreSQL
9.6.22 and
higher
versions

db.t3.large Yes Yes Yes Yes All


PostgreSQL
15, 14, 13, 12,
11, and 10
versions, and
PostgreSQL
9.6.22 and
higher
versions

db.t3.medium Yes Yes Yes Yes All


PostgreSQL
15, 14, 13,
12, 11, and
10 versions;
PostgreSQL
9.6.22 and
higher
versions

db.t3.small Yes Yes Yes Yes All


PostgreSQL
15, 14, 13,
12, 11, and
10 versions;
PostgreSQL
9.6.22 and
higher
versions

66
Amazon Relational Database Service User Guide
Supported DB engines

Instance class MariaDB Microsoft MySQL Oracle PostgreSQL


SQL Server

db.t3.micro Yes No Yes Only on All


Oracle PostgreSQL
Database 15, 14, 13,
12c Release 12, 11, and
1 (12.1.0.2), 10 versions;
which is PostgreSQL
deprecated 9.6.22 and
higher
versions

db.t2 – burstable-performance instance classes

db.t2.2xlarge All MariaDB No All MySQL 8.0, Deprecated Lower than


10.6 versions, 5.7 PostgreSQL
all MariaDB 13
10.5 versions,
all MariaDB
10.4 versions,
all MariaDB
10.3 versions

db.t2.xlarge All MariaDB No All MySQL 8.0, Deprecated Lower than


10.6 versions, 5.7 PostgreSQL
all MariaDB 13
10.5 versions,
all MariaDB
10.4 versions,
all MariaDB
10.3 versions

db.t2.large All MariaDB Yes Yes Deprecated Lower than


10.6 versions, PostgreSQL
all MariaDB 13
10.5 versions,
all MariaDB
10.4 versions,
all MariaDB
10.3 versions

db.t2.medium All MariaDB Yes Yes Deprecated Lower than


10.6 versions, PostgreSQL
all MariaDB 13
10.5 versions,
all MariaDB
10.4 versions,
all MariaDB
10.3 versions

67
Amazon Relational Database Service User Guide
Determining DB instance class support in AWS Regions

Instance class MariaDB Microsoft MySQL Oracle PostgreSQL


SQL Server

db.t2.small All MariaDB Yes Yes Deprecated Lower than


10.6 versions, PostgreSQL
all MariaDB 13
10.5 versions,
all MariaDB
10.4 versions,
all MariaDB
10.3 versions

db.t2.micro All MariaDB Yes Yes Deprecated Lower than


10.6 versions, PostgreSQL
all MariaDB 13
10.5 versions,
all MariaDB
10.4 versions,
all MariaDB
10.3 versions

Determining DB instance class support in AWS Regions

To determine the DB instance classes supported by each DB engine in a specific AWS Region, you can
take one of several approaches. You can use the AWS Management Console, the Amazon RDS Pricing
page, or the describe-orderable-db-instance-options command for the AWS Command Line Interface
(AWS CLI).
Note
When you perform operations with the AWS CLI, it automatically shows the supported DB
instance classes for a specific DB engine, DB engine version, and AWS Region. Examples of the
operations that you can perform include creating and modifying a DB instance.

Contents
• Using the Amazon RDS pricing page to determine DB instance class support in AWS
Regions (p. 68)
• Using the AWS CLI to determine DB instance class support in AWS Regions (p. 69)
• Listing the DB instance classes that are supported by a specific DB engine version in an AWS
Region (p. 69)
• Listing the DB engine versions that support a specific DB instance class in an AWS
Region (p. 70)

Using the Amazon RDS pricing page to determine DB instance class support in AWS Regions

You can use the Amazon RDS Pricing page to determine the DB instance classes supported by each DB
engine in a specific AWS Region.

To use the pricing page to determine the DB instance classes supported by each engine in a
Region

1. Go to Amazon RDS Pricing.


2. Choose a DB engine.
3. On the pricing page for the DB engine, choose On-Demand DB Instances or Reserved DB Instances.
4. To see the DB instance classes available in an AWS Region, choose the AWS Region in Region.

Other choices might be available for some DB engines, such as Single-AZ Deployment or Multi-AZ
Deployment.

Using the AWS CLI to determine DB instance class support in AWS Regions

You can use the AWS CLI to determine which DB instance classes are supported for specific DB engines
and DB engine versions in an AWS Region. The following table shows the valid DB engine values.

Engine names | Engine values in CLI commands | More information about versions
MariaDB | mariadb | MariaDB on Amazon RDS versions (p. 1265)
Microsoft SQL Server | sqlserver-ee, sqlserver-se, sqlserver-ex, sqlserver-web | Microsoft SQL Server versions on Amazon RDS (p. 1362)
MySQL | mysql | MySQL on Amazon RDS versions (p. 1627)
Oracle | oracle-ee, oracle-se2 | Amazon RDS for Oracle Release Notes
PostgreSQL | postgres | Available PostgreSQL database versions (p. 2154)
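
If you aren't sure which engine versions exist for a given engine value, you can list them first with the describe-db-engine-versions command. The following single-line command is a minimal sketch that assumes the mariadb engine value and your default AWS CLI Region; substitute any engine value from the preceding table.

aws rds describe-db-engine-versions --engine mariadb --query "DBEngineVersions[].EngineVersion" --output text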

For information about AWS Region names, see AWS Regions (p. 111).

The following examples demonstrate how to determine DB instance class support in an AWS Region
using the describe-orderable-db-instance-options AWS CLI command.
Note
To limit the output, these examples show results only for the General Purpose SSD (gp2) storage
type. If necessary, you can change the storage type to General Purpose SSD (gp3), Provisioned
IOPS (io1), or magnetic (standard) in the commands.
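
For example, the following single-line variant checks General Purpose SSD (gp3) availability instead of gp2. It's a sketch that uses the same engine, version, and region placeholders as the examples that follow; only the filter inside the --query string changes.

aws rds describe-orderable-db-instance-options --engine engine --engine-version version --query "*[].{DBInstanceClass:DBInstanceClass,StorageType:StorageType}|[?StorageType=='gp3']|[].{DBInstanceClass:DBInstanceClass}" --output text --region region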

Topics
• Listing the DB instance classes that are supported by a specific DB engine version in an AWS
Region (p. 69)
• Listing the DB engine versions that support a specific DB instance class in an AWS Region (p. 70)

Listing the DB instance classes that are supported by a specific DB engine version in an AWS Region

To list the DB instance classes that are supported by a specific DB engine version in an AWS Region, run
the following command.


For Linux, macOS, or Unix:

aws rds describe-orderable-db-instance-options --engine engine --engine-version version \
    --query "*[].{DBInstanceClass:DBInstanceClass,StorageType:StorageType}|[?StorageType=='gp2']|[].{DBInstanceClass:DBInstanceClass}" \
    --output text \
    --region region

For Windows:

aws rds describe-orderable-db-instance-options --engine engine --engine-version version ^
    --query "*[].{DBInstanceClass:DBInstanceClass,StorageType:StorageType}|[?StorageType=='gp2']|[].{DBInstanceClass:DBInstanceClass}" ^
    --output text ^
    --region region

For example, the following command lists the supported DB instance classes for version 13.6 of the RDS
for PostgreSQL DB engine in US East (N. Virginia).

For Linux, macOS, or Unix:

aws rds describe-orderable-db-instance-options --engine postgres --engine-version 13.6 \
    --query "*[].{DBInstanceClass:DBInstanceClass,StorageType:StorageType}|[?StorageType=='gp2']|[].{DBInstanceClass:DBInstanceClass}" \
    --output text \
    --region us-east-1

For Windows:

aws rds describe-orderable-db-instance-options --engine postgres --engine-version 13.6 ^
    --query "*[].{DBInstanceClass:DBInstanceClass,StorageType:StorageType}|[?StorageType=='gp2']|[].{DBInstanceClass:DBInstanceClass}" ^
    --output text ^
    --region us-east-1

Listing the DB engine versions that support a specific DB instance class in an AWS Region

To list the DB engine versions that support a specific DB instance class in an AWS Region, run the
following command.

For Linux, macOS, or Unix:

aws rds describe-orderable-db-instance-options --engine engine --db-instance-class DB_instance_class \
    --query "*[].{EngineVersion:EngineVersion,StorageType:StorageType}|[?StorageType=='gp2']|[].{EngineVersion:EngineVersion}" \
    --output text \
    --region region

For Windows:

aws rds describe-orderable-db-instance-options --engine engine --db-instance-class DB_instance_class ^
    --query "*[].{EngineVersion:EngineVersion,StorageType:StorageType}|[?StorageType=='gp2']|[].{EngineVersion:EngineVersion}" ^
    --output text ^
    --region region

For example, the following command lists the DB engine versions of the RDS for PostgreSQL DB engine
that support the db.r5.large DB instance class in US East (N. Virginia).

For Linux, macOS, or Unix:

aws rds describe-orderable-db-instance-options --engine postgres --db-instance-class db.r5.large \
    --query "*[].{EngineVersion:EngineVersion,StorageType:StorageType}|[?StorageType=='gp2']|[].{EngineVersion:EngineVersion}" \
    --output text \
    --region us-east-1

For Windows:

aws rds describe-orderable-db-instance-options --engine postgres --db-instance-class db.r5.large ^
    --query "*[].{EngineVersion:EngineVersion,StorageType:StorageType}|[?StorageType=='gp2']|[].{EngineVersion:EngineVersion}" ^
    --output text ^
    --region us-east-1
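
You can also pass both filters at once to check whether a specific engine version and DB instance class combination is orderable in a Region. The following single-line command is a sketch that uses the same placeholder style as the previous examples; it prints the engine version only when the combination is available.

aws rds describe-orderable-db-instance-options --engine engine --engine-version version --db-instance-class DB_instance_class --query "OrderableDBInstanceOptions[].EngineVersion" --output text --region region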

Changing your DB instance class


You can change the CPU and memory available to a DB instance by changing its DB instance class. To
change the DB instance class, modify your DB instance by following the instructions in Modifying an
Amazon RDS DB instance (p. 401).
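
For example, the following AWS CLI command is one way to make that change. It's a minimal sketch: mydbinstance and db.r5.xlarge are placeholder values, and omitting --apply-immediately applies the change during the next maintenance window instead of right away.

aws rds modify-db-instance --db-instance-identifier mydbinstance --db-instance-class db.r5.xlarge --apply-immediately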

Configuring the processor for a DB instance class in RDS for Oracle

Amazon RDS DB instance classes support Intel Hyper-Threading Technology, which enables multiple
threads to run concurrently on a single Intel Xeon CPU core. Each thread is represented as a virtual CPU
(vCPU) on the DB instance. A DB instance has a default number of CPU cores, which varies according to
DB instance class. For example, a db.m4.xlarge DB instance class has two CPU cores and two threads per
core by default—four vCPUs in total.
Note
Each vCPU is a hyperthread of an Intel Xeon CPU core.

Topics
• Overview of configuring the processor (p. 71)
• DB instance classes that support processor configuration (p. 72)
• Setting the CPU cores and threads per CPU core for a DB instance class (p. 80)

Overview of configuring the processor


When you use RDS for Oracle, you can usually find a DB instance class that has a combination of memory
and number of vCPUs to suit your workloads. However, you can also specify the following processor
features to optimize your RDS for Oracle DB instance for specific workloads or business needs:

• Number of CPU cores – You can customize the number of CPU cores for the DB instance. You might do
this to potentially optimize the licensing costs of your software with a DB instance that has sufficient
amounts of RAM for memory-intensive workloads but fewer CPU cores.


• Threads per core – You can disable Intel Hyper-Threading Technology by specifying a single thread
per CPU core. You might do this for certain workloads, such as high-performance computing (HPC)
workloads.

You can control the number of CPU cores and threads for each core separately. You can set one or both
in a request. After a setting is associated with a DB instance, the setting persists until you change it.

The processor settings for a DB instance are associated with snapshots of the DB instance. When a
snapshot is restored, its restored DB instance uses the processor feature settings used when the snapshot
was taken.

If you modify the DB instance class for a DB instance with nondefault processor settings, either specify
default processor settings or explicitly specify processor settings at modification. This requirement
ensures that you are aware of the third-party licensing costs that might be incurred when you modify the
DB instance.

There is no additional or reduced charge for specifying processor features on an RDS for Oracle DB
instance. You're charged the same as for DB instances that are launched with default CPU configurations.

DB instance classes that support processor configuration


You can configure the number of CPU cores and threads per core only when the following conditions are
met:

• You're configuring an RDS for Oracle DB instance. For information about the DB instance classes
supported by different Oracle Database editions, see RDS for Oracle instance classes (p. 1796).
• Your DB instance is using the Bring Your Own License (BYOL) licensing option of RDS for Oracle. For
more information about Oracle licensing options, see RDS for Oracle licensing options (p. 1793).
• Your DB instance doesn't belong to one of the db.r5 or db.r5b instance classes
that have predefined processor configurations. These instance classes have names
in the form db.r5.instance_size.tpcthreads_per_core.memratio or
db.r5b.instance_size.tpcthreads_per_core.memratio. For example, db.r5b.xlarge.tpc2.mem4x
is preconfigured with 2 threads per core (tpc2) and 4x as much memory as the standard db.r5b.xlarge
instance class. You can't configure the processor features of these optimized instance classes. For more
information, see Supported RDS for Oracle instance classes (p. 1797).

In the following table, you can find the DB instance classes that support setting a number of CPU cores
and CPU threads per core. You can also find the default value and the valid values for the number of CPU
cores and CPU threads per core for each DB instance class.

DB instance class | Default vCPUs | Default CPU cores | Default threads per core | Valid number of CPU cores | Valid number of threads per core

db.m6i – memory-optimized instance classes

db.m6i.large 2 1 2 1 1, 2

db.m6i.xlarge 4 2 2 2 1, 2

db.m6i.2xlarge 8 4 2 2, 4 1, 2

db.m6i.4xlarge 16 8 2 2, 4, 6, 8 1, 2



db.m6i.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16

db.m6i.12xlarge 48 24 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24

db.m6i.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32

db.m6i.24xlarge 96 48 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32, 34,
36, 38, 40, 42,
44, 46, 48

db.m6i.32xlarge 128 64 2 2, 4, 6, 8, 10, 1, 2


12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32, 34,
36, 38, 40, 42,
44, 46, 48, 50,
52, 54, 56, 58,
60, 62, 64

db.m5 – general-purpose instance classes

db.m5.large 2 1 2 1 1, 2

db.m5.xlarge 4 2 2 2 1, 2

db.m5.2xlarge 8 4 2 2, 4 1, 2

db.m5.4xlarge 16 8 2 2, 4, 6, 8 1, 2

db.m5.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16

db.m5.12xlarge 48 24 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24

db.m5.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32


db.m5.24xlarge 96 48 2 4, 6, 8, 10, 12, 1, 2


14, 16, 18, 20,
22, 24, 26, 28,
30, 32, 34, 36,
38, 40, 42, 44,
46, 48

db.m5d – general-purpose instance classes

db.m5d.large 2 1 2 1 1, 2

db.m5d.xlarge 4 2 2 2 1, 2

db.m5d.2xlarge 8 4 2 2, 4 1, 2

db.m5d.4xlarge 16 8 2 2, 4, 6, 8 1, 2

db.m5d.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16

db.m5d.12xlarge 48 24 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24

db.m5d.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32

db.m5d.24xlarge 96 48 2 4, 6, 8, 10, 12, 1, 2


14, 16, 18, 20,
22, 24, 26, 28,
30, 32, 34, 36,
38, 40, 42, 44,
46, 48

db.m4 – general-purpose instance classes

db.m4.10xlarge 40 20 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20

db.m4.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32

db.r6i – memory-optimized instance classes

db.r6i.large 2 1 2 1 1, 2

db.r6i.xlarge 4 2 2 1, 2 1, 2

db.r6i.2xlarge 8 4 2 2, 4 1, 2

db.r6i.4xlarge 16 8 2 2, 4, 6, 8 1, 2


db.r6i.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16

db.r6i.12xlarge 48 24 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24

db.r6i.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32

db.r6i.24xlarge 96 48 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32, 34,
36, 38, 40, 42,
44, 46, 48

db.r6i.32xlarge 128 64 2 2, 4, 6, 8, 10, 1, 2


12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32, 34,
36, 38, 40, 42,
44, 46, 48, 50,
52, 54, 56, 58,
60, 62, 64

db.r5 – memory-optimized instance classes

db.r5.large 2 1 2 1 1, 2

db.r5.xlarge 4 2 2 2 1, 2

db.r5.2xlarge 8 4 2 2, 4 1, 2

db.r5.4xlarge 16 8 2 2, 4, 6, 8 1, 2

db.r5.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16

db.r5.12xlarge 48 24 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24

db.r5.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32


db.r5.24xlarge 96 48 2 4, 6, 8, 10, 12, 1, 2


14, 16, 18, 20,
22, 24, 26, 28,
30, 32, 34, 36,
38, 40, 42, 44,
46, 48

db.r5b – memory-optimized instance classes

db.r5b.large 2 1 2 1 1, 2

db.r5b.xlarge 4 2 2 2 1, 2

db.r5b.2xlarge 8 4 2 2, 4 1, 2

db.r5b.4xlarge 16 8 2 2, 4, 6, 8 1, 2

db.r5b.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16

db.r5b.12xlarge 48 24 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24

db.r5b.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32

db.r5b.24xlarge 96 48 2 4, 6, 8, 10, 12, 1, 2


14, 16, 18, 20,
22, 24, 26, 28,
30, 32, 34, 36,
38, 40, 42, 44,
46, 48

db.r5d – memory-optimized instance classes

db.r5d.large 2 1 2 1 1, 2

db.r5d.xlarge 4 2 2 2 1, 2

db.r5d.2xlarge 8 4 2 2, 4 1, 2

db.r5d.4xlarge 16 8 2 2, 4, 6, 8 1, 2

db.r5d.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16

db.r5d.12xlarge 48 24 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24


db.r5d.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32

db.r5d.24xlarge 96 48 2 4, 6, 8, 10, 12, 1, 2


14, 16, 18, 20,
22, 24, 26, 28,
30, 32, 34, 36,
38, 40, 42, 44,
46, 48

db.r4 – memory-optimized instance classes

db.r4.large 2 1 2 1 1, 2

db.r4.xlarge 4 2 2 1, 2 1, 2

db.r4.2xlarge 8 4 2 1, 2, 3, 4 1, 2

db.r4.4xlarge 16 8 2 1, 2, 3, 4, 5, 6, 1, 2
7, 8

db.r4.8xlarge 32 16 2 1, 2, 3, 4, 5, 6, 1, 2
7, 8, 9, 10, 11,
12, 13, 14, 15,
16

db.r4.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32

db.r3 – memory-optimized instance classes

db.r3.large 2 1 2 1 1, 2

db.r3.xlarge 4 2 2 1, 2 1, 2

db.r3.2xlarge 8 4 2 1, 2, 3, 4 1, 2

db.r3.4xlarge 16 8 2 1, 2, 3, 4, 5, 6, 1, 2
7, 8

db.r3.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16

db.x2idn – memory-optimized instance classes

db.x2idn.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32


db.x2idn.24xlarge 96 48 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32, 34,
36, 38, 40, 42,
44, 46, 48

db.x2idn.32xlarge 128 64 2 2, 4, 6, 8, 10, 1, 2


12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32, 34,
36, 38, 40, 42,
44, 46, 48, 50,
52, 54, 56, 58,
60, 62, 64

db.x2iedn – memory-optimized instance classes

db.x2iedn.xlarge 4 2 2 1, 2 1, 2

db.x2iedn.2xlarge 8 4 2 2, 4 1, 2

db.x2iedn.4xlarge 16 8 2 2, 4, 6, 8 1, 2

db.x2iedn.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16

db.x2iedn.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32

db.x2iedn.24xlarge 96 48 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32, 34,
36, 38, 40, 42,
44, 46, 48

db.x2iedn.32xlarge 128 64 2 2, 4, 6, 8, 10, 1, 2


12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32, 34,
36, 38, 40, 42,
44, 46, 48, 50,
52, 54, 56, 58,
60, 62, 64

db.x2iezn – memory-optimized instance classes

db.x2iezn.2xlarge 8 4 2 2, 4 1, 2

db.x2iezn.4xlarge 16 8 2 2, 4, 6, 8 1, 2


db.x2iezn.6xlarge 24 12 2 2, 4, 6, 8, 10, 1, 2
12

db.x2iezn.8xlarge 32 16 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16

db.x2iezn.12xlarge 48 24 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24

db.x1 – memory-optimized instance classes

db.x1.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32

db.x1.32xlarge 128 64 2 4, 8, 12, 16, 20, 1, 2


24, 28, 32, 36,
40, 44, 48, 52,
56, 60, 64

db.x1e – memory-optimized instance classes

db.x1e.xlarge 4 2 2 1, 2 1, 2

db.x1e.2xlarge 8 4 2 1, 2, 3, 4 1, 2

db.x1e.4xlarge 16 8 2 1, 2, 3, 4, 5, 6, 1, 2
7, 8

db.x1e.8xlarge 32 16 2 1, 2, 3, 4, 5, 6, 1, 2
7, 8, 9, 10, 11,
12, 13, 14, 15,
16

db.x1e.16xlarge 64 32 2 2, 4, 6, 8, 10, 1, 2
12, 14, 16, 18,
20, 22, 24, 26,
28, 30, 32

db.x1e.32xlarge 128 64 2 4, 8, 12, 16, 20, 1, 2


24, 28, 32, 36,
40, 44, 48, 52,
56, 60, 64

db.z1d – memory-optimized instance classes

db.z1d.large 2 1 2 1 1, 2

db.z1d.xlarge 4 2 2 2 1, 2

db.z1d.2xlarge 8 4 2 2, 4 1, 2

db.z1d.3xlarge 12 6 2 2, 4, 6 1, 2


db.z1d.6xlarge 24 12 2 2, 4, 6, 8, 10, 1, 2
12

db.z1d.12xlarge 48 24 2 4, 6, 8, 10, 12, 1, 2


14, 16, 18, 20,
22, 24

Note
You can use AWS CloudTrail to monitor and audit changes to the processor configuration of
Amazon RDS for Oracle DB instances. For more information about using CloudTrail, see
Monitoring Amazon RDS API calls in AWS CloudTrail (p. 940).

Setting the CPU cores and threads per CPU core for a DB
instance class
You can configure the number of CPU cores and threads per core for the DB instance class when you
perform the following operations:

• Creating an Amazon RDS DB instance (p. 300)


• Modifying an Amazon RDS DB instance (p. 401)
• Restoring from a DB snapshot (p. 615)
• Restoring a DB instance to a specified time (p. 660)

Note
When you modify a DB instance to configure the number of CPU cores or threads per core, there
is a brief DB instance outage.

You can set the CPU cores and the threads per CPU core for a DB instance class using the AWS
Management Console, the AWS CLI, or the RDS API.

Console

When you are creating, modifying, or restoring a DB instance, you set the DB instance class in the
AWS Management Console. The Instance specifications section shows options for the processor. The
following image shows the processor features options.


Set the following options to the appropriate values for your DB instance class under Processor features:

• Core count – Set the number of CPU cores using this option. The value must be equal to or less than
the maximum number of CPU cores for the DB instance class.
• Threads per core – Specify 2 to enable multiple threads per core, or specify 1 to disable multiple
threads per core.

When you modify or restore a DB instance, you can also set the CPU cores and the threads per CPU core
to the defaults for the instance class.

When you view the details for a DB instance in the console, you can view the processor information for
its DB instance class on the Configuration tab. The following image shows a DB instance class with one
CPU core and multiple threads per core enabled.

For Oracle DB instances, the processor information only appears for Bring Your Own License (BYOL) DB
instances.

AWS CLI

You can set the processor features for a DB instance when you run one of the following AWS CLI
commands:

• create-db-instance


• modify-db-instance
• restore-db-instance-from-db-snapshot
• restore-db-instance-from-s3
• restore-db-instance-to-point-in-time

To configure the processor of a DB instance class for a DB instance by using the AWS CLI, include the --processor-features option in the command. Specify the number of CPU cores with the coreCount feature name, and specify whether multiple threads per core are enabled with the threadsPerCore feature name.

The option has the following syntax.

--processor-features "Name=coreCount,Value=<value>" "Name=threadsPerCore,Value=<value>"

The following are examples that configure the processor:

Examples
• Setting the number of CPU cores for a DB instance (p. 83)
• Setting the number of CPU cores and disabling multiple threads for a DB instance (p. 83)
• Viewing the valid processor values for a DB instance class (p. 84)
• Returning to default processor settings for a DB instance (p. 85)
• Returning to the default number of CPU cores for a DB instance (p. 85)
• Returning to the default number of threads per core for a DB instance (p. 86)

Setting the number of CPU cores for a DB instance

Example

The following example modifies mydbinstance by setting the number of CPU cores to 4. The changes
are applied immediately by using --apply-immediately. If you want to apply the changes during the
next scheduled maintenance window, omit the --apply-immediately option.

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --processor-features "Name=coreCount,Value=4" \
    --apply-immediately

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --processor-features "Name=coreCount,Value=4" ^
    --apply-immediately

Setting the number of CPU cores and disabling multiple threads for a DB instance

Example

The following example modifies mydbinstance by setting the number of CPU cores to 4 and disabling
multiple threads per core. The changes are applied immediately by using --apply-immediately. If
you want to apply the changes during the next scheduled maintenance window, omit the --apply-
immediately option.


For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --processor-features "Name=coreCount,Value=4" "Name=threadsPerCore,Value=1" \
    --apply-immediately

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --processor-features "Name=coreCount,Value=4" "Name=threadsPerCore,Value=1" ^
    --apply-immediately
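
After the modification finishes, you can confirm which processor settings are in effect. The following command is a sketch that assumes the same mydbinstance identifier; the ProcessorFeatures list in the output contains only nondefault settings, so it's empty when the instance uses the default configuration.

aws rds describe-db-instances --db-instance-identifier mydbinstance --query "DBInstances[].ProcessorFeatures"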

Viewing the valid processor values for a DB instance class

Example
You can view the valid processor values for a particular DB instance class by running the describe-
orderable-db-instance-options command and specifying the instance class for the --db-instance-
class option. For example, the output for the following command shows the processor options for the
db.r3.large instance class.

aws rds describe-orderable-db-instance-options --engine oracle-ee --db-instance-class db.r3.large

Following is sample output for the command in JSON format.

{
"SupportsIops": true,
"MaxIopsPerGib": 50.0,
"LicenseModel": "bring-your-own-license",
"DBInstanceClass": "db.r3.large",
"SupportsIAMDatabaseAuthentication": false,
"MinStorageSize": 100,
"AvailabilityZones": [
{
"Name": "us-west-2a"
},
{
"Name": "us-west-2b"
},
{
"Name": "us-west-2c"
}
],
"EngineVersion": "12.1.0.2.v2",
"MaxStorageSize": 32768,
"MinIopsPerGib": 1.0,
"MaxIopsPerDbInstance": 40000,
"ReadReplicaCapable": false,
"AvailableProcessorFeatures": [
{
"Name": "coreCount",
"DefaultValue": "1",
"AllowedValues": "1"
},
{
"Name": "threadsPerCore",
"DefaultValue": "2",
"AllowedValues": "1,2"
}


],
"SupportsEnhancedMonitoring": true,
"SupportsPerformanceInsights": false,
"MinIopsPerDbInstance": 1000,
"StorageType": "io1",
"Vpc": false,
"SupportsStorageEncryption": true,
"Engine": "oracle-ee",
"MultiAZCapable": true
}

In addition, you can run the following commands for DB instance class processor information:

• describe-db-instances – Shows the processor information for the specified DB instance.


• describe-db-snapshots – Shows the processor information for the specified DB snapshot.
• describe-valid-db-instance-modifications – Shows the valid modifications to the processor for the
specified DB instance.

In the output of the preceding commands, the values for the processor features are not null only if the
following conditions are met:

• You are using an RDS for Oracle DB instance.


• Your RDS for Oracle DB instance supports changing processor values.
• The current CPU core and thread settings are set to nondefault values.

If the preceding conditions aren't met, you can get the instance type using describe-db-instances. You
can get the processor information for this instance type by running the EC2 operation describe-instance-
types.
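
For example, the following pair of commands sketches that workflow for a hypothetical DB instance named mydbinstance that uses the db.r5.large class. The first command returns the DB instance class, and the second looks up the processor details for the matching EC2 instance type (the type name is the DB instance class without the db. prefix).

aws rds describe-db-instances --db-instance-identifier mydbinstance --query "DBInstances[].DBInstanceClass" --output text
aws ec2 describe-instance-types --instance-types r5.large --query "InstanceTypes[].VCpuInfo"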

Returning to default processor settings for a DB instance

Example
The following example modifies mydbinstance by returning its DB instance class to the default
processor values for it. The changes are applied immediately by using --apply-immediately. If
you want to apply the changes during the next scheduled maintenance window, omit the --apply-
immediately option.

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --use-default-processor-features \
    --apply-immediately

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --use-default-processor-features ^
    --apply-immediately

Returning to the default number of CPU cores for a DB instance

Example
The following example modifies mydbinstance by returning its DB instance class to the default number of CPU cores for it. The threads per core setting isn't changed. The changes are applied immediately by using --apply-immediately. If you want to apply the changes during the next scheduled maintenance window, omit the --apply-immediately option.

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --processor-features "Name=coreCount,Value=DEFAULT" \
    --apply-immediately

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --processor-features "Name=coreCount,Value=DEFAULT" ^
    --apply-immediately

Returning to the default number of threads per core for a DB instance

Example
The following example modifies mydbinstance by returning its DB instance class to the default number
of threads per core for it. The number of CPU cores setting isn't changed. The changes are applied
immediately by using --apply-immediately. If you want to apply the changes during the next
scheduled maintenance window, omit the --apply-immediately option.

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --processor-features "Name=threadsPerCore,Value=DEFAULT" \
    --apply-immediately

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --processor-features "Name=threadsPerCore,Value=DEFAULT" ^
    --apply-immediately

RDS API
You can set the processor features for a DB instance when you call one of the following Amazon RDS API
operations:

• CreateDBInstance
• ModifyDBInstance
• RestoreDBInstanceFromDBSnapshot
• RestoreDBInstanceFromS3
• RestoreDBInstanceToPointInTime

To configure the processor features of a DB instance class for a DB instance by using the Amazon RDS
API, include the ProcessorFeatures parameter in the call.

The parameter has the following syntax.

ProcessorFeatures "Name=coreCount,Value=<value>" "Name=threadsPerCore,Value=<value>"


Specify the number of CPU cores with the coreCount feature name, and specify whether multiple
threads per core are enabled with the threadsPerCore feature name.

You can view the valid processor values for a particular DB instance class by running the
DescribeOrderableDBInstanceOptions operation and specifying the instance class for the
DBInstanceClass parameter. You can also use the following operations:

• DescribeDBInstances – Shows the processor information for the specified DB instance.


• DescribeDBSnapshots – Shows the processor information for the specified DB snapshot.
• DescribeValidDBInstanceModifications – Shows the valid modifications to the processor for the
specified DB instance.

In the output of the preceding operations, the values for the processor features are not null only if the
following conditions are met:

• You are using an RDS for Oracle DB instance.


• Your RDS for Oracle DB instance supports changing processor values.
• The current CPU core and thread settings are set to nondefault values.

If the preceding conditions aren't met, you can get the instance type using DescribeDBInstances.
You can get the processor information for this instance type by running the EC2 operation
DescribeInstanceTypes.

Hardware specifications for DB instance classes


The following terminology is used to describe hardware specifications for DB instance classes:

vCPU

The number of virtual central processing units (CPUs). A virtual CPU is a unit of capacity that you can
use to compare DB instance classes. Instead of purchasing or leasing a particular processor to use for
several months or years, you are renting capacity by the hour. Our goal is to make a consistent and
specific amount of CPU capacity available, within the limits of the actual underlying hardware.
ECU

The relative measure of the integer processing power of an Amazon EC2 instance. To make it easy
for developers to compare CPU capacity between different instance classes, we have defined an
Amazon EC2 Compute Unit. The amount of CPU that is allocated to a particular instance is expressed
in terms of these EC2 Compute Units. One ECU currently provides CPU capacity equivalent to a 1.0–
1.2 GHz 2007 Opteron or 2007 Xeon processor.
Memory (GiB)

The RAM, in gibibytes, allocated to the DB instance. There is often a consistent ratio between
memory and vCPU. As an example, take the db.r4 instance class, which has a memory to vCPU ratio
similar to the db.r5 instance class. However, for most use cases the db.r5 instance class provides
better, more consistent performance than the db.r4 instance class.
EBS-optimized

The DB instance uses an optimized configuration stack and provides additional, dedicated capacity
for I/O. This optimization provides the best performance by minimizing contention between I/O and
other traffic from your instance. For more information about Amazon EBS–optimized instances, see
Amazon EBS–Optimized instances in the Amazon EC2 User Guide for Linux Instances.

EBS-optimized instances have a baseline and maximum IOPS rate. The maximum IOPS rate is
enforced at the DB instance level. A set of EBS volumes that combine to have an IOPS rate that is
higher than the maximum can't exceed the instance-level threshold. For example, if the maximum
IOPS for a particular DB instance class is 40,000, and you attach four 64,000 IOPS EBS volumes, the
maximum IOPS is 40,000 rather than 256,000. For the IOPS maximum specific to each EC2 instance
type, see Supported instance types in the Amazon EC2 User Guide for Linux Instances.
Max. EBS bandwidth (Mbps)

The maximum EBS bandwidth in megabits per second. Divide by 8 to get the expected throughput in
megabytes per second.
Important
General Purpose SSD (gp2) volumes for Amazon RDS DB instances have a throughput limit
of 250 MiB/s in most cases. However, the throughput limit can vary depending on volume
size. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide for
Linux Instances.
Network bandwidth

The network speed relative to other DB instance classes.
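
If you want to confirm these specifications for the EC2 instance type that underlies a DB instance class, one option is the EC2 describe-instance-types operation mentioned earlier. The following command is a sketch that assumes the db.m6i.large class (EC2 instance type m6i.large); it returns the EBS-optimized throughput and IOPS figures and the network performance rating.

aws ec2 describe-instance-types --instance-types m6i.large --query "InstanceTypes[].{Ebs:EbsInfo.EbsOptimizedInfo,Network:NetworkInfo.NetworkPerformance}"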

In the following table, you can find hardware details about the Amazon RDS DB instance classes.

For information about Amazon RDS DB engine support for each DB instance class, see Supported DB
engines for DB instance classes (p. 14).

Instance class | vCPU | ECU | Memory (GiB) | Instance storage (GiB) | Max. EBS bandwidth (Mbps) | Network bandwidth (Gbps)

db.m7g – general-purpose instance classes

db.m7g.16xlarge 64 — 256 EBS-optimized 20,000 30


only

db.m7g.12xlarge 48 — 192 EBS-optimized 15,000 22.5


only

db.m7g.8xlarge 32 — 128 EBS-optimized 10,000 15


only

db.m7g.4xlarge 16 — 64 EBS-optimized Up to 10,000 Up to 15


only

db.m7g.2xlarge* 8 — 32 EBS-optimized Up to 10,000 Up to 15


only

db.m7g.xlarge* 4 — 16 EBS-optimized Up to 10,000 Up to 12.5


only

db.m7g.large* 2 — 8 EBS-optimized Up to 10,000 Up to 12.5


only

db.m6g – general-purpose instance classes

db.m6g.16xlarge 64 — 256 EBS-optimized 19,000 25


only

db.m6g.12xlarge 48 — 192 EBS-optimized 13,500 20


only

db.m6g.8xlarge 32 — 128 EBS-optimized 9,500 12


only


db.m6g.4xlarge 16 — 64 EBS-optimized 6,800 Up to 10


only

db.m6g.2xlarge* 8 — 32 EBS-optimized Up to 4,750 Up to 10


only

db.m6g.xlarge* 4 — 16 EBS-optimized Up to 4,750 Up to 10


only

db.m6g.large* 2 — 8 EBS-optimized Up to 4,750 Up to 10


only

db.m6gd – general-purpose instance classes with SSD storage

db.m6gd.16xlarge 64 — 256 2 x 1900 NVMe 19,000 25


SSD

db.m6gd.12xlarge 48 — 192 2 x 1425 NVMe 13,500 20


SSD

db.m6gd.8xlarge 32 — 128 1 x 1900 NVMe 9,000 12


SSD

db.m6gd.4xlarge 16 — 64 1 x 950 NVMe 4,750 Up to 10


SSD

db.m6gd.2xlarge 8 — 32 1 x 474 NVMe Up to 4,750 Up to 10


SSD

db.m6gd.xlarge 4 — 16 1 x 237 NVMe Up to 4,750 Up to 10


SSD

db.m6gd.large 2 — 8 1 x 118 NVMe Up to 4,750 Up to 10


SSD

db.m6id – general-purpose instance classes with SSD storage

db.m6id.32xlarge 128 — 512 4 x 1900 NVMe 40,000 50


SSD

db.m6id.24xlarge 96 — 384 4 x 1425 NVMe 30,000 37.5


SSD

db.m6id.16xlarge 64 — 256 2 x 1900 NVMe 20,000 25


SSD

db.m6id.12xlarge 48 — 192 2 x 1425 NVMe 15,000 18.75


SSD

db.m6id.8xlarge 32 — 128 1 x 1900 NVMe 10,000 12.5


SSD

db.m6id.4xlarge* 16 — 64 1 x 950 NVMe Up to 10,000 Up to 12.5


SSD

db.m6id.2xlarge* 8 — 32 1 x 474 NVMe Up to 10,000 Up to 12.5


SSD


db.m6id.xlarge* 4 — 16 1 x 237 NVMe Up to 10,000 Up to 12.5


SSD

db.m6id.large* 2 — 8 1 x 118 NVMe Up to 10,000 Up to 12.5


SSD

db.m6i – general-purpose instance classes

db.m6i.32xlarge 128 — 512 EBS-optimized 50,000 40


only

db.m6i.24xlarge 96 — 384 EBS-optimized 37,500 30


only

db.m6i.16xlarge 64 — 256 EBS-optimized 25,000 20


only

db.m6i.12xlarge 48 — 192 EBS-optimized 18,750 15


only

db.m6i.8xlarge 32 — 128 EBS-optimized 12,500 10


only

db.m6i.4xlarge* 16 — 64 EBS-optimized Up to 12,500 Up to 10


only

db.m6i.2xlarge* 8 — 32 EBS-optimized Up to 12,500 Up to 10


only

db.m6i.xlarge* 4 — 16 EBS-optimized Up to 12,500 Up to 10


only

db.m6i.large* 2 — 8 EBS-optimized Up to 12,500 Up to 10


only

db.m5d – general-purpose instance classes that use NVMe SSDs

db.m5d.24xlarge 96 345 384 4 x 900 NVMe 19,000 25


SSD

db.m5d.16xlarge 64 262 256 4 x 600 NVMe 13,600 20


SSD

db.m5d.12xlarge 48 173 192 2 x 900 NVMe 9,500 10


SSD

db.m5d.8xlarge 32 131 128 2 x 600 NVMe 6,800 10


SSD

db.m5d.4xlarge 16 61 64 2 x 300 NVMe 4,750 Up to 10


SSD

db.m5d.2xlarge* 8 31 32 1 x 300 NVMe Up to 4,750 Up to 10


SSD

db.m5d.xlarge* 4 15 16 1 x 150 NVMe Up to 4,750 Up to 10


SSD


db.m5d.large* 2 10 8 1 x 75 NVMe SSD Up to 4,750 Up to 10

db.m5 – general-purpose instance classes

db.m5.24xlarge 96 345 384 EBS-optimized 19,000 25


only

db.m5.16xlarge 64 262 256 EBS-optimized 13,600 20


only

db.m5.12xlarge 48 173 192 EBS-optimized 9,500 10


only

db.m5.8xlarge 32 131 128 EBS-optimized 6,800 10


only

db.m5.4xlarge 16 61 64 EBS-optimized 4,750 Up to 10


only

db.m5.2xlarge* 8 31 32 EBS-optimized Up to 4,750 Up to 10


only

db.m5.xlarge* 4 15 16 EBS-optimized Up to 4,750 Up to 10


only

db.m5.large* 2 10 8 EBS-optimized Up to 4,750 Up to 10


only

db.m4 – general-purpose instance classes

db.m4.16xlarge 64 188 256 EBS-optimized 10,000 25


only

db.m4.10xlarge 40 124.5 160 EBS-optimized 4,000 10


only

db.m4.4xlarge 16 53.5 64 EBS-optimized 2,000 High


only

db.m4.2xlarge 8 25.5 32 EBS-optimized 1,000 High


only

db.m4.xlarge 4 13 16 EBS-optimized 750 High


only

db.m4.large 2 6.5 8 EBS-optimized 450 Moderate


only

db.m3 – general-purpose instance classes

db.m3.2xlarge 8 26 30 EBS-optimized 1,000 High


only

db.m3.xlarge 4 13 15 EBS-optimized 500 High


only

db.m3.large 2 6.5 7.5 EBS only — Moderate


db.m3.medium 1 3 3.75 EBS only — Moderate

db.m1 – general-purpose instance classes

db.m1.xlarge 4 4 15 EBS-optimized 450 High


only

db.m1.large 2 2 7.5 EBS-optimized 450 Moderate


only

db.m1.medium 1 1 3.75 EBS only — Moderate

db.m1.small 1 1 1.7 EBS only — Very Low

db.x2iezn – memory-optimized instance classes

db.x2iezn.12xlarge 48 — 1,536 EBS-optimized 19,000 100


only

db.x2iezn.8xlarge 32 — 1,024 EBS-optimized 12,000 75


only

db.x2iezn.6xlarge 24 — 768 EBS-optimized Up to 9,500 50


only

db.x2iezn.4xlarge 16 — 512 EBS-optimized Up to 4,750 Up to 25


only

db.x2iezn.2xlarge 8 — 256 EBS-optimized Up to 3,170 Up to 25


only

db.x2iedn – memory-optimized instance classes with SSD storage

db.x2iedn.32xlarge 128 — 4,096 2 x 1900 NVMe 80,000 100


SSD

db.x2iedn.24xlarge 96 — 3,072 2 x 1425 NVMe 60,000 75


SSD

db.x2iedn.16xlarge 64 — 2,048 1 x 1900 NVMe 40,000 50


SSD

db.x2iedn.8xlarge 32 — 1,024 1 x 950 NVMe 20,000 25


SSD

db.x2iedn.4xlarge 16 — 512 1 x 475 NVMe Up to 20,000 Up to 25


SSD

db.x2iedn.2xlarge 8 — 256 1 x 237 NVMe Up to 20,000 Up to 25


SSD

db.x2iedn.xlarge 4 — 128 1 x 118 NVMe Up to 20,000 Up to 25


SSD

db.x2idn – memory-optimized instance classes with SSD storage


db.x2idn.32xlarge 128 — 2,048 2 x 1900 NVMe 80,000 100


SSD

db.x2idn.24xlarge 96 — 1,536 2 x 1425 NVMe 60,000 75


SSD

db.x2idn.16xlarge 64 — 1,024 1 x 1900 NVMe 40,000 50


SSD

db.x2g – memory-optimized instance classes

db.x2g.16xlarge 64 — 1024 EBS-optimized 19,000 25


only

db.x2g.12xlarge 48 — 768 EBS-optimized 14,250 20


only

db.x2g.8xlarge 32 — 512 EBS-optimized 9,500 12


only

db.x2g.4xlarge 16 — 256 EBS-optimized 4,750 Up to 10


only

db.x2g.2xlarge 8 — 128 EBS-optimized Up to 4,750 Up to 10


only

db.x2g.xlarge 4 — 64 EBS-optimized Up to 4,750 Up to 10


only

db.x2g.large 2 — 32 EBS-optimized Up to 4,750 Up to 10


only

db.z1d – memory-optimized instance classes with SSD storage

db.z1d.12xlarge 48 271 384 2 x 900 NVMe 14,000 25


SSD

db.z1d.6xlarge 24 134 192 1 x 900 NVMe 7,000 10


SSD

db.z1d.3xlarge 12 75 96 1 x 450 NVMe 3,500 Up to 10


SSD

db.z1d.2xlarge 8 53 64 1 x 300 NVMe 2,333 Up to 10


SSD

db.z1d.xlarge* 4 28 32 1 x 150 NVMe Up to 2,333 Up to 10


SSD

db.z1d.large* 2 15 16 1 x 75 NVMe SSD Up to 2,333 Up to 10

db.x1e – memory-optimized instance classes

db.x1e.32xlarge 128 340 3,904 EBS-optimized 14,000 25


only


db.x1e.16xlarge 64 179 1,952 EBS-optimized 7,000 10


only

db.x1e.8xlarge 32 91 976 EBS-optimized 3,500 Up to 10


only

db.x1e.4xlarge 16 47 488 EBS-optimized 1,750 Up to 10


only

db.x1e.2xlarge 8 23 244 EBS-optimized 1,000 Up to 10


only

db.x1e.xlarge 4 12 122 EBS-optimized 500 Up to 10


only

db.x1 – memory-optimized instance classes

db.x1.32xlarge 128 349 1,952 EBS-optimized 14,000 25


only

db.x1.16xlarge 64 174.5 976 EBS-optimized 7,000 10


only

db.r7g – memory-optimized instance classes

db.r7g.16xlarge 64 — 512 EBS-optimized 20,000 30


only

db.r7g.12xlarge 48 — 384 EBS-optimized 15,000 22.5


only

db.r7g.8xlarge 32 — 256 EBS-optimized 10,000 15


only

db.r7g.4xlarge 16 — 128 EBS-optimized Up to 10,000 Up to 15


only

db.r7g.2xlarge* 8 — 64 EBS-optimized Up to 10,000 Up to 15


only

db.r7g.xlarge* 4 — 32 EBS-optimized Up to 10,000 Up to 12.5


only

db.r7g.large* 2 — 16 EBS-optimized Up to 10,000 Up to 12.5


only

db.r6g – memory-optimized instance classes

db.r6g.16xlarge 64 — 512 EBS-optimized 19,000 25


only

db.r6g.12xlarge 48 — 384 EBS-optimized 13,500 20


only

db.r6g.8xlarge 32 — 256 EBS-optimized 9,000 12


only


db.r6g.4xlarge 16 — 128 EBS-optimized 4,750 Up to 10


only

db.r6g.2xlarge* 8 — 64 EBS-optimized Up to 4,750 Up to 10


only

db.r6g.xlarge* 4 — 32 EBS-optimized Up to 4,750 Up to 10


only

db.r6g.large* 2 — 16 EBS-optimized Up to 4,750 Up to 10


only

db.r6gd – memory-optimized instance classes with SSD storage

db.r6gd.16xlarge 64 — 512 2 x 1900 NVMe 19,000 25


SSD

db.r6gd.12xlarge 48 — 384 2 x 1425 NVMe 13,500 20


SSD

db.r6gd.8xlarge 32 — 256 1 x 1900 NVMe 9,000 12


SSD

db.r6gd.4xlarge 16 — 128 1 x 950 NVMe 4,750 Up to 10


SSD

db.r6gd.2xlarge 8 — 64 1 x 474 NVMe Up to 4,750 Up to 10


SSD

db.r6gd.xlarge 4 — 32 1 x 237 NVMe Up to 4,750 Up to 10


SSD

db.r6gd.large 2 — 16 1 x 118 NVMe Up to 4,750 Up to 10


SSD

db.r6id – memory-optimized instance classes with SSD storage

db.r6id.32xlarge 128 — 1,024 4x1900 NVMe 40,000 50


SSD

db.r6id.24xlarge 96 — 768 4x1425 NVMe 30,000 37.5


SSD

db.r6id.16xlarge 64 — 512 2x1900 NVMe 20,000 25


SSD

db.r6id.12xlarge 48 — 384 2x1425 NVMe 15,000 18.75


SSD

db.r6id.8xlarge 32 — 256 1x1900 NVMe 10,000 12.5


SSD

db.r6id.4xlarge* 16 — 128 1x950 NVMe SSD Up to 10,000 Up to 12.5

db.r6id.2xlarge* 8 — 64 1x474 NVMe SSD Up to 10,000 Up to 12.5

db.r6id.xlarge* 4 — 32 1x237 NVMe SSD Up to 10,000 Up to 12.5


db.r6id.large* 2 — 16 1x118 NVMe SSD Up to 10,000 Up to 12.5

db.r6i – memory-optimized instance classes

db.r6i.32xlarge 128 — 1,024 EBS-optimized 40,000 50


only

db.r6i.24xlarge 96 — 768 EBS-optimized 30,000 37.5


only

db.r6i.16xlarge 64 — 512 EBS-optimized 20,000 25


only

db.r6i.12xlarge 48 — 384 EBS-optimized 15,000 18.75


only

db.r6i.8xlarge 32 — 256 EBS-optimized 10,000 12.5


only

db.r6i.4xlarge* 16 — 128 EBS-optimized Up to 10,000 Up to 12.5


only

db.r6i.2xlarge* 8 — 64 EBS-optimized Up to 10,000 Up to 12.5


only

db.r6i.xlarge* 4 — 32 EBS-optimized Up to 10,000 Up to 12.5


only

db.r6i.large* 2 — 16 EBS-optimized Up to 10,000 Up to 12.5


only

db.r5d – memory-optimized instance classes with SSD storage

db.r5d.24xlarge 96 347 768 4 x 900 NVMe 19,000 25


SSD

db.r5d.16xlarge 64 264 512 4 x 600 NVMe 13,600 20


SSD

db.r5d.12xlarge 48 173 384 2 x 900 NVMe 9,500 10


SSD

db.r5d.8xlarge 32 132 256 2 x 600 NVMe 6,800 10


SSD

db.r5d.4xlarge 16 71 128 2 x 300 NVMe 4,750 Up to 10


SSD

db.r5d.2xlarge* 8 38 64 1 x 300 NVMe Up to 4,750 Up to 10


SSD

db.r5d.xlarge* 4 19 32 1 x 150 NVMe Up to 4,750 Up to 10


SSD

db.r5d.large* 2 10 16 1 x 75 NVMe SSD Up to 4,750 Up to 10

db.r5b – memory-optimized instance classes


db.r5b.24xlarge 96 347 768 EBS-optimized 60,000 25


only

db.r5b.16xlarge 64 264 512 EBS-optimized 40,000 20


only

db.r5b.12xlarge 48 173 384 EBS-optimized 30,000 10


only

db.r5b.8xlarge 32 132 256 EBS-optimized 20,000 10


only

db.r5b.4xlarge 16 71 128 EBS-optimized 10,000 Up to 10


only

db.r5b.2xlarge* 8 38 64 EBS-optimized Up to 10,000 Up to 10


only

db.r5b.xlarge* 4 19 32 EBS-optimized Up to 10,000 Up to 10


only

db.r5b.large* 2 10 16 EBS-optimized Up to 10,000 Up to 10


only

db.r5b – Oracle memory-optimized instance classes preconfigured for high memory, storage, and I/O

db.r5b.8xlarge.tpc2.mem3x 32 — 768 EBS-optimized 60,000 25


only

db.r5b.6xlarge.tpc2.mem4x 24 — 768 EBS-optimized 60,000 25


only

db.r5b.4xlarge.tpc2.mem4x 16 — 512 EBS-optimized 40,000 20


only

db.r5b.4xlarge.tpc2.mem3x 16 — 384 EBS-optimized 30,000 10


only

db.r5b.4xlarge.tpc2.mem2x 16 — 256 EBS-optimized 20,000 10


only

db.r5b.2xlarge.tpc2.mem8x 8 — 512 EBS-optimized 40,000 20


only

db.r5b.2xlarge.tpc2.mem4x 8 — 256 EBS-optimized 20,000 10


only

db.r5b.2xlarge.tpc1.mem2x 8 — 128 EBS-optimized 10,000 Up to 10


only

db.r5b.xlarge.tpc2.mem4x 4 — 128 EBS-optimized 10,000 Up to 10


only

db.r5b.xlarge.tpc2.mem2x 4 — 64 EBS-optimized Up to 10,000 Up to 10


only


db.r5b.large.tpc1.mem2x 2 — 32 EBS-optimized Up to 10,000 Up to 10


only

db.r5 – memory-optimized instance classes

db.r5.24xlarge 96 347 768 EBS-optimized 19,000 25


only

db.r5.16xlarge 64 264 512 EBS-optimized 13,600 20


only

db.r5.12xlarge 48 173 384 EBS-optimized 9,500 12


only

db.r5.8xlarge 32 132 256 EBS-optimized 6,800 10


only

db.r5.4xlarge 16 71 128 EBS-optimized 4,750 Up to 10


only

db.r5.2xlarge* 8 38 64 EBS-optimized Up to 4,750 Up to 10


only

db.r5.xlarge* 4 19 32 EBS-optimized Up to 4,750 Up to 10


only

db.r5.large* 2 10 16 EBS-optimized Up to 4,750 Up to 10


only

db.r5 – Oracle memory-optimized instance classes preconfigured for high memory, storage, and I/O

db.r5.12xlarge.tpc2.mem2x 48 — 768 EBS-optimized 19,000 25


only

db.r5.8xlarge.tpc2.mem3x 32 — 768 EBS-optimized 19,000 25


only

db.r5.6xlarge.tpc2.mem4x 24 — 768 EBS-optimized 19,000 25


only

db.r5.4xlarge.tpc2.mem4x 16 — 512 EBS-optimized 13,600 20


only

db.r5.4xlarge.tpc2.mem3x 16 — 384 EBS-optimized 9,500 10


only

db.r5.4xlarge.tpc2.mem2x 16 — 256 EBS-optimized 6,800 10


only

db.r5.2xlarge.tpc2.mem8x 8 — 512 EBS-optimized 13,600 20


only

db.r5.2xlarge.tpc2.mem4x 8 — 256 EBS-optimized 6,800 10


only

db.r5.2xlarge.tpc1.mem2x 8 — 128 EBS-optimized 4,750 Up to 10


only


db.r5.xlarge.tpc2.mem4x 4 — 128 EBS-optimized 4,750 Up to 10


only

db.r5.xlarge.tpc2.mem2x 4 — 64 EBS-optimized Up to 4,750 Up to 10


only

db.r5.large.tpc1.mem2x 2 — 32 EBS-optimized Up to 4,750 Up to 10


only

db.r4 – memory-optimized instance classes

db.r4.16xlarge 64 195 488 EBS-optimized 14,000 25


only

db.r4.8xlarge 32 99 244 EBS-optimized 7,000 10


only

db.r4.4xlarge 16 53 122 EBS-optimized 3,500 Up to 10


only

db.r4.2xlarge 8 27 61 EBS-optimized 1,700 Up to 10


only

db.r4.xlarge 4 13.5 30.5 EBS-optimized 850 Up to 10


only

db.r4.large 2 7 15.25 EBS-optimized 425 Up to 10


only

db.r3 – memory-optimized instance classes

db.r3.8xlarge 32 104 244 EBS only — 10

db.r3.4xlarge 16 52 122 EBS-optimized 2,000 High


only

db.r3.2xlarge 8 26 61 EBS-optimized 1,000 High


only

db.r3.xlarge 4 13 30.5 EBS-optimized 500 Moderate


only

db.r3.large 2 6.5 15.25 EBS-optimized — Moderate


only

db.t4g – burstable-performance instance classes

db.t4g.2xlarge* 8 — 32 EBS-optimized Up to 2,780 Up to 5


only

db.t4g.xlarge* 4 — 16 EBS-optimized Up to 2,780 Up to 5


only

db.t4g.large* 2 — 8 EBS-optimized Up to 2,780 Up to 5


only


db.t4g.medium* 2 — 4 EBS-optimized Up to 2,085 Up to 5


only

db.t4g.small* 2 — 2 EBS-optimized Up to 2,085 Up to 5


only

db.t4g.micro* 2 — 1 EBS-optimized Up to 2,085 Up to 5


only

db.t3 – burstable-performance instance classes

db.t3.2xlarge* 8 Variable 32 EBS-optimized Up to 2,048 Up to 5


only

db.t3.xlarge* 4 Variable 16 EBS-optimized Up to 2,048 Up to 5


only

db.t3.large* 2 Variable 8 EBS-optimized Up to 2,048 Up to 5


only

db.t3.medium* 2 Variable 4 EBS-optimized Up to 1,536 Up to 5


only

db.t3.small* 2 Variable 2 EBS-optimized Up to 1,536 Up to 5


only

db.t3.micro* 2 Variable 1 EBS-optimized Up to 1,536 Up to 5


only

db.t2 – burstable-performance instance classes

db.t2.2xlarge 8 Variable 32 EBS only — Moderate

db.t2.xlarge 4 Variable 16 EBS only — Moderate

db.t2.large 2 Variable 8 EBS only — Moderate

db.t2.medium 2 Variable 4 EBS only — Moderate

db.t2.small 1 Variable 2 EBS only — Low

db.t2.micro 1 Variable 1 EBS only — Low

* These DB instance classes can support maximum performance for 30 minutes at least once every 24
hours. For more information on baseline performance of the underlying EC2 instance types, see Amazon
EBS-optimized instances in the Amazon EC2 User Guide for Linux Instances.

** The r3.8xlarge DB instance class doesn't have dedicated EBS bandwidth and therefore doesn't offer
EBS optimization. For this instance class, network traffic and Amazon EBS traffic share the same 10-
gigabit network interface.


Amazon RDS DB instance storage


DB instances for Amazon RDS for MySQL, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server use
Amazon Elastic Block Store (Amazon EBS) volumes for database and log storage.

In some cases, your database workload might not be able to achieve 100 percent of the IOPS that you
have provisioned. For more information, see Factors that affect storage performance (p. 108).

For more information about instance storage pricing, see Amazon RDS pricing.

Amazon RDS storage types


Amazon RDS provides three storage types: General Purpose SSD (also known as gp2 and gp3),
Provisioned IOPS SSD (also known as io1), and magnetic (also known as standard). They differ in
performance characteristics and price, which means that you can tailor your storage performance and
cost to the needs of your database workload. You can create MySQL, MariaDB, Oracle, and PostgreSQL
RDS DB instances with up to 64 tebibytes (TiB) of storage. You can create SQL Server RDS DB instances
with up to 16 TiB of storage. For this amount of storage, use the Provisioned IOPS SSD and General
Purpose SSD storage types.

The following list briefly describes the three storage types:

• General Purpose SSD – General Purpose SSD volumes offer cost-effective storage that is ideal for a
broad range of workloads running on medium-sized DB instances. General Purpose storage is best
suited for development and testing environments.

For more information about General Purpose SSD storage, including the storage size ranges, see
General Purpose SSD storage (p. 102).
• Provisioned IOPS SSD – Provisioned IOPS storage is designed to meet the needs of I/O-intensive
workloads, particularly database workloads, that require low I/O latency and consistent I/O
throughput. Provisioned IOPS storage is best suited for production environments.

For more information about Provisioned IOPS storage, including the storage size ranges, see
Provisioned IOPS SSD storage (p. 104).
• Magnetic – Amazon RDS also supports magnetic storage for backward compatibility. We recommend
that you use General Purpose SSD or Provisioned IOPS SSD for any new storage needs. The maximum
amount of storage allowed for DB instances on magnetic storage is less than that of the other storage
types. For more information, see Magnetic storage (p. 107).

When you select General Purpose SSD or Provisioned IOPS SSD, depending on the engine selected and
the amount of storage requested, Amazon RDS automatically stripes across multiple volumes to enhance
performance, as shown in the following table.

Database engine | Amazon RDS storage size | Number of volumes provisioned

MariaDB, MySQL, and PostgreSQL | Less than 400 GiB | 1
MariaDB, MySQL, and PostgreSQL | Between 400 and 64,000 GiB | 4
Oracle | Less than 200 GiB | 1
Oracle | Between 200 and 64,000 GiB | 4
SQL Server | Any | 1

When you modify a General Purpose SSD or Provisioned IOPS SSD volume, it goes through a sequence of
states. While the volume is in the optimizing state, your volume performance is in between the source
and target configuration specifications. Transitional volume performance will be no less than the lower
of the two specifications. For more information on volume modifications, see Monitor the progress of
volume modifications in the Amazon EC2 User Guide.
Important
When you modify an instance's storage so that it goes from one volume to four volumes, or
when you modify an instance that uses magnetic storage, Amazon RDS doesn't use the Elastic
Volumes feature. Instead, Amazon RDS provisions new volumes and transparently moves the
data from the old volume to the new volumes. This operation consumes a significant amount of
IOPS and throughput on both the old and new volumes. Depending on the size of the volume
and the amount of database workload present during the modification, the operation can
significantly increase I/O latency and take several hours to complete, while the RDS instance
remains in the Modifying state.

General Purpose SSD storage


General Purpose SSD storage offers cost-effective storage that is acceptable for most database
workloads that aren't latency sensitive.
Note
DB instances that use General Purpose SSD storage can experience much longer latency after
read replica creation, Multi-AZ conversion, and DB snapshot restoration than instances that
use Provisioned IOPS storage. If you need a DB instance with minimum latency after these
operations, we recommend using Provisioned IOPS SSD storage (p. 104).

Amazon RDS offers two types of General Purpose SSD storage: gp2 storage (p. 102) and gp3
storage (p. 103).

gp2 storage
When your applications don't need high storage performance, you can use General Purpose SSD gp2
storage. Baseline I/O performance for gp2 storage is 3 IOPS for each GiB, with a minimum of 100
IOPS. This relationship means that larger volumes have better performance. For example, baseline
performance for one 100-GiB volume is 300 IOPS. Baseline performance for one 1,000 GiB volume is
3,000 IOPS. Maximum baseline performance for one gp2 volume (5334 GiB and greater) is 16,000 IOPS.
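
To make the arithmetic concrete, the following Python sketch computes the gp2 baseline IOPS for a given
volume size; the helper function is hypothetical and only restates the 3 IOPS per GiB rule with its 100 IOPS
floor and 16,000 IOPS cap.

def gp2_baseline_iops(size_gib: int) -> int:
    # 3 IOPS per GiB, with a floor of 100 IOPS and a cap of 16,000 IOPS.
    return min(max(3 * size_gib, 100), 16_000)

# Examples from the text: 100 GiB -> 300 IOPS, 1,000 GiB -> 3,000 IOPS,
# and 5,334 GiB or larger -> 16,000 IOPS.
for size_gib in (20, 100, 1_000, 5_334):
    print(size_gib, gp2_baseline_iops(size_gib))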

Individual gp2 volumes below 1,000 GiB in size also have the ability to burst to 3,000 IOPS for extended
periods of time. Volume I/O credit balance determines burst performance. For more information about
volume I/O credits, see I/O credits and burst performance in the Amazon EC2 User Guide. For a more
detailed description of how baseline performance and I/O credit balance affect performance, see the
post Understanding burst vs. baseline performance with Amazon RDS and gp2 on the AWS Database
Blog.

Many workloads never deplete the burst balance. However, some workloads can exhaust the 3,000
IOPS burst storage credit balance, so you should plan your storage capacity to meet the needs of your
workloads.

For gp2 volumes larger than 1,000 GiB, the baseline performance is greater than the burst performance.
For such volumes, burst is irrelevant because the baseline performance is better than the 3,000 IOPS
burst performance. However, for DB instances of certain engines and sizes, storage is striped across four
volumes, providing four times the baseline throughput and four times the burst IOPS of a single volume.
Storage performance for gp2 volumes on Amazon RDS DB engines, including the threshold, is shown in
the following table.

DB engine | RDS storage size | Range of baseline IOPS | Range of baseline throughput | Burst IOPS

MariaDB, MySQL, and PostgreSQL | Between 20 and 399 GiB | 100–1,197 IOPS | 128–250 MiB/s | 3,000
MariaDB, MySQL, and PostgreSQL | Between 400 and 1,335 GiB | 1,200–4,005 IOPS | 500–1,000 MiB/s | 12,000
MariaDB, MySQL, and PostgreSQL | Between 1,336 and 3,999 GiB | 4,008–11,997 IOPS | 1,000 MiB/s | 12,000
MariaDB, MySQL, and PostgreSQL | Between 4,000 and 65,536 GiB | 12,000–64,000 IOPS | 1,000 MiB/s | N/A*
Oracle | Between 20 and 199 GiB | 100–597 IOPS | 128–250 MiB/s | 3,000
Oracle | Between 200 and 1,335 GiB | 600–4,005 IOPS | 500–1,000 MiB/s | 12,000
Oracle | Between 1,336 and 3,999 GiB | 4,008–11,997 IOPS | 1,000 MiB/s | 12,000
Oracle | Between 4,000 and 65,536 GiB | 12,000–64,000 IOPS | 1,000 MiB/s | N/A*
SQL Server | Between 20 and 333 GiB | 100–999 IOPS | 128–250 MiB/s | 3,000
SQL Server | Between 334 and 999 GiB | 1,002–2,997 IOPS | 250 MiB/s | 3,000
SQL Server | Between 1,000 and 16,384 GiB | 3,000–16,000 IOPS | 250 MiB/s | N/A*

* The baseline performance of the volume exceeds the maximum burst performance.

gp3 storage
By using General Purpose SSD gp3 storage volumes, you can customize storage performance
independently of storage capacity. Storage performance is the combination of I/O operations per second
(IOPS) and how fast the storage volume can perform reads and writes (storage throughput). On gp3
storage volumes, Amazon RDS provides a baseline storage performance of 3000 IOPS and 125 MiB/s.

For every RDS DB engine except RDS for SQL Server, when the storage size for gp3 volumes reaches a
certain threshold, the baseline storage performance increases to 12,000 IOPS and 500 MiB/s. This is
because of volume striping, where the storage uses four volumes instead of one. RDS for SQL Server
doesn't support volume striping, and therefore doesn't have a threshold value.
Note
General Purpose SSD gp3 storage is supported on Single-AZ and Multi-AZ DB instances, but
isn't supported on Multi-AZ DB clusters. For more information, see Configuring and managing a
Multi-AZ deployment (p. 492) and Multi-AZ DB cluster deployments (p. 499).


Storage performance for gp3 volumes on Amazon RDS DB engines, including the threshold, is shown in
the following table.

DB engine | Storage size | Baseline storage performance | Range of Provisioned IOPS | Range of provisioned storage throughput

MariaDB, MySQL, and PostgreSQL | Less than 400 GiB | 3,000 IOPS/125 MiB/s | N/A | N/A
MariaDB, MySQL, and PostgreSQL | 400 GiB and higher | 12,000 IOPS/500 MiB/s | 12,000–64,000 IOPS | 500–4,000 MiB/s
Oracle | Less than 200 GiB | 3,000 IOPS/125 MiB/s | N/A | N/A
Oracle | 200 GiB and higher | 12,000 IOPS/500 MiB/s | 12,000–64,000 IOPS | 500–4,000 MiB/s
SQL Server | 20 GiB–16 TiB | 3,000 IOPS/125 MiB/s | 3,000–16,000 IOPS | 125–1,000 MiB/s

For every DB engine except RDS for SQL Server, you can provision additional IOPS and storage
throughput when storage size is at or above the threshold value. For RDS for SQL Server, you can
provision additional IOPS and storage throughput for any available storage size. For all DB engines, you
pay for only the additional provisioned storage performance. For more information, see Amazon RDS
pricing.

Although the added Provisioned IOPS and storage throughput aren't dependent on the storage size, they
are related to each other. When you raise the IOPS above 32,000 for MariaDB and MySQL, the storage
throughput value automatically increases from 500 MiB/s. For example, when you set the IOPS to 40,000
on RDS for MySQL, the storage throughput must be at least 625 MiB/s. The automatic increase doesn't
happen for Oracle, PostgreSQL, and SQL Server DB instances.

Storage performance values for gp3 volumes on RDS have the following constraints:

• The maximum ratio of storage throughput to IOPS is 0.25 for all supported DB engines.
• The minimum ratio of IOPS to allocated storage (in GiB) is 0.5 on RDS for SQL Server. There is no
minimum ratio for the other supported DB engines.
• The maximum ratio of IOPS to allocated storage is 500 for all supported DB engines.
• If you're using storage autoscaling, the same ratios between IOPS and maximum storage threshold (in
GiB) also apply.

For more information on storage autoscaling, see Managing capacity automatically with Amazon RDS
storage autoscaling (p. 480).
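
For example, the following boto3 sketch (the instance identifier and values are placeholders) converts an
instance to gp3 storage and provisions additional IOPS and throughput; the assertions simply restate the
ratio constraints listed above and aren't part of the RDS API.

import boto3

rds = boto3.client("rds")

allocated_gib = 800
iops = 16_000
throughput_mibps = 1_000

# Ratio constraints from this section (a non-SQL Server engine is assumed):
assert throughput_mibps / iops <= 0.25        # maximum throughput-to-IOPS ratio
assert iops / allocated_gib <= 500            # maximum IOPS-to-storage ratio

rds.modify_db_instance(
    DBInstanceIdentifier="my-mysql-instance",  # placeholder
    StorageType="gp3",
    AllocatedStorage=allocated_gib,
    Iops=iops,
    StorageThroughput=throughput_mibps,
    ApplyImmediately=True,
)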

Provisioned IOPS SSD storage


For a production application that requires fast and consistent I/O performance, we recommend
Provisioned IOPS storage. Provisioned IOPS storage is a storage type that delivers predictable
performance, and consistently low latency. Provisioned IOPS storage is optimized for online transaction
processing (OLTP) workloads that require consistent performance. Provisioned IOPS helps performance
tuning of these workloads.


When you create a DB instance, you specify the IOPS rate and the size of the volume. Amazon RDS
provides that IOPS rate for the DB instance until you change it.

io1 storage
For I/O-intensive workloads, you can use Provisioned IOPS SSD io1 storage and achieve up to 256,000
I/O operations per second (IOPS). The throughput of io1 volumes varies based on the amount of IOPS
provisioned per volume and on the size of the I/O operations being run. For more information about the
throughput of io1 volumes, see Provisioned IOPS volumes in the Amazon EC2 User Guide.

The following table shows the range of Provisioned IOPS and maximum throughput for each database
engine and storage size range.

Database engine | Range of storage size | Range of Provisioned IOPS | Maximum throughput

MariaDB, MySQL, and PostgreSQL | Between 100 and 399 GiB | 1,000–19,950 IOPS | 500 MiB/s
MariaDB, MySQL, and PostgreSQL | Between 400 and 65,536 GiB | 1,000–256,000 IOPS | 4,000 MiB/s
Oracle | Between 100 and 199 GiB | 1,000–9,950 IOPS | 500 MiB/s
Oracle | Between 200 and 65,536 GiB | 1,000–256,000 IOPS | 4,000 MiB/s
SQL Server | Between 20 and 16,384 GiB | 1,000–64,000 IOPS | 1,000 MiB/s

Note
For SQL Server, the maximum 64,000 IOPS is guaranteed only on Nitro-based instances that are
on the m5*, m6i, r5*, r6i, and z1d instance types. Other instance types guarantee performance
up to 32,000 IOPS.
For Oracle, you can provision the maximum 256,000 IOPS only on the r5b instance type.

The IOPS and storage size ranges have the following constraints:

• The ratio of IOPS to allocated storage (in GiB) must be from 1–50 on RDS for SQL Server, and 0.5–50
on other RDS DB engines.
• If you're using storage autoscaling, the same ratios between IOPS and maximum storage threshold (in
GiB) also apply.

For more information on storage autoscaling, see Managing capacity automatically with Amazon RDS
storage autoscaling (p. 480).
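
As a minimal sketch (identifiers and values are placeholders), creating an instance with Provisioned IOPS
storage specifies both the volume size and the IOPS rate; the assertion restates the ratio constraint above
for engines other than RDS for SQL Server.

import boto3

rds = boto3.client("rds")

allocated_gib = 400
iops = 12_000
assert 0.5 <= iops / allocated_gib <= 50      # IOPS-to-storage ratio for non-SQL Server engines

rds.create_db_instance(
    DBInstanceIdentifier="orders-db",          # placeholder
    Engine="postgres",
    DBInstanceClass="db.m5.2xlarge",
    MasterUsername="admin_user",               # placeholder
    ManageMasterUserPassword=True,             # store the master password in Secrets Manager
    AllocatedStorage=allocated_gib,
    StorageType="io1",
    Iops=iops,
)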

Combining Provisioned IOPS storage with Multi-AZ deployments or read replicas
For production OLTP use cases, we recommend that you use Multi-AZ deployments for enhanced fault
tolerance with Provisioned IOPS storage for fast and predictable performance.

You can also use Provisioned IOPS SSD storage with read replicas for MySQL, MariaDB, or PostgreSQL.
The type of storage for a read replica is independent of that on the primary DB instance. For example,
you might use General Purpose SSD for read replicas with a primary DB instance that uses Provisioned
IOPS SSD storage to reduce costs. However, your read replica's performance in this case might differ
from that of a configuration where both the primary DB instance and the read replicas use Provisioned
IOPS SSD storage.

Provisioned IOPS storage costs


With Provisioned IOPS storage, you are charged for the provisioned resources whether or not you use
them in a given month.

For more information about pricing, see Amazon RDS pricing.

Getting the best performance from Amazon RDS Provisioned IOPS SSD storage
If your workload is I/O constrained, using Provisioned IOPS SSD storage can increase the number of I/O
requests that the system can process concurrently. Increased concurrency allows for decreased latency
because I/O requests spend less time in a queue. Decreased latency allows for faster database commits,
which improves response time and allows for higher database throughput.

Provisioned IOPS SSD storage provides a way to reserve I/O capacity by specifying IOPS. However, as
with any other system capacity attribute, its maximum throughput under load is constrained by the
resource that is consumed first. That resource might be network bandwidth, CPU, memory, or database
internal resources.

For more information about getting the most out of your Provisioned IOPS volumes, see Amazon EBS
volume performance.

Comparing solid-state drive (SSD) storage types


The following table shows use cases and performance characteristics for the SSD storage volumes used
by Amazon RDS.

Characteristic | Provisioned IOPS (io1) | General Purpose (gp3) | General Purpose (gp2)

Description | Consistent storage performance (IOPS, throughput, latency); designed for latency-sensitive, transactional workloads | Flexibility in provisioning storage, IOPS, and throughput independently; balances price performance for a wide variety of transactional workloads | Provides burstable IOPS; balances price performance for a wide variety of transactional workloads
Use cases | Transactional workloads that require sustained IOPS performance up to 256,000 IOPS | Broad range of workloads running on medium-sized relational databases in development/test environments | Broad range of workloads running on medium-sized relational databases in development/test environments
Latency | Single-digit millisecond, provided consistently 99.9% of the time | Single-digit millisecond, provided consistently 99% of the time | Single-digit millisecond, provided consistently 99% of the time
Volume size | 100 GiB–64 TiB (16 TiB on RDS for SQL Server) | 20 GiB–64 TiB (16 TiB on RDS for SQL Server) | 20 GiB–64 TiB (16 TiB on RDS for SQL Server)
Maximum IOPS | 256,000 (64,000 on RDS for SQL Server) | 64,000 (16,000 on RDS for SQL Server) | 64,000 (16,000 on RDS for SQL Server); you can't provision IOPS directly on gp2 storage, because IOPS varies with the allocated storage size
Maximum throughput | Scales based on Provisioned IOPS up to 4,000 MB/s | Provision additional throughput up to 4,000 MB/s (1,000 MB/s on RDS for SQL Server) | 1,000 MB/s (250 MB/s on RDS for SQL Server)
AWS CLI and RDS API name | io1 | gp3 | gp2

Magnetic storage
Amazon RDS also supports magnetic storage for backward compatibility. We recommend that you
use General Purpose SSD or Provisioned IOPS SSD for any new storage needs. The following are some
limitations for magnetic storage:

• Doesn't allow you to scale storage when using the SQL Server database engine.
• Doesn't support storage autoscaling.
• Doesn't support elastic volumes.
• Limited to a maximum size of 3 TiB.
• Limited to a maximum of 1,000 IOPS.

Monitoring storage performance


Amazon RDS provides several metrics that you can use to determine how your DB instance is performing.
You can view the metrics on the summary page for your instance in the Amazon RDS console.
You can also use Amazon CloudWatch to monitor these metrics. For more information, see Viewing
metrics in the Amazon RDS console (p. 696). Enhanced Monitoring provides more detailed I/O metrics;
for more information, see Monitoring OS metrics with Enhanced Monitoring (p. 797).

The following metrics are useful for monitoring storage for your DB instance:

• IOPS – The number of I/O operations completed each second. This metric is reported as the average
IOPS for a given time interval. Amazon RDS reports read and write IOPS separately on 1-minute
intervals. Total IOPS is the sum of the read and write IOPS. Typical values for IOPS range from zero to
tens of thousands per second.
• Latency – The elapsed time between the submission of an I/O request and its completion. This metric
is reported as the average latency for a given time interval. Amazon RDS reports read and write
latency separately at 1-minute intervals. Typical values for latency are in milliseconds (ms).


• Throughput – The number of bytes each second that are transferred to or from disk. This metric is
reported as the average throughput for a given time interval. Amazon RDS reports read and write
throughput separately on 1-minute intervals using units of megabytes per second (MB/s). Typical
values for throughput range from zero to the I/O channel's maximum bandwidth.
• Queue Depth – The number of I/O requests in the queue waiting to be serviced. These are I/O
requests that have been submitted by the application but have not been sent to the device because
the device is busy servicing other I/O requests. Time spent waiting in the queue is a component of
latency and service time (not available as a metric). This metric is reported as the average queue depth
for a given time interval. Amazon RDS reports queue depth in 1-minute intervals. Typical values for
queue depth range from zero to several hundred.

Measured IOPS values are independent of the size of the individual I/O operation. This means that when
you measure I/O performance, make sure to look at the throughput of the instance, not simply the
number of I/O operations.
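
For example, a sketch like the following (the instance identifier is a placeholder) pulls several of these
metrics from Amazon CloudWatch at 1-minute granularity so that you can compare IOPS against
throughput rather than looking at the operation count alone.

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

def rds_metric_averages(metric_name, instance_id, minutes=60):
    # Return 1-minute averages for an AWS/RDS metric over the recent past.
    now = datetime.utcnow()
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName=metric_name,
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": instance_id}],
        StartTime=now - timedelta(minutes=minutes),
        EndTime=now,
        Period=60,
        Statistics=["Average"],
    )
    return sorted(response["Datapoints"], key=lambda point: point["Timestamp"])

for name in ("ReadIOPS", "WriteIOPS", "ReadThroughput", "WriteThroughput", "DiskQueueDepth"):
    print(name, rds_metric_averages(name, "orders-db")[-5:])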

Factors that affect storage performance


System activities, database workload, and DB instance class can affect storage performance.

System activities
The following system-related activities consume I/O capacity and might reduce DB instance performance
while in progress:

• Multi-AZ standby creation


• Read replica creation
• Changing storage types

Database workload
In some cases, your database or application design results in concurrency issues, locking, or other forms
of database contention. In these cases, you might not be able to use all the provisioned bandwidth
directly. In addition, you might encounter the following workload-related situations:

• The throughput limit of the underlying instance type is reached.


• Queue depth is consistently less than 1 because your application isn't driving enough I/O operations.
• You experience query contention in the database even though some I/O capacity is unused.

In some cases, there isn't a system resource that is at or near a limit, and adding threads doesn't increase
the database transaction rate. In such cases, the bottleneck is most likely contention in the database.
The most common forms are row lock and index page lock contention, but there are many other
possibilities. If this is your situation, seek the advice of a database performance tuning expert.

DB instance class
To get the most performance out of your Amazon RDS DB instance, choose a current generation instance
type with enough bandwidth to support your storage type. For example, you can choose Amazon EBS–
optimized instances and instances with 10-gigabit network connectivity.
Important
Depending on the instance class you're using, you might see lower IOPS performance than
the maximum that you can provision with RDS. For specific information on IOPS performance
for DB instance classes, see Amazon EBS–optimized instances in the Amazon EC2 User Guide.


We recommend that you determine the maximum IOPS for the instance class before setting a
Provisioned IOPS value for your DB instance.

We encourage you to use the latest generation of instances to get the best performance. Previous
generation DB instances can also have lower maximum storage.

Some older 32-bit file systems might have lower storage capacities. To determine the storage capacity of
your DB instance, you can use the describe-valid-db-instance-modifications AWS CLI command.
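
The equivalent API call through boto3 looks roughly like the following (the instance identifier is a
placeholder); it returns the valid storage types, size ranges, and IOPS ranges for that instance.

import boto3

rds = boto3.client("rds")

response = rds.describe_valid_db_instance_modifications(
    DBInstanceIdentifier="orders-db"   # placeholder
)
for option in response["ValidDBInstanceModificationsMessage"]["Storage"]:
    print(option["StorageType"], option.get("StorageSize"), option.get("ProvisionedIops"))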

The following list shows the maximum storage that most DB instance classes can scale to for each
database engine:

• MariaDB – 64 TiB
• Microsoft SQL Server – 16 TiB
• MySQL – 64 TiB
• Oracle – 64 TiB
• PostgreSQL – 64 TiB

The following table shows some exceptions for maximum storage (in TiB). All RDS for Microsoft SQL
Server DB instances have a maximum storage of 16 TiB, so there are no entries for SQL Server.

Instance class MariaDB MySQL Oracle PostgreSQL

db.m3 – standard instance classes

db.m3.2xlarge N/A 6 N/A 6

db.m3.xlarge N/A 6 N/A 6

db.m3.large N/A 6 N/A 6

db.m3.medium N/A 32 N/A 32

db.t4g – burstable-performance instance classes

db.t4g.medium 16 16 N/A 32

db.t4g.small 16 16 N/A 16

db.t4g.micro 6 6 N/A 6

db.t3 – burstable-performance instance classes

db.t3.medium 16 16 32 32

db.t3.small 16 16 32 16

db.t3.micro 6 6 32 6

db.t2 – burstable-performance instance classes

db.t2.medium 32 32 N/A 32

db.t2.small 16 16 N/A 16

db.t2.micro 6 6 N/A 6

For more details about all instance classes supported, see Previous generation DB instances.


Regions, Availability Zones, and Local Zones


Amazon cloud computing resources are hosted in multiple locations worldwide. These locations
are composed of AWS Regions, Availability Zones, and Local Zones. Each AWS Region is a separate
geographic area. Each AWS Region has multiple, isolated locations known as Availability Zones.
Note
For information about finding the Availability Zones for an AWS Region, see Describe your
Availability Zones in the Amazon EC2 documentation.

By using Local Zones, you can place resources, such as compute and storage, in multiple locations closer
to your users. Amazon RDS enables you to place resources, such as DB instances, and data in multiple
locations. Resources aren't replicated across AWS Regions unless you do so specifically.

Amazon operates state-of-the-art, highly available data centers. Although rare, failures can occur that
affect the availability of DB instances that are in the same location. If you host all your DB instances in
one location that is affected by such a failure, none of your DB instances will be available.

It is important to remember that each AWS Region is completely independent. Any Amazon RDS activity
you initiate (for example, creating database instances or listing available database instances) runs only in
your current default AWS Region. The default AWS Region can be changed in the console, or by setting
the AWS_DEFAULT_REGION environment variable. Or it can be overridden by using the --region
parameter with the AWS Command Line Interface (AWS CLI). For more information, see Configuring the
AWS Command Line Interface, specifically the sections about environment variables and command line
options.

Amazon RDS supports special AWS Regions called AWS GovCloud (US). These are designed to allow
US government agencies and customers to move more sensitive workloads into the cloud. The AWS
GovCloud (US) Regions address the US government's specific regulatory and compliance requirements.
For more information, see What is AWS GovCloud (US)?

To create or work with an Amazon RDS DB instance in a specific AWS Region, use the corresponding
regional service endpoint.


AWS Regions
Each AWS Region is designed to be isolated from the other AWS Regions. This design achieves the
greatest possible fault tolerance and stability.

When you view your resources, you see only the resources that are tied to the AWS Region that you
specified. This is because AWS Regions are isolated from each other, and we don't automatically replicate
resources across AWS Regions.

Region availability
The following table shows the AWS Regions where Amazon RDS is currently available and the endpoint
for each Region.

Region name | Region | Endpoints | Protocol

US East (Ohio) | us-east-2 | rds.us-east-2.amazonaws.com, rds-fips.us-east-2.api.aws, rds.us-east-2.api.aws, rds-fips.us-east-2.amazonaws.com | HTTPS
US East (N. Virginia) | us-east-1 | rds.us-east-1.amazonaws.com, rds-fips.us-east-1.api.aws, rds-fips.us-east-1.amazonaws.com, rds.us-east-1.api.aws | HTTPS
US West (N. California) | us-west-1 | rds.us-west-1.amazonaws.com, rds.us-west-1.api.aws, rds-fips.us-west-1.amazonaws.com, rds-fips.us-west-1.api.aws | HTTPS
US West (Oregon) | us-west-2 | rds.us-west-2.amazonaws.com, rds-fips.us-west-2.amazonaws.com, rds.us-west-2.api.aws, rds-fips.us-west-2.api.aws | HTTPS
Africa (Cape Town) | af-south-1 | rds.af-south-1.amazonaws.com, rds.af-south-1.api.aws | HTTPS
Asia Pacific (Hong Kong) | ap-east-1 | rds.ap-east-1.amazonaws.com, rds.ap-east-1.api.aws | HTTPS
Asia Pacific (Hyderabad) | ap-south-2 | rds.ap-south-2.amazonaws.com, rds.ap-south-2.api.aws | HTTPS
Asia Pacific (Jakarta) | ap-southeast-3 | rds.ap-southeast-3.amazonaws.com, rds.ap-southeast-3.api.aws | HTTPS
Asia Pacific (Melbourne) | ap-southeast-4 | rds.ap-southeast-4.amazonaws.com, rds.ap-southeast-4.api.aws | HTTPS
Asia Pacific (Mumbai) | ap-south-1 | rds.ap-south-1.amazonaws.com, rds.ap-south-1.api.aws | HTTPS
Asia Pacific (Osaka) | ap-northeast-3 | rds.ap-northeast-3.amazonaws.com, rds.ap-northeast-3.api.aws | HTTPS
Asia Pacific (Seoul) | ap-northeast-2 | rds.ap-northeast-2.amazonaws.com, rds.ap-northeast-2.api.aws | HTTPS
Asia Pacific (Singapore) | ap-southeast-1 | rds.ap-southeast-1.amazonaws.com, rds.ap-southeast-1.api.aws | HTTPS
Asia Pacific (Sydney) | ap-southeast-2 | rds.ap-southeast-2.amazonaws.com, rds.ap-southeast-2.api.aws | HTTPS
Asia Pacific (Tokyo) | ap-northeast-1 | rds.ap-northeast-1.amazonaws.com, rds.ap-northeast-1.api.aws | HTTPS
Canada (Central) | ca-central-1 | rds.ca-central-1.amazonaws.com, rds.ca-central-1.api.aws, rds-fips.ca-central-1.api.aws, rds-fips.ca-central-1.amazonaws.com | HTTPS
Europe (Frankfurt) | eu-central-1 | rds.eu-central-1.amazonaws.com, rds.eu-central-1.api.aws | HTTPS
Europe (Ireland) | eu-west-1 | rds.eu-west-1.amazonaws.com, rds.eu-west-1.api.aws | HTTPS
Europe (London) | eu-west-2 | rds.eu-west-2.amazonaws.com, rds.eu-west-2.api.aws | HTTPS
Europe (Milan) | eu-south-1 | rds.eu-south-1.amazonaws.com, rds.eu-south-1.api.aws | HTTPS
Europe (Paris) | eu-west-3 | rds.eu-west-3.amazonaws.com, rds.eu-west-3.api.aws | HTTPS
Europe (Spain) | eu-south-2 | rds.eu-south-2.amazonaws.com, rds.eu-south-2.api.aws | HTTPS
Europe (Stockholm) | eu-north-1 | rds.eu-north-1.amazonaws.com, rds.eu-north-1.api.aws | HTTPS
Europe (Zurich) | eu-central-2 | rds.eu-central-2.amazonaws.com, rds.eu-central-2.api.aws | HTTPS
Israel (Tel Aviv) | il-central-1 | rds.il-central-1.amazonaws.com, rds.il-central-1.api.aws | HTTPS
Middle East (Bahrain) | me-south-1 | rds.me-south-1.amazonaws.com, rds.me-south-1.api.aws | HTTPS
Middle East (UAE) | me-central-1 | rds.me-central-1.amazonaws.com, rds.me-central-1.api.aws | HTTPS
South America (São Paulo) | sa-east-1 | rds.sa-east-1.amazonaws.com, rds.sa-east-1.api.aws | HTTPS
AWS GovCloud (US-East) | us-gov-east-1 | rds.us-gov-east-1.amazonaws.com, rds.us-gov-east-1.api.aws | HTTPS
AWS GovCloud (US-West) | us-gov-west-1 | rds.us-gov-west-1.amazonaws.com, rds.us-gov-west-1.api.aws | HTTPS

If you do not explicitly specify an endpoint, the US West (Oregon) endpoint is the default.

When you work with a DB instance using the AWS CLI or API operations, make sure that you specify its
regional endpoint.
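
With the AWS SDKs you normally select the Region rather than a raw endpoint; a minimal boto3 sketch:

import boto3

# Explicitly target the Europe (Ireland) Region; otherwise the SDK falls back to
# AWS_DEFAULT_REGION or the Region configured in your profile.
rds = boto3.client("rds", region_name="eu-west-1")

for db in rds.describe_db_instances()["DBInstances"]:
    print(db["DBInstanceIdentifier"], db["AvailabilityZone"])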

Availability Zones
When you create a DB instance, you can choose an Availability Zone or have Amazon RDS choose one for
you randomly. An Availability Zone is represented by an AWS Region code followed by a letter identifier
(for example, us-east-1a).


Use the describe-availability-zones Amazon EC2 command as follows to describe the Availability Zones
within the specified Region that are enabled for your account.

aws ec2 describe-availability-zones --region region-name

For example, to describe the Availability Zones within the US East (N. Virginia) Region (us-east-1) that are
enabled for your account, run the following command:

aws ec2 describe-availability-zones --region us-east-1

You can't choose the Availability Zones for the primary and secondary DB instances in a Multi-AZ DB
deployment. Amazon RDS chooses them for you randomly. For more information about Multi-AZ
deployments, see Configuring and managing a Multi-AZ deployment (p. 492).
Note
Random selection of Availability Zones by RDS doesn't guarantee an even distribution of DB
instances among Availability Zones within a single account or DB subnet group. You can request
a specific AZ when you create or modify a Single-AZ instance, and you can use more-specific DB
subnet groups for Multi-AZ instances. For more information, see Creating an Amazon RDS DB
instance (p. 300) and Modifying an Amazon RDS DB instance (p. 401).

Local Zones
A Local Zone is an extension of an AWS Region that is geographically close to your users. You can extend
any VPC from the parent AWS Region into Local Zones. To do so, create a new subnet and assign it to the
AWS Local Zone. When you create a subnet in a Local Zone, your VPC is extended to that Local Zone. The
subnet in the Local Zone operates the same as other subnets in your VPC.

When you create a DB instance, you can choose a subnet in a Local Zone. Local Zones have their own
connections to the internet and support AWS Direct Connect. Thus, resources created in a Local Zone can
serve local users with very low-latency communications. For more information, see AWS Local Zones.

A Local Zone is represented by an AWS Region code followed by an identifier that indicates the location,
for example us-west-2-lax-1a.
Note
A Local Zone can't be included in a Multi-AZ deployment.

To use a Local Zone

1. Enable the Local Zone in the Amazon EC2 console.

For more information, see Enabling Local Zones in the Amazon EC2 User Guide for Linux Instances.
2. Create a subnet in the Local Zone.

For more information, see Creating a subnet in your VPC in the Amazon VPC User Guide.
3. Create a DB subnet group in the Local Zone.

When you create a DB subnet group, choose the Availability Zone group for the Local Zone.

For more information, see Creating a DB instance in a VPC (p. 2696).


4. Create a DB instance that uses the DB subnet group in the Local Zone.

For more information, see Creating an Amazon RDS DB instance (p. 300).

Important
Currently, the only AWS Local Zone where Amazon RDS is available is Los Angeles in the US
West (Oregon) Region.
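
Taken together, the preceding steps might look like the following boto3 sketch; the VPC, subnet, and
instance identifiers are placeholders, and a second in-Region subnet is assumed because a DB subnet
group needs subnets in at least two Availability Zones.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
rds = boto3.client("rds", region_name="us-west-2")

# 1. Opt in to the Los Angeles Local Zone group.
ec2.modify_availability_zone_group(GroupName="us-west-2-lax-1", OptInStatus="opted-in")

# 2. Create a subnet in the Local Zone.
subnet_id = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",            # placeholder
    CidrBlock="10.0.8.0/24",
    AvailabilityZone="us-west-2-lax-1a",
)["Subnet"]["SubnetId"]

# 3. Create a DB subnet group that includes the Local Zone subnet.
rds.create_db_subnet_group(
    DBSubnetGroupName="lax-subnet-group",
    DBSubnetGroupDescription="Subnets including us-west-2-lax-1a",
    SubnetIds=[subnet_id, "subnet-0123456789abcdef0"],   # second subnet ID is a placeholder
)

# 4. Create a DB instance that uses the DB subnet group.
rds.create_db_instance(
    DBInstanceIdentifier="lax-mysql",
    Engine="mysql",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="admin_user",
    ManageMasterUserPassword=True,
    DBSubnetGroupName="lax-subnet-group",
    AvailabilityZone="us-west-2-lax-1a",
)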


Supported features in Amazon RDS by AWS Region and DB engine
Support for Amazon RDS features and options varies across AWS Regions and specific versions of each
DB engine. To identify RDS DB engine version support and availability in a given AWS Region, you can
use the following sections.

Amazon RDS features are different from engine-native features and options. For more information on
engine-native features and options, see Engine-native features (p. 162).

Topics
• Table conventions (p. 116)
• Feature quick reference (p. 116)
• Blue/Green Deployments (p. 118)
• Cross-Region automated backups (p. 118)
• Cross-Region read replicas (p. 119)
• Database activity streams (p. 121)
• Dual-stack mode (p. 125)
• Export snapshots to S3 (p. 133)
• IAM database authentication (p. 138)
• Kerberos authentication (p. 141)
• Multi-AZ DB clusters (p. 147)
• Performance Insights (p. 150)
• RDS Custom (p. 151)
• Amazon RDS Proxy (p. 155)
• Secrets Manager integration (p. 161)
• Engine-native features (p. 162)

Table conventions
The tables in the feature sections use these patterns to specify version numbers and level of availability:

• Version x.y – The specific version alone is available.


• Version x.y and higher – The specified version and all higher minor versions of its major version are
supported. For example, "version 10.11 and higher" means that versions 10.11, 10.11.1, and 10.12 are
available.
• — – The feature isn't currently available for the selected RDS DB engine or in the specified AWS
Region.

Feature quick reference


The following quick reference table lists each feature and available RDS DB engine. Region and specific
version availability appears in the later feature sections.


Feature | RDS for MariaDB | RDS for MySQL | RDS for Oracle | RDS for PostgreSQL | RDS for SQL Server

Blue/Green Deployments | Available (p. 118) | Available (p. 118) | – | – | –
Cross-Region automated backups | – | – | Available (p. 119) | Available (p. 119) | Available (p. 119)
Cross-Region read replicas | Available (p. 119) | Available (p. 119) | Available (p. 120) | Available (p. 120) | Available (p. 120)
Database activity streams | – | – | Available (p. 121) | – | Available (p. 124)
Dual-stack mode | Available (p. 125) | Available (p. 127) | Available (p. 128) | Available (p. 129) | Available (p. 131)
Export snapshots to Amazon S3 | Available (p. 133) | Available (p. 135) | – | Available (p. 136) | –
AWS Identity and Access Management (IAM) database authentication | Available (p. 138) | Available (p. 140) | – | Available (p. 140) | –
Kerberos authentication | – | Available (p. 141) | Available (p. 142) | Available (p. 143) | Available (p. 145)
Multi-AZ DB clusters | – | Available (p. 147) | – | Available (p. 148) | –
Performance Insights | Available (p. 150) | Available (p. 150) | Available (p. 150) | Available (p. 150) | Available (p. 150)
RDS Custom | – | – | Available (p. 151) | – | Available (p. 153)
RDS Proxy | Available (p. 155) | Available (p. 157) | – | Available (p. 158) | Available (p. 160)
Secrets Manager integration | Available (p. 161) | Available (p. 161) | Available (p. 161) | Available (p. 161) | Available (p. 161)


Blue/Green Deployments
A blue/green deployment copies a production database environment to a separate, synchronized
staging environment. By using Amazon RDS Blue/Green Deployments, you can make changes to the
database in the staging environment without affecting the production environment. For example, you
can upgrade the major or minor DB engine version, change database parameters, or make schema
changes in the staging environment. When you are ready, you can promote the staging environment to
be the new production database environment. For more information, see Using Amazon RDS Blue/Green
Deployments for database updates (p. 566).
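
As a hedged sketch of the API shape (the source ARN and names are placeholders), creating and later
switching over a staging environment looks something like this:

import boto3

rds = boto3.client("rds")

deployment = rds.create_blue_green_deployment(
    BlueGreenDeploymentName="mysql-upgrade-test",
    Source="arn:aws:rds:us-east-1:123456789012:db:production-mysql",   # placeholder ARN
    TargetEngineVersion="8.0.33",        # version to validate in the green environment
)["BlueGreenDeployment"]

# After the green environment has been verified, promote it.
rds.switchover_blue_green_deployment(
    BlueGreenDeploymentIdentifier=deployment["BlueGreenDeploymentIdentifier"],
    SwitchoverTimeout=300,
)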

The Blue/Green Deployments feature is supported for the following engines:

• RDS for MariaDB version 10.2 and higher


• RDS for MySQL version 5.7 and higher
• RDS for MySQL version 8.0.15 and higher

The Blue/Green Deployments feature isn't supported with the following engines:

• RDS for SQL Server


• RDS for Oracle
• RDS for PostgreSQL

The Blue/Green Deployments feature is supported in all AWS Regions except Israel (Tel Aviv).

Cross-Region automated backups


By using backup replication in Amazon RDS, you can configure your RDS DB instance to replicate
snapshots and transaction logs to a destination Region. When backup replication is configured for a DB
instance, RDS starts a cross-Region copy of all snapshots and transaction logs when they're ready. For
more information, see Replicating automated backups to another AWS Region (p. 602).
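
A minimal sketch of turning on backup replication through the API (the source ARN is a placeholder);
note that you call the API in the destination Region:

import boto3

# Call RDS in the destination Region that should receive the replicated backups.
rds = boto3.client("rds", region_name="us-west-2")

rds.start_db_instance_automated_backups_replication(
    SourceDBInstanceArn="arn:aws:rds:us-east-1:123456789012:db:orders-db",   # placeholder
    BackupRetentionPeriod=7,
)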

Backup replication is available in all AWS Regions except the following:

• Africa (Cape Town)


• Asia Pacific (Hong Kong)
• Asia Pacific (Hyderabad)
• Asia Pacific (Jakarta)
• Europe (Milan)
• Europe (Spain)
• Europe (Zurich)
• Middle East (Bahrain)
• Middle East (UAE)

For more detailed information on limitations for source and destination backup Regions, see Replicating
automated backups to another AWS Region (p. 602).

Topics
• Backup replication with RDS for MariaDB (p. 119)
• Backup replication with RDS for MySQL (p. 119)
• Backup replication with RDS for Oracle (p. 119)
• Backup replication with RDS for PostgreSQL (p. 119)


• Backup replication with RDS for SQL Server (p. 119)

Backup replication with RDS for MariaDB


Amazon RDS supports backup replication for all currently available versions of RDS for MariaDB.

Backup replication with RDS for MySQL


Amazon RDS supports backup replication for all currently available versions of RDS for MySQL.

Backup replication with RDS for Oracle


Amazon RDS supports backup replication for all currently available versions of RDS for Oracle.

Backup replication with RDS for PostgreSQL


Amazon RDS supports backup replication for all currently available versions of RDS for PostgreSQL.

Backup replication with RDS for SQL Server


Amazon RDS supports backup replication for all currently available versions of RDS for SQL Server.

Cross-Region read replicas


By using cross-Region read replicas in Amazon RDS, you can create a MariaDB, MySQL, Oracle,
PostgreSQL, or SQL Server read replica in a different Region from the source DB instance. For more
information about cross-Region read replicas, including source and destination Region considerations,
see Creating a read replica in a different AWS Region (p. 452).
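
A hedged example of the API shape (ARNs and identifiers are placeholders): you call the destination
Region and reference the source DB instance by its ARN.

import boto3

# Create the replica in the destination Region.
rds = boto3.client("rds", region_name="eu-west-1")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica-eu",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:orders-db",   # placeholder
    DBInstanceClass="db.m5.large",
    SourceRegion="us-east-1",   # lets the SDK generate the required presigned URL
)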

Topics
• Cross-Region read replicas with RDS for MariaDB (p. 119)
• Cross-Region read replicas with RDS for MySQL (p. 119)
• Cross-Region read replicas with RDS for Oracle (p. 120)
• Cross-Region read replicas with RDS for PostgreSQL (p. 120)
• Cross-Region read replicas with RDS for SQL Server (p. 120)

Cross-Region read replicas with RDS for MariaDB


Cross-Region read replicas with RDS for MariaDB are available in all Regions for the following versions:

• RDS for MariaDB 10.11 (All available versions)


• RDS for MariaDB 10.6 (All available versions)
• RDS for MariaDB 10.5 (All available versions)
• RDS for MariaDB 10.4 (All available versions)
• RDS for MariaDB 10.3 (All available versions)

Cross-Region read replicas with RDS for MySQL


Cross-Region read replicas with RDS for MySQL are available in all Regions for the following versions:

• RDS for MySQL 8.0 (All available versions)


• RDS for MySQL 5.7 (All available versions)

Cross-Region read replicas with RDS for Oracle


Cross-Region read replicas for RDS for Oracle are available in all Regions with the following version
limitations:

• For RDS for Oracle 21c, cross-Region read replicas aren't available.
• For RDS for Oracle 19c, cross-Region read replicas are available for instances of Oracle Database 19c
that aren't container database (CDB) instances.
• For RDS for Oracle 12c, cross-Region read replicas are available for Oracle Enterprise Edition (EE) of
Oracle Database 12c Release 1 (12.1) using 12.1.0.2.v10 and higher 12c releases.

For more information on additional requirements for cross-Region read replicas with RDS for Oracle, see
Requirements and considerations for RDS for Oracle replicas (p. 1974).

Cross-Region read replicas with RDS for PostgreSQL


Cross-Region read replicas with RDS for PostgreSQL are available in all Regions for the following
versions:

• RDS for PostgreSQL 15 (All available versions)


• RDS for PostgreSQL 14 (All available versions)
• RDS for PostgreSQL 13 (All available versions)
• RDS for PostgreSQL 12 (All available versions)
• RDS for PostgreSQL 11 (All available versions)
• RDS for PostgreSQL 10 (All available versions)

Cross-Region read replicas with RDS for SQL Server


Cross-Region read replicas with RDS for SQL Server are available in all Regions except the following:

• Africa (Cape Town)


• Asia Pacific (Hong Kong)
• Asia Pacific (Hyderabad)
• Asia Pacific (Jakarta)
• Asia Pacific (Melbourne)
• Europe (Milan)
• Europe (Spain)
• Europe (Zurich)
• Middle East (Bahrain)
• Middle East (UAE)

Cross-Region read replicas with RDS for SQL Server are available for the following versions using
Microsoft SQL Server Enterprise Edition:

• RDS for SQL Server 2019 (Version 15.00.4073.23 and higher)


• RDS for SQL Server 2017 (Version 14.00.3281.6 and higher)
• RDS for SQL Server 2016 (Version 13.00.6300.2 and higher)


Database activity streams


By using Database Activity Streams in Amazon RDS, you can monitor and set alarms for auditing activity
in your Oracle database and SQL Server database. For more information, see Overview of Database
Activity Streams (p. 944).
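
Starting a stream through the API looks roughly like the following (the resource ARN and KMS key are
placeholders):

import boto3

rds = boto3.client("rds")

rds.start_activity_stream(
    ResourceArn="arn:aws:rds:us-east-1:123456789012:db:orders-oracle",   # placeholder
    Mode="async",                               # "sync" is also supported
    KmsKeyId="alias/my-activity-stream-key",    # placeholder KMS key
    ApplyImmediately=True,
)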

Database activity streams aren't available with the following engines:

• RDS for MariaDB


• RDS for MySQL
• RDS for PostgreSQL

Topics
• Database activity streams with RDS for Oracle (p. 121)
• Database activity streams with RDS for SQL Server (p. 124)

Database activity streams with RDS for Oracle


The following Regions and engine versions are available for database activity streams with RDS for
Oracle.

For more information on additional requirements for database activity streams with RDS for Oracle, see
Overview of Database Activity Streams (p. 944).

Region | RDS for Oracle 21c | RDS for Oracle 19c

US East (Ohio) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
US East (N. Virginia) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
US West (N. California) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
US West (Oregon) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
Africa (Cape Town) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
Asia Pacific (Hong Kong) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
Asia Pacific (Hyderabad) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
Asia Pacific (Jakarta) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
Asia Pacific (Melbourne) | – | –
Asia Pacific (Mumbai) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
Asia Pacific (Osaka) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
Asia Pacific (Seoul) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
Asia Pacific (Singapore) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
Asia Pacific (Sydney) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
Asia Pacific (Tokyo) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
Canada (Central) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
China (Beijing) | – | –
China (Ningxia) | – | –
Europe (Frankfurt) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
Europe (Ireland) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
Europe (London) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
Europe (Milan) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
Europe (Paris) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
Europe (Spain) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
Europe (Stockholm) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
Europe (Zurich) | – | –
Middle East (Bahrain) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
Middle East (UAE) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
South America (São Paulo) | – | Oracle Database 19.0.0.0.ru-2019-07.rur-2019-07.r1 and higher, using either Enterprise Edition (EE) or Standard Edition 2 (SE2)
AWS GovCloud (US-East) | – | –
AWS GovCloud (US-West) | – | –


Database activity streams with RDS for SQL Server


The following Regions and engine versions are available for database activity streams with RDS for SQL
Server.

For more information on additional requirements for database activity streams with RDS for SQL Server,
see Overview of Database Activity Streams (p. 944).

Region | RDS for SQL Server 2019 | RDS for SQL Server 2017 | RDS for SQL Server 2016 | RDS for SQL Server 2014

US East (Ohio) | All available versions | All available versions | All available versions | –
US East (N. Virginia) | All available versions | All available versions | All available versions | –
US West (N. California) | All available versions | All available versions | All available versions | –
US West (Oregon) | All available versions | All available versions | All available versions | –
Africa (Cape Town) | All available versions | All available versions | All available versions | –
Asia Pacific (Hong Kong) | All available versions | All available versions | All available versions | –
Asia Pacific (Hyderabad) | All available versions | All available versions | All available versions | –
Asia Pacific (Jakarta) | All available versions | All available versions | All available versions | –
Asia Pacific (Melbourne) | – | – | – | –
Asia Pacific (Mumbai) | All available versions | All available versions | All available versions | –
Asia Pacific (Osaka) | All available versions | All available versions | All available versions | –
Asia Pacific (Seoul) | All available versions | All available versions | All available versions | –
Asia Pacific (Singapore) | All available versions | All available versions | All available versions | –
Asia Pacific (Sydney) | All available versions | All available versions | All available versions | –
Asia Pacific (Tokyo) | All available versions | All available versions | All available versions | –
Canada (Central) | All available versions | All available versions | All available versions | –
China (Beijing) | – | – | – | –
China (Ningxia) | – | – | – | –
Europe (Frankfurt) | All available versions | All available versions | All available versions | –
Europe (Ireland) | All available versions | All available versions | All available versions | –
Europe (London) | All available versions | All available versions | All available versions | –
Europe (Milan) | All available versions | All available versions | All available versions | –
Europe (Paris) | All available versions | All available versions | All available versions | –
Europe (Spain) | All available versions | All available versions | All available versions | –
Europe (Stockholm) | All available versions | All available versions | All available versions | –
Europe (Zurich) | – | – | – | –
Israel (Tel Aviv) | – | – | – | –
Middle East (Bahrain) | All available versions | All available versions | All available versions | –
Middle East (UAE) | All available versions | All available versions | All available versions | –
South America (São Paulo) | All available versions | All available versions | All available versions | –
AWS GovCloud (US-East) | – | – | – | –
AWS GovCloud (US-West) | – | – | – | –

Dual-stack mode
By using dual-stack mode in RDS, resources can communicate with a DB instance over Internet Protocol
version 4 (IPv4), Internet Protocol version 6 (IPv6), or both. For more information, see Dual-stack
mode (p. 2691).
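
A hedged sketch of enabling dual-stack networking when creating an instance (names are placeholders);
the NetworkType parameter selects IPV4 or DUAL, and the DB subnet group's subnets must have IPv6
CIDR blocks for DUAL to work.

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="dualstack-mariadb",
    Engine="mariadb",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin_user",
    ManageMasterUserPassword=True,
    NetworkType="DUAL",                        # "IPV4" is the default
    DBSubnetGroupName="dual-stack-subnets",    # placeholder
)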

Topics
• Dual-stack mode with RDS for MariaDB (p. 125)
• Dual-stack mode with RDS for MySQL (p. 127)
• Dual-stack mode with RDS for Oracle (p. 128)
• Dual-stack mode with RDS for PostgreSQL (p. 129)
• Dual-stack mode with RDS for SQL Server (p. 131)

Dual-stack mode with RDS for MariaDB


The following Regions and engine versions are available for dual-stack mode with RDS for MariaDB.

Region RDS for MariaDB RDS for MariaDB RDS for MariaDB RDS for MariaDB RDS for MariaDB
10.11 10.6 10.5 10.4 10.3

US East (Ohio) All available All available All available All available All available
versions versions versions versions versions

US East (N. All available All available All available All available All available
Virginia) versions versions versions versions versions

US West (N. All available All available All available All available All available
California) versions versions versions versions versions

US West All available All available All available All available All available
(Oregon) versions versions versions versions versions

125
Amazon Relational Database Service User Guide
Dual-stack mode

Region RDS for MariaDB RDS for MariaDB RDS for MariaDB RDS for MariaDB RDS for MariaDB
10.11 10.6 10.5 10.4 10.3

Africa (Cape All available All available All available All available All available
Town) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Hong Kong) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Hyderabad) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Jakarta) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Melbourne) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Mumbai) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Osaka) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Seoul) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Singapore) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Sydney) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Tokyo) versions versions versions versions versions

Canada (Central) All available All available All available All available All available
versions versions versions versions versions

China (Beijing) All available All available All available All available All available
versions versions versions versions versions

China (Ningxia) All available All available All available All available All available
versions versions versions versions versions

Europe All available All available All available All available All available
(Frankfurt) versions versions versions versions versions

Europe (Ireland) All available All available All available All available All available
versions versions versions versions versions

Europe (London) All available All available All available All available All available
versions versions versions versions versions

Europe (Milan) All available All available All available All available All available
versions versions versions versions versions

Europe (Paris) All available All available All available All available All available
versions versions versions versions versions

Europe (Spain) All available All available All available All available All available
versions versions versions versions versions

Europe All available All available All available All available All available
(Stockholm) versions versions versions versions versions

Europe (Zurich) All available All available All available All available All available
versions versions versions versions versions

Israel (Tel Aviv) – – – – –

Middle East All available All available All available All available All available
(Bahrain) versions versions versions versions versions

Middle East All available All available All available All available All available
(UAE) versions versions versions versions versions

South America All available All available All available All available All available
(São Paulo) versions versions versions versions versions

AWS GovCloud All available All available All available All available All available
(US-East) versions versions versions versions versions

AWS GovCloud All available All available All available All available All available
(US-West) versions versions versions versions versions

Dual-stack mode with RDS for MySQL


The following Regions and engine versions are available for dual-stack mode with RDS for MySQL.

Region RDS for MySQL 8.0 RDS for MySQL 5.7 RDS for MySQL 5.6

US East (Ohio) All available versions All available versions All available versions

US East (N. Virginia) All available versions All available versions All available versions

US West (N. California) All available versions All available versions All available versions

US West (Oregon) All available versions All available versions All available versions

Africa (Cape Town) All available versions All available versions All available versions

Asia Pacific (Hong Kong) All available versions All available versions All available versions

Asia Pacific (Hyderabad) All available versions All available versions –

Asia Pacific (Jakarta) All available versions All available versions All available versions

Asia Pacific (Melbourne) All available versions All available versions –

Asia Pacific (Mumbai) All available versions All available versions All available versions

Asia Pacific (Osaka) All available versions All available versions All available versions

Asia Pacific (Seoul) All available versions All available versions All available versions

Asia Pacific (Singapore) All available versions All available versions All available versions

Asia Pacific (Sydney) All available versions All available versions All available versions

Asia Pacific (Tokyo) All available versions All available versions All available versions

Canada (Central) All available versions All available versions All available versions

China (Beijing) All available versions All available versions All available versions

China (Ningxia) All available versions All available versions All available versions

Europe (Frankfurt) All available versions All available versions All available versions

Europe (Ireland) All available versions All available versions All available versions

Europe (London) All available versions All available versions All available versions

Europe (Milan) All available versions All available versions All available versions

Europe (Paris) All available versions All available versions All available versions

Europe (Spain) All available versions All available versions –

Europe (Stockholm) All available versions All available versions All available versions

Europe (Zurich) All available versions All available versions –

Israel (Tel Aviv) – – –

Middle East (Bahrain) All available versions All available versions All available versions

Middle East (UAE) All available versions All available versions –

South America (São Paulo) All available versions All available versions All available versions

AWS GovCloud (US-East) All available versions All available versions All available versions

AWS GovCloud (US-West) All available versions All available versions All available versions

Dual-stack mode with RDS for Oracle


The following Regions and engine versions are available for dual-stack mode with RDS for Oracle.

Region RDS for Oracle 21c RDS for Oracle 19c RDS for Oracle 12c

US East (Ohio) All available versions All available versions All available versions

US East (N. Virginia) All available versions All available versions All available versions

US West (N. California) All available versions All available versions All available versions

US West (Oregon) All available versions All available versions All available versions

Africa (Cape Town) All available versions All available versions All available versions

Asia Pacific (Hong Kong) All available versions All available versions All available versions

Asia Pacific (Hyderabad) – – –

Asia Pacific (Jakarta) All available versions All available versions All available versions

Asia Pacific (Melbourne) – – –

Asia Pacific (Mumbai) All available versions All available versions All available versions

Asia Pacific (Osaka) All available versions All available versions All available versions

Asia Pacific (Seoul) All available versions All available versions All available versions

Asia Pacific (Singapore) All available versions All available versions All available versions

Asia Pacific (Sydney) All available versions All available versions All available versions

Asia Pacific (Tokyo) All available versions All available versions All available versions

Canada (Central) All available versions All available versions All available versions

China (Beijing) All available versions All available versions All available versions

China (Ningxia) All available versions All available versions All available versions

Europe (Frankfurt) All available versions All available versions All available versions

Europe (Ireland) All available versions All available versions All available versions

Europe (London) All available versions All available versions All available versions

Europe (Milan) All available versions All available versions All available versions

Europe (Paris) All available versions All available versions All available versions

Europe (Spain) – – –

Europe (Stockholm) All available versions All available versions All available versions

Europe (Zurich) – – –

Israel (Tel Aviv) – – –

Middle East (Bahrain) All available versions All available versions All available versions

Middle East (UAE) – – –

South America (São Paulo) All available versions All available versions All available versions

AWS GovCloud (US-East) All available versions All available versions All available versions

AWS GovCloud (US-West) All available versions All available versions All available versions

Dual-stack mode with RDS for PostgreSQL


The following Regions and engine versions are available for dual-stack mode with RDS for PostgreSQL.

Region RDS for RDS for RDS for RDS for RDS for RDS for
PostgreSQL PostgreSQL PostgreSQL PostgreSQL PostgreSQL PostgreSQL
15 14 13 12 11 10

US East (Ohio) All available All available All available All available All available All available
versions versions versions versions versions versions

US East (N. All available All available All available All available All available All available
Virginia) versions versions versions versions versions versions

US West (N. All available All available All available All available All available All available
California) versions versions versions versions versions versions

US West All available All available All available All available All available All available
(Oregon) versions versions versions versions versions versions

Africa (Cape All available All available All available All available All available All available
Town) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Hong Kong) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Hyderabad) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Melbourne) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Jakarta) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Mumbai) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Osaka) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Seoul) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Singapore) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Sydney) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Tokyo) versions versions versions versions versions versions

Canada All available All available All available All available All available All available
(Central) versions versions versions versions versions versions

China (Beijing) All available All available All available All available All available All available
versions versions versions versions versions versions

China All available All available All available All available All available All available
(Ningxia) versions versions versions versions versions versions

Europe All available All available All available All available All available All available
(Frankfurt) versions versions versions versions versions versions

Europe All available All available All available All available All available All available
(Ireland) versions versions versions versions versions versions

Europe All available All available All available All available All available All available
(London) versions versions versions versions versions versions

Europe (Milan) All available All available All available All available All available All available
versions versions versions versions versions versions

Europe (Paris) All available All available All available All available All available All available
versions versions versions versions versions versions

Europe (Spain) All available All available All available All available All available All available
versions versions versions versions versions versions

Europe All available All available All available All available All available All available
(Stockholm) versions versions versions versions versions versions

Europe All available All available All available All available All available All available
(Zurich) versions versions versions versions versions versions

Israel (Tel – – – – – –
Aviv)

Middle East All available All available All available All available All available All available
(Bahrain) versions versions versions versions versions versions

Middle East All available All available All available All available All available All available
(UAE) versions versions versions versions versions versions

South All available All available All available All available All available All available
America (São versions versions versions versions versions versions
Paulo)

AWS All available All available All available All available All available All available
GovCloud versions versions versions versions versions versions
(US-East)

AWS All available All available All available All available All available All available
GovCloud versions versions versions versions versions versions
(US-West)

Dual-stack mode with RDS for SQL Server


The following Regions and engine versions are available for dual-stack mode with RDS for SQL Server.

Region RDS for SQL Server RDS for SQL Server RDS for SQL Server RDS for SQL Server
2019 2017 2016 2014

US East (Ohio) All available versions All available versions All available versions –

US East (N. Virginia) All available versions All available versions All available versions –

US West (N. All available versions All available versions All available versions –
California)

US West (Oregon) All available versions All available versions All available versions –

Africa (Cape Town) All available versions All available versions All available versions –

Asia Pacific (Hong All available versions All available versions All available versions –
Kong)

Asia Pacific – – – –
(Hyderabad)

Asia Pacific (Jakarta) All available versions All available versions All available versions –

Asia Pacific – – – –
(Melbourne)

Asia Pacific (Mumbai) All available versions All available versions All available versions –

Asia Pacific (Osaka) All available versions All available versions All available versions –

Asia Pacific (Seoul) All available versions All available versions All available versions –

Asia Pacific All available versions All available versions All available versions –
(Singapore)

Asia Pacific (Sydney) All available versions All available versions All available versions –

Asia Pacific (Tokyo) All available versions All available versions All available versions –

Canada (Central) All available versions All available versions All available versions –

China (Beijing) All available versions All available versions All available versions –

China (Ningxia) All available versions All available versions All available versions –

Europe (Frankfurt) All available versions All available versions All available versions –

Europe (Ireland) All available versions All available versions All available versions –

Europe (London) All available versions All available versions All available versions –

Europe (Milan) All available versions All available versions All available versions –

Europe (Paris) All available versions All available versions All available versions –

Europe (Spain) – – – –

Europe (Stockholm) All available versions All available versions All available versions –

Europe (Zurich) – – – –

Israel (Tel Aviv) – – – –

Middle East (Bahrain) All available versions All available versions All available versions –

Middle East (UAE) – – – –

South America (São All available versions All available versions All available versions –
Paulo)

AWS GovCloud (US- All available versions All available versions All available versions –
East)

AWS GovCloud (US- All available versions All available versions All available versions –
West)

Export snapshots to S3
You can export RDS DB snapshot data to an Amazon S3 bucket. You can export all types of DB snapshots,
including manual snapshots, automated system snapshots, and snapshots created by AWS Backup.
After the data is exported, you can analyze the exported data directly through tools like Amazon Athena
or Amazon Redshift Spectrum. For more information, see Exporting DB snapshot data to Amazon
S3 (p. 642).
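
For example, the following AWS CLI command is a sketch of starting a snapshot export task. The task
identifier, snapshot ARN, S3 bucket, IAM role, and KMS key are placeholders that you replace with your own
values.

# Example only: all identifiers and ARNs below are placeholders.
aws rds start-export-task \
    --export-task-identifier my-snapshot-export \
    --source-arn arn:aws:rds:us-east-1:123456789012:snapshot:my-db-snapshot \
    --s3-bucket-name my-export-bucket \
    --iam-role-arn arn:aws:iam::123456789012:role/my-export-role \
    --kms-key-id my-kms-key-identifier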

Topics
• Export snapshots to S3 with RDS for MariaDB (p. 133)
• Export snapshots to S3 with RDS for MySQL (p. 135)
• Export snapshots to S3 with RDS for PostgreSQL (p. 136)

Export snapshots to S3 with RDS for MariaDB


The following Regions and engine versions are available for exporting snapshots to S3 with RDS for
MariaDB.

Region RDS for MariaDB RDS for MariaDB RDS for MariaDB RDS for MariaDB RDS for MariaDB
10.11 10.6 10.5 10.4 10.3

US East (Ohio) All available All available All available All available All available
versions versions versions versions versions

US East (N. All available All available All available All available All available
Virginia) versions versions versions versions versions

US West (N. All available All available All available All available All available
California) versions versions versions versions versions

US West All available All available All available All available All available
(Oregon) versions versions versions versions versions

Africa (Cape All available All available All available All available All available
Town) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Hong Kong) versions versions versions versions versions

Asia Pacific – – – – –
(Hyderabad)

Asia Pacific – – – – –
(Jakarta)

Asia Pacific – – – – –
(Melbourne)

Asia Pacific All available All available All available All available All available
(Mumbai) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Osaka) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Seoul) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Singapore) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Sydney) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Tokyo) versions versions versions versions versions

Canada (Central) All available All available All available All available All available
versions versions versions versions versions

China (Beijing) All available All available All available All available All available
versions versions versions versions versions

China (Ningxia) All available All available All available All available All available
versions versions versions versions versions

Europe All available All available All available All available All available
(Frankfurt) versions versions versions versions versions

Europe (Ireland) All available All available All available All available All available
versions versions versions versions versions

Europe (London) All available All available All available All available All available
versions versions versions versions versions

Europe (Milan) All available All available All available All available All available
versions versions versions versions versions

Europe (Paris) All available All available All available All available All available
versions versions versions versions versions

Europe (Spain) – – – – –

Europe All available All available All available All available All available
(Stockholm) versions versions versions versions versions

Europe (Zurich) – – – – –

Israel (Tel Aviv) – – – – –

Middle East All available All available All available All available All available
(Bahrain) versions versions versions versions versions

Middle East – – – – –
(UAE)

South America All available All available All available All available All available
(São Paulo) versions versions versions versions versions

AWS GovCloud – – – – –
(US-East)

AWS GovCloud – – – – –
(US-West)

Export snapshots to S3 with RDS for MySQL


The following Regions and engine versions are available for exporting snapshots to S3 with RDS for
MySQL.

Region RDS for MySQL 8.0 RDS for MySQL 5.7

US East (Ohio) All available versions All available versions

US East (N. Virginia) All available versions All available versions

US West (N. California) All available versions All available versions

US West (Oregon) All available versions All available versions

Africa (Cape Town) All available versions All available versions

Asia Pacific (Hong Kong) All available versions All available versions

Asia Pacific (Hyderabad) – –

Asia Pacific (Jakarta) – –

Asia Pacific (Melbourne) – –

Asia Pacific (Mumbai) All available versions All available versions

Asia Pacific (Osaka) All available versions All available versions

Asia Pacific (Seoul) All available versions All available versions

Asia Pacific (Singapore) All available versions All available versions

Asia Pacific (Sydney) All available versions All available versions

Asia Pacific (Tokyo) All available versions All available versions

Canada (Central) All available versions All available versions

China (Beijing) All available versions All available versions

China (Ningxia) All available versions All available versions

Europe (Frankfurt) All available versions All available versions

Europe (Ireland) All available versions All available versions

Europe (London) All available versions All available versions

Europe (Milan) All available versions All available versions

Europe (Paris) All available versions All available versions

Europe (Spain) – –

Europe (Stockholm) All available versions All available versions

Europe (Zurich) – –

Israel (Tel Aviv) – –

Middle East (Bahrain) All available versions All available versions

Middle East (UAE) – –

South America (São Paulo) All available versions All available versions

AWS GovCloud (US-East) – –

AWS GovCloud (US-West) – –

Export snapshots to S3 with RDS for PostgreSQL


The following Regions and engine versions are available for exporting snapshots to S3 with RDS for
PostgreSQL.

Region RDS for RDS for RDS for RDS for RDS for RDS for
PostgreSQL PostgreSQL PostgreSQL PostgreSQL PostgreSQL PostgreSQL
15 14 13 12 11 10

US East (Ohio) All available All available All available All available All available All available
versions versions versions versions versions versions

US East (N. All available All available All available All available All available All available
Virginia) versions versions versions versions versions versions

US West (N. All available All available All available All available All available All available
California) versions versions versions versions versions versions

US West All available All available All available All available All available All available
(Oregon) versions versions versions versions versions versions

Africa (Cape All available All available All available All available All available All available
Town) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Hong Kong) versions versions versions versions versions versions

Asia Pacific – – – – – –
(Hyderabad)

Asia Pacific – – – – – –
(Jakarta)

Asia Pacific – – – – – –
(Melbourne)

Asia Pacific All available All available All available All available All available All available
(Mumbai) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Osaka) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Seoul) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Singapore) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Sydney) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Tokyo) versions versions versions versions versions versions

Canada All available All available All available All available All available All available
(Central) versions versions versions versions versions versions

China (Beijing) All available All available All available All available All available All available
versions versions versions versions versions versions

China All available All available All available All available All available All available
(Ningxia) versions versions versions versions versions versions

Europe All available All available All available All available All available All available
(Frankfurt) versions versions versions versions versions versions

Europe All available All available All available All available All available All available
(Ireland) versions versions versions versions versions versions

Europe All available All available All available All available All available All available
(London) versions versions versions versions versions versions

Europe (Milan) All available All available All available All available All available All available
versions versions versions versions versions versions

Europe (Paris) All available All available All available All available All available All available
versions versions versions versions versions versions

Europe (Spain) – – – – – –

Europe All available All available All available All available All available All available
(Stockholm) versions versions versions versions versions versions

Europe – – – – – –
(Zurich)

Israel (Tel – – – – – –
Aviv)

Middle East All available All available All available All available All available All available
(Bahrain) versions versions versions versions versions versions

Middle East – – – – – –
(UAE)

South All available All available All available All available All available All available
America (São versions versions versions versions versions versions
Paulo)

AWS – – – – – –
GovCloud
(US-East)

AWS – – – – – –
GovCloud
(US-West)

IAM database authentication


By using IAM database authentication in Amazon RDS, you can authenticate without a password when
you connect to a DB instance. Instead, you use an authentication token. For more information, see IAM
database authentication for MariaDB, MySQL, and PostgreSQL (p. 2642).
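
For example, the following AWS CLI command is a sketch of generating a temporary authentication token
that you then pass to your database client in place of a password. The host name, port, Region, and
database user name are placeholders.

# Example only: replace the host name, port, Region, and user name with your own values.
aws rds generate-db-auth-token \
    --hostname mydbinstance.123456789012.us-east-1.rds.amazonaws.com \
    --port 3306 \
    --region us-east-1 \
    --username my_iam_db_user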

IAM database authentication isn't available with the following engines:

• RDS for Oracle


• RDS for SQL Server

Topics
• IAM database authentication with RDS for MariaDB (p. 138)
• IAM database authentication with RDS for MySQL (p. 140)
• IAM database authentication with RDS for PostgreSQL (p. 140)

IAM database authentication with RDS for MariaDB


The following Regions and engine versions are available for IAM database authentication with RDS for
MariaDB.

Region RDS for MariaDB RDS for MariaDB RDS for MariaDB RDS for MariaDB RDS for MariaDB
10.11 10.6 10.5 10.4 10.3

US East (Ohio) All available All available – – –
versions versions

US East (N. All available All available – – –
Virginia) versions versions

US West (N. All available All available – – –
California) versions versions

US West All available All available – – –
(Oregon) versions versions

Africa (Cape All available All available – – –
Town) versions versions

Asia Pacific All available All available – – –
(Hong Kong) versions versions

Asia Pacific – – – – –
(Hyderabad)

Asia Pacific All available All available – – –
(Jakarta) versions versions

Asia Pacific – – – – –
(Melbourne)

Asia Pacific All available All available – – –
(Mumbai) versions versions

Asia Pacific All available All available – – –
(Osaka) versions versions

Asia Pacific All available All available – – –
(Seoul) versions versions

Asia Pacific All available All available – – –
(Singapore) versions versions

Asia Pacific All available All available – – –
(Sydney) versions versions

Asia Pacific All available All available – – –
(Tokyo) versions versions

Canada (Central) All available All available – – –
versions versions

China (Beijing) All available All available – – –
versions versions

China (Ningxia) All available All available – – –
versions versions

Europe All available All available – – –
(Frankfurt) versions versions

Europe (Ireland) All available All available – – –
versions versions

Europe (London) All available All available – – –
versions versions

Europe (Milan) All available All available – – –
versions versions

Europe (Paris) All available All available – – –
versions versions

Europe (Spain) – – – – –

Europe All available All available – – –
(Stockholm) versions versions

Europe (Zurich) – – – – –

Israel (Tel Aviv) – – – – –

Middle East All available All available – – –
(Bahrain) versions versions

Middle East – – – – –
(UAE)

South America All available All available – – –
(São Paulo) versions versions

AWS GovCloud All available All available – – –
(US-East) versions versions

AWS GovCloud All available All available – – –
(US-West) versions versions

IAM database authentication with RDS for MySQL


IAM database authentication with RDS for MySQL is available in all Regions for the following versions:

• RDS for MySQL 8.0 – All available versions


• RDS for MySQL 5.7 – All available versions

IAM database authentication with RDS for PostgreSQL


IAM database authentication with RDS for PostgreSQL is available in all Regions for the following
versions:

• RDS for PostgreSQL 15 – All available versions


• RDS for PostgreSQL 14 – All available versions
• RDS for PostgreSQL 13 – All available versions
• RDS for PostgreSQL 12 – All available versions
• RDS for PostgreSQL 11 – All available versions
• RDS for PostgreSQL 10 – All available versions

Kerberos authentication
By using Kerberos authentication in Amazon RDS, you can support external authentication of database
users using Kerberos and Microsoft Active Directory. Using Kerberos and Active Directory provides the
benefits of single sign-on and centralized authentication of database users. For more information, see
Kerberos authentication (p. 2567).
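
For example, the following AWS CLI command is a sketch of associating an existing DB instance with an AWS
Managed Microsoft AD directory so that the instance can accept Kerberos logins. The instance identifier,
directory ID, and IAM role name are placeholders.

# Example only: the directory ID and role name below are placeholders.
aws rds modify-db-instance \
    --db-instance-identifier my-kerberos-instance \
    --domain d-1234567890 \
    --domain-iam-role-name my-directory-access-role \
    --apply-immediately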

Kerberos authentication isn't available with the following engines:

• RDS for MariaDB

Topics
• Kerberos authentication with RDS for MySQL (p. 141)
• Kerberos authentication with RDS for Oracle (p. 142)
• Kerberos authentication with RDS for PostgreSQL (p. 143)
• Kerberos authentication with RDS for SQL Server (p. 145)

Kerberos authentication with RDS for MySQL


The following Regions and engine versions are available for Kerberos authentication with RDS for
MySQL.

Region RDS for MySQL 8.0 RDS for MySQL 5.7 RDS for MySQL 5.6

US East (Ohio) All versions All versions All versions

US East (N. Virginia) All versions All versions All versions

US West (N. California) All versions All versions All versions

US West (Oregon) All versions All versions All versions

Africa (Cape Town) – – –

Asia Pacific (Hong Kong) – – –

Asia Pacific (Hyderabad) – – –

Asia Pacific (Jakarta) – – –

Asia Pacific (Melbourne) – – –

Asia Pacific (Mumbai) All versions All versions All versions

Asia Pacific (Osaka) – – –

Asia Pacific (Seoul) All versions All versions All versions

Asia Pacific (Singapore) All versions All versions All versions

Asia Pacific (Sydney) All versions All versions All versions

Asia Pacific (Tokyo) All versions All versions All versions

Canada (Central) All versions All versions All versions

China (Beijing) All versions All versions All versions

China (Ningxia) All versions All versions All versions

Europe (Frankfurt) All versions All versions All versions

Europe (Ireland) All versions All versions All versions

Europe (London) All versions All versions All versions

Europe (Milan) – – –

Europe (Paris) – – –

Europe (Spain) – – –

Europe (Stockholm) All versions All versions All versions

Europe (Zurich) – – –

Israel (Tel Aviv) – – –

Middle East (Bahrain) – – –

Middle East (UAE) – – –

South America (São Paulo) All versions All versions All versions

AWS GovCloud (US-East) – – –

AWS GovCloud (US-West) – – –

Kerberos authentication with RDS for Oracle


The following Regions and engine versions are available for Kerberos authentication with RDS for Oracle.

Region RDS for Oracle 21c RDS for Oracle 19c RDS for Oracle 12c

US East (Ohio) All versions All versions All versions

US East (N. Virginia) All versions All versions All versions

US West (N. California) All versions All versions All versions

US West (Oregon) All versions All versions All versions

Africa (Cape Town) – – –

Asia Pacific (Hong Kong) – – –

Asia Pacific (Hyderabad) – – –

Asia Pacific (Jakarta) – – –

Asia Pacific (Melbourne) – – –

Asia Pacific (Mumbai) All versions All versions All versions

Asia Pacific (Osaka) – – –

Asia Pacific (Seoul) All versions All versions All versions

Asia Pacific (Singapore) All versions All versions All versions

Asia Pacific (Sydney) All versions All versions All versions

Asia Pacific (Tokyo) All versions All versions All versions

Canada (Central) All versions All versions All versions

China (Beijing) – – –

China (Ningxia) – – –

Europe (Frankfurt) All versions All versions All versions

Europe (Ireland) All versions All versions All versions

Europe (London) All versions All versions All versions

Europe (Milan) – – –

Europe (Paris) – – –

Europe (Spain) – – –

Europe (Stockholm) All versions All versions All versions

Europe (Zurich) – – –

Israel (Tel Aviv) – – –

Middle East (Bahrain) – – –

Middle East (UAE) – – –

South America (São Paulo) All versions All versions All versions

AWS GovCloud (US-East) All versions All versions All versions

AWS GovCloud (US-West) All versions All versions All versions

Kerberos authentication with RDS for PostgreSQL


The following Regions and engine versions are available for Kerberos authentication with RDS for
PostgreSQL.

Region RDS for RDS for RDS for RDS for RDS for RDS for
PostgreSQL PostgreSQL PostgreSQL PostgreSQL PostgreSQL PostgreSQL
15 14 13 12 11 10

US East (Ohio) All versions All versions All versions All versions All versions All versions

US East (N. All versions All versions All versions All versions All versions All versions
Virginia)

US West (N. All versions All versions All versions All versions All versions All versions
California)

US West All versions All versions All versions All versions All versions All versions
(Oregon)

Africa (Cape – – – – – –
Town)

Asia Pacific – – – – – –
(Hong Kong)

Asia Pacific – – – – – –
(Hyderabad)

Asia Pacific – – – – – –
(Jakarta)

Asia Pacific – – – – – –
(Melbourne)

Asia Pacific All versions All versions All versions All versions All versions All versions
(Mumbai)

Asia Pacific – – – – – –
(Osaka)

Asia Pacific All versions All versions All versions All versions All versions All versions
(Seoul)

Asia Pacific All versions All versions All versions All versions All versions All versions
(Singapore)

Asia Pacific All versions All versions All versions All versions All versions All versions
(Sydney)

Asia Pacific All versions All versions All versions All versions All versions All versions
(Tokyo)

Canada All versions All versions All versions All versions All versions All versions
(Central)

China (Beijing) All versions All versions All versions All versions All versions All versions

China All versions All versions All versions All versions All versions All versions
(Ningxia)

Europe All versions All versions All versions All versions All versions All versions
(Frankfurt)

Europe All versions All versions All versions All versions All versions All versions
(Ireland)

Europe All versions All versions All versions All versions All versions All versions
(London)

Europe (Milan) – – – – – –

Europe (Paris) All versions All versions All versions All versions All versions All versions

Europe (Spain) – – – – – –

Europe All versions All versions All versions All versions All versions All versions
(Stockholm)

Europe – – – – – –
(Zurich)

Israel (Tel – – – – – –
Aviv)

Middle East – – – – – –
(Bahrain)

Middle East – – – – – –
(UAE)

South All versions All versions All versions All versions All versions All versions
America (São
Paulo)

AWS – – – – – –
GovCloud
(US-East)

AWS – – – – – –
GovCloud
(US-West)

Kerberos authentication with RDS for SQL Server


The following Regions and engine versions are available for Kerberos authentication with RDS for SQL
Server.

Region RDS for SQL Server RDS for SQL Server RDS for SQL Server RDS for SQL Server
2019 2017 2016 2014

US East (Ohio) All versions All versions All versions All versions

US East (N. Virginia) All versions All versions All versions All versions

US West (N. All versions All versions All versions All versions
California)

US West (Oregon) All versions All versions All versions All versions

Africa (Cape Town) All versions All versions All versions All versions

Asia Pacific (Hong All versions All versions All versions All versions
Kong)

Asia Pacific All versions All versions All versions All versions
(Hyderabad)

Asia Pacific (Jakarta) All versions All versions All versions All versions

Asia Pacific All versions All versions All versions All versions
(Melbourne)

Asia Pacific (Mumbai) All versions All versions All versions All versions

Asia Pacific (Osaka) All versions All versions All versions All versions

Asia Pacific (Seoul) All versions All versions All versions All versions

Asia Pacific All versions All versions All versions All versions
(Singapore)

Asia Pacific (Sydney) All versions All versions All versions All versions

Asia Pacific (Tokyo) All versions All versions All versions All versions

Canada (Central) All versions All versions All versions All versions

China (Beijing) All versions All versions All versions All versions

China (Ningxia) All versions All versions All versions All versions

Europe (Frankfurt) All versions All versions All versions All versions

Europe (Ireland) All versions All versions All versions All versions

Europe (London) All versions All versions All versions All versions

Europe (Milan) All versions All versions All versions All versions

Europe (Paris) All versions All versions All versions All versions

Europe (Spain) All versions All versions All versions All versions

Europe (Stockholm) All versions All versions All versions All versions

Europe (Zurich) All versions All versions All versions All versions

Israel (Tel Aviv) – – – –

Middle East (Bahrain) All versions All versions All versions All versions

Middle East (UAE) All versions All versions All versions All versions

South America (São All versions All versions All versions All versions
Paulo)

AWS GovCloud (US- All versions All versions All versions All versions
East)

AWS GovCloud (US- All versions All versions All versions All versions
West)

Multi-AZ DB clusters
A Multi-AZ DB cluster deployment in Amazon RDS provides a high availability deployment mode of
Amazon RDS with two readable standby DB instances. A Multi-AZ DB cluster has a writer DB instance
and two reader DB instances in three separate Availability Zones in the same Region. Multi-AZ DB
clusters provide high availability, increased capacity for read workloads, and lower write latency
when compared to Multi-AZ DB instance deployments. For more information, see Multi-AZ DB cluster
deployments (p. 499).
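
For example, the following AWS CLI command is a minimal sketch of creating a Multi-AZ DB cluster that runs
a supported RDS for MySQL version. The identifiers, credentials, and storage settings are placeholders;
adjust them, and the engine version, to values supported in your Region.

# Example only: all identifiers, credentials, and sizes below are placeholders.
aws rds create-db-cluster \
    --db-cluster-identifier my-multi-az-cluster \
    --engine mysql \
    --engine-version 8.0.28 \
    --master-username admin \
    --master-user-password my-secret-password \
    --db-cluster-instance-class db.r5d.large \
    --storage-type io1 \
    --iops 1000 \
    --allocated-storage 100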

Multi-AZ DB clusters aren't available with the following engines:

• RDS for MariaDB


• RDS for Oracle
• RDS for SQL Server

Topics
• Multi-AZ DB clusters with RDS for MySQL (p. 147)
• Multi-AZ DB clusters with RDS for PostgreSQL (p. 148)

Multi-AZ DB clusters with RDS for MySQL


The following Regions and engine versions are available for Multi-AZ DB clusters with RDS for MySQL.

Region RDS for MySQL 8.0

US East (Ohio) Version 8.0.28 and higher

US East (N. Virginia) Version 8.0.28 and higher

US West (N. California) –

US West (Oregon) Version 8.0.28 and higher

Africa (Cape Town) Version 8.0.28 and higher

Asia Pacific (Hong Kong) Version 8.0.28 and higher

Asia Pacific (Hyderabad) –

Asia Pacific (Jakarta) Version 8.0.28 and higher

Asia Pacific (Melbourne) –

Asia Pacific (Mumbai) Version 8.0.28 and higher

Asia Pacific (Osaka) Version 8.0.28 and higher

Asia Pacific (Seoul) Version 8.0.28 and higher

Asia Pacific (Singapore) Version 8.0.28 and higher

Asia Pacific (Sydney) Version 8.0.28 and higher

Asia Pacific (Tokyo) Version 8.0.28 and higher

Canada (Central) Version 8.0.28 and higher

China (Beijing) Version 8.0.28 and higher

China (Ningxia) Version 8.0.28 and higher

Europe (Frankfurt) Version 8.0.28 and higher

Europe (Ireland) Version 8.0.28 and higher

Europe (London) Version 8.0.28 and higher

Europe (Milan) Version 8.0.28 and higher

Europe (Paris) Version 8.0.28 and higher

Europe (Spain) –

Europe (Stockholm) Version 8.0.28 and higher

Europe (Zurich) –

Israel (Tel Aviv) –

Middle East (Bahrain) Version 8.0.28 and higher

Middle East (UAE) –

South America (São Paulo) Version 8.0.28 and higher

AWS GovCloud (US-East) –

AWS GovCloud (US-West) –

You can also list the available versions in a Region for the db.r5d.large DB instance class by running the
following AWS CLI command.

For Linux, macOS, or Unix:

aws rds describe-orderable-db-instance-options \
    --engine mysql \
    --db-instance-class db.r5d.large \
    --query '*[]|[?SupportsClusters == `true`].[EngineVersion]' \
    --output text

For Windows:

aws rds describe-orderable-db-instance-options ^
    --engine mysql ^
    --db-instance-class db.r5d.large ^
    --query "*[]|[?SupportsClusters == `true`].[EngineVersion]" ^
    --output text

You can change the DB instance class in the command to show the engine versions available for a different
instance class.

Multi-AZ DB clusters with RDS for PostgreSQL


The following Regions and engine versions are available for Multi-AZ DB clusters with RDS for
PostgreSQL.

Region RDS for PostgreSQL 15 RDS for PostgreSQL 14 RDS for PostgreSQL 13

US East (Ohio) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

US East (N. Virginia) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

US West (N. California) – – –

US West (Oregon) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

Africa (Cape Town) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

Asia Pacific (Hong Kong) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

Asia Pacific (Hyderabad) – – –

Asia Pacific (Jakarta) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

Asia Pacific (Melbourne) – – –

Asia Pacific (Mumbai) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

Asia Pacific (Osaka) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

Asia Pacific (Seoul) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

Asia Pacific (Singapore) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

Asia Pacific (Sydney) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

Asia Pacific (Tokyo) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

Canada (Central) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

China (Beijing) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

China (Ningxia) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

Europe (Frankfurt) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

Europe (Ireland) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

Europe (London) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

Europe (Milan) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

Europe (Paris) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

Europe (Spain) – – –

Europe (Stockholm) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

Europe (Zurich) – – –

Israel (Tel Aviv) – – –

Middle East (Bahrain) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

Middle East (UAE) – – –

South America (São Paulo) All PostgreSQL 15 versions Version 14.5 and higher Version 13.4 and version
13.7 and higher

AWS GovCloud (US-East) – – –

AWS GovCloud (US-West) – – –

You can also list the available versions in a Region for the db.r5d.large DB instance class by running the
following AWS CLI command.

For Linux, macOS, or Unix:

aws rds describe-orderable-db-instance-options \
    --engine postgres \
    --db-instance-class db.r5d.large \
    --query '*[]|[?SupportsClusters == `true`].[EngineVersion]' \
    --output text

For Windows:

aws rds describe-orderable-db-instance-options ^
    --engine postgres ^
    --db-instance-class db.r5d.large ^
    --query "*[]|[?SupportsClusters == `true`].[EngineVersion]" ^
    --output text

You can change the DB instance class in the command to show the engine versions available for a different
instance class.

Performance Insights
Performance Insights in Amazon RDS expands on existing Amazon RDS monitoring features to illustrate
and help you analyze your database performance. With the Performance Insights dashboard, you can
visualize the database load on your Amazon RDS DB instance. You can also filter the load by waits, SQL
statements, hosts, or users. For more information, see Monitoring DB load with Performance Insights on
Amazon RDS (p. 720).

Performance Insights is available for all RDS DB engines and all versions.

Performance Insights is available in all AWS Regions.

For the region, DB engine, and instance class support information for Performance Insights features, see
Amazon RDS DB engine, Region, and instance class support for Performance Insights features (p. 725).
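
For example, the following AWS CLI command is a sketch of turning on Performance Insights for an existing
DB instance. The instance identifier is a placeholder.

# Example only: replace the instance identifier; 7 days is the free retention tier.
aws rds modify-db-instance \
    --db-instance-identifier my-db-instance \
    --enable-performance-insights \
    --performance-insights-retention-period 7 \
    --apply-immediately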

RDS Custom
Amazon RDS Custom automates database administration tasks and operations. By using RDS Custom, you as a
database administrator can access and customize your database environment and operating system. With
RDS Custom, you can make the customizations that legacy, custom, and packaged applications require. For
more information, see Working with Amazon RDS Custom (p. 978).

RDS Custom is supported for the following DB engines only:

• RDS for Oracle


• RDS for SQL Server

Topics
• RDS Custom for Oracle (p. 151)
• RDS Custom for SQL Server (p. 153)

RDS Custom for Oracle


The following Regions and engine versions are available for RDS Custom for Oracle.

Region RDS for Oracle 19c RDS for Oracle 18c RDS for Oracle 12c

US East (Ohio) 19c with the January 2021 18c with the January 2021 12.1 and 12.2 with the
or higher RU/RUR or higher RU/RUR January 2021 or higher RU/
RUR

US East (N. Virginia) 19c with the January 2021 18c with the January 2021 12.1 and 12.2 with the
or higher RU/RUR or higher RU/RUR January 2021 or higher RU/
RUR

US West (N. – – –
California)

US West (Oregon) 19c with the January 2021 18c with the January 2021 12.1 and 12.2 with the
or higher RU/RUR or higher RU/RUR January 2021 or higher RU/
RUR

Africa (Cape Town) – – –

Asia Pacific (Hong – – –
Kong)

Asia Pacific (Jakarta) – – –

Asia Pacific – – –
(Melbourne)

Asia Pacific (Mumbai) 19c with the January 2021 18c with the January 2021 12.1 and 12.2 with the
or higher RU/RUR or higher RU/RUR January 2021 or higher RU/
RUR

Asia Pacific (Osaka) 19c with the January 2021 18c with the January 2021 12.1 and 12.2 with the
or higher RU/RUR or higher RU/RUR January 2021 or higher RU/
RUR

Asia Pacific (Seoul) 19c with the January 2021 18c with the January 2021 12.1 and 12.2 with the
or higher RU/RUR or higher RU/RUR January 2021 or higher RU/
RUR

Asia Pacific 19c with the January 2021 18c with the January 2021 12.1 and 12.2 with the
(Singapore) or higher RU/RUR or higher RU/RUR January 2021 or higher RU/
RUR

Asia Pacific (Sydney) 19c with the January 2021 18c with the January 2021 12.1 and 12.2 with the
or higher RU/RUR or higher RU/RUR January 2021 or higher RU/
RUR

Asia Pacific (Tokyo) 19c with the January 2021 18c with the January 2021 12.1 and 12.2 with the
or higher RU/RUR or higher RU/RUR January 2021 or higher RU/
RUR

Canada (Central) 19c with the January 2021 18c with the January 2021 12.1 and 12.2 with the
or higher RU/RUR or higher RU/RUR January 2021 or higher RU/
RUR

China (Beijing) – – –

China (Ningxia) – – –

Europe (Frankfurt) 19c with the January 2021 18c with the January 2021 12.1 and 12.2 with the
or higher RU/RUR or higher RU/RUR January 2021 or higher RU/
RUR

Europe (Ireland) 19c with the January 2021 18c with the January 2021 12.1 and 12.2 with the
or higher RU/RUR or higher RU/RUR January 2021 or higher RU/
RUR

Europe (London) 19c with the January 2021 18c with the January 2021 12.1 and 12.2 with the
or higher RU/RUR or higher RU/RUR January 2021 or higher RU/
RUR

Europe (Milan) – – –

Europe (Paris) – – –

Europe (Stockholm) 19c with the January 2021 18c with the January 2021 12.1 and 12.2 with the
or higher RU/RUR or higher RU/RUR January 2021 or higher RU/
RUR

Israel (Tel Aviv) – – –

Middle East (Bahrain) – – –

Middle East (UAE) – – –

South America (São 19c with the January 2021 18c with the January 2021 12.1 and 12.2 with the
Paulo) or higher RU/RUR or higher RU/RUR January 2021 or higher RU/
RUR

AWS GovCloud (US- – – –
East)

AWS GovCloud (US- – – –
West)

RDS Custom for SQL Server


You can deploy RDS Custom for SQL Server by using either an RDS provided engine version (RPEV) or a
custom engine version (CEV):

• If you use an RPEV, it includes the default Amazon Machine Image (AMI) and SQL Server installation. If
you customize or modify the operating system (OS), your changes might not persist during patching,
snapshot restore, or automatic recovery.
• If you use a CEV, you choose your own AMI with either pre-installed Microsoft SQL Server or SQL
Server that you install using your own media. When using an AWS provided CEV, you choose the latest
Amazon Machine Image (AMI) provided by AWS, which has the cumulative update (CU) supported by RDS
Custom for SQL Server. With a CEV, you can customize both the OS and SQL Server configuration to
meet your enterprise needs. A CLI sketch for registering a customer-provided CEV follows this list.
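
For illustration only, the following AWS CLI command sketches how a customer-provided CEV might be
registered from an AMI. The engine version name and AMI ID are placeholders, and the exact values depend
on the SQL Server build in your image.

# Example only: the CEV name and AMI ID below are placeholders.
aws rds create-custom-db-engine-version \
    --engine custom-sqlserver-ee \
    --engine-version 15.00.4249.2.my-cev1 \
    --image-id ami-0abcdef1234567890 \
    --description "My SQL Server 2019 CEV"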

The following AWS Regions and DB engine versions are available for RDS Custom for SQL Server. The
engine version support depends on whether you're using RDS Custom for SQL Server with an RPEV, AWS
provided CEV, or customer-provided CEV.

Region RPEV AWS provided CEV Customer-provided CEV

US East (Ohio) Enterprise, Standard, or Enterprise, Standard, or Enterprise or Standard SQL
Web SQL Server 2019 with Web SQL Server 2019 with Server 2019 with CU17,
CU8, CU17, CU18, CU20 CU17, CU18, CU20 CU18, CU20

US East (N. Virginia) Enterprise, Standard, or Enterprise, Standard, or Enterprise or Standard SQL
Web SQL Server 2019 with Web SQL Server 2019 with Server 2019 with CU17,
CU8, CU17, CU18, CU20 CU17, CU18, CU20 CU18, CU20

US West (N. California) – – –

US West (Oregon) Enterprise, Standard, or Enterprise, Standard, or Enterprise or Standard SQL
Web SQL Server 2019 with Web SQL Server 2019 with Server 2019 with CU17,
CU8, CU17, CU18, CU20 CU17, CU18, CU20 CU18, CU20

Africa (Cape Town) – – –

Asia Pacific (Hong – – –
Kong)

Asia Pacific (Hyderabad) – – –

Asia Pacific (Jakarta) – – –

Asia Pacific (Melbourne) – – –

Asia Pacific (Mumbai) Enterprise, Standard, or Enterprise, Standard, or Enterprise or Standard SQL
Web SQL Server 2019 with Web SQL Server 2019 with Server 2019 with CU17,
CU8, CU17, CU18, CU20 CU17, CU18, CU20 CU18, CU20

Asia Pacific (Osaka) – – –

Asia Pacific (Seoul) Enterprise, Standard, or Enterprise, Standard, or Enterprise or Standard SQL
Web SQL Server 2019 with Web SQL Server 2019 with Server 2019 with CU17,
CU8, CU17, CU18, CU20 CU17, CU18, CU20 CU18, CU20

Asia Pacific (Singapore) Enterprise, Standard, or Enterprise, Standard, or Enterprise or Standard SQL
Web SQL Server 2019 with Web SQL Server 2019 with Server 2019 with CU17,
CU8, CU17, CU18, CU20 CU17, CU18, CU20 CU18, CU20

Asia Pacific (Sydney) Enterprise, Standard, or Enterprise, Standard, or Enterprise or Standard SQL
Web SQL Server 2019 with Web SQL Server 2019 with Server 2019 with CU17,
CU8, CU17, CU18, CU20 CU17, CU18, CU20 CU18, CU20

Asia Pacific (Tokyo) Enterprise, Standard, or Enterprise, Standard, or Enterprise or Standard SQL
Web SQL Server 2019 with Web SQL Server 2019 with Server 2019 with CU17,
CU8, CU17, CU18, CU20 CU17, CU18, CU20 CU18, CU20

Canada (Central) Enterprise, Standard, or Enterprise, Standard, or Enterprise or Standard SQL
Web SQL Server 2019 with Web SQL Server 2019 with Server 2019 with CU17,
CU8, CU17, CU18, CU20 CU17, CU18, CU20 CU18, CU20

China (Beijing) – – –

China (Ningxia) – – –

Europe (Frankfurt) Enterprise, Standard, or Enterprise, Standard, or Enterprise or Standard SQL
Web SQL Server 2019 with Web SQL Server 2019 with Server 2019 with CU17,
CU8, CU17, CU18, CU20 CU17, CU18, CU20 CU18, CU20

Europe (Ireland) Enterprise, Standard, or Enterprise, Standard, or Enterprise or Standard SQL
Web SQL Server 2019 with Web SQL Server 2019 with Server 2019 with CU17,
CU8, CU17, CU18, CU20 CU17, CU18, CU20 CU18, CU20

Europe (London) Enterprise, Standard, or Enterprise, Standard, or Enterprise or Standard SQL
Web SQL Server 2019 with Web SQL Server 2019 with Server 2019 with CU17,
CU8, CU17, CU18, CU20 CU17, CU18, CU20 CU18, CU20

Europe (Milan) – – –

Europe (Paris) – – –

Europe (Spain) – – –

Europe (Stockholm) Enterprise, Standard, or Enterprise, Standard, or Enterprise or Standard SQL
Web SQL Server 2019 with Web SQL Server 2019 with Server 2019 with CU17,
CU8, CU17, CU18, CU20 CU17, CU18, CU20 CU18, CU20

Europe (Zurich) – – –

Israel (Tel Aviv) – – –

Middle East (Bahrain) – – –

Middle East (UAE) – – –

South America (São Enterprise, Standard, or Enterprise, Standard, or Enterprise or Standard SQL
Paulo) Web SQL Server 2019 with Web SQL Server 2019 with Server 2019 with CU17,
CU8, CU17, CU18, CU20 CU17, CU18, CU20 CU18, CU20

AWS GovCloud (US-East) – – –

AWS GovCloud (US-West) – – –

Amazon RDS Proxy


Amazon RDS Proxy is a fully managed, highly available database proxy that makes applications more
scalable by pooling and sharing established database connections. For more information, see Using
Amazon RDS Proxy (p. 1199).

RDS Proxy isn't available with RDS for Oracle.
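
As a minimal sketch (it assumes the proxy's IAM role, Secrets Manager secret, and subnets already exist,
and all identifiers shown are placeholders), the following AWS CLI commands create a proxy for a
MySQL-family database and register a DB instance as its target:

aws rds create-db-proxy \
    --db-proxy-name my-proxy \
    --engine-family MYSQL \
    --auth AuthScheme=SECRETS,SecretArn=arn:aws:secretsmanager:us-east-1:123456789012:secret:my-db-secret,IAMAuth=DISABLED \
    --role-arn arn:aws:iam::123456789012:role/my-rds-proxy-role \
    --vpc-subnet-ids subnet-0abc1234 subnet-0def5678

aws rds register-db-proxy-targets \
    --db-proxy-name my-proxy \
    --db-instance-identifiers mydbinstance

Your application then connects to the proxy endpoint instead of the DB instance endpoint. See Using
Amazon RDS Proxy (p. 1199) for the full setup steps.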

Topics
• RDS Proxy with RDS for MariaDB (p. 155)
• RDS Proxy with RDS for MySQL (p. 157)
• RDS Proxy with RDS for PostgreSQL (p. 158)
• RDS Proxy with RDS for SQL Server (p. 160)

RDS Proxy with RDS for MariaDB


The following Regions and engine versions are available for RDS Proxy with RDS for MariaDB.

Region RDS for MariaDB RDS for MariaDB RDS for MariaDB RDS for MariaDB RDS for MariaDB
10.11 10.6 10.5 10.4 10.3

US East (Ohio) All available All available All available All available All available
versions versions versions versions versions

US East (N. All available All available All available All available All available
Virginia) versions versions versions versions versions

US West (N. All available All available All available All available All available
California) versions versions versions versions versions

US West All available All available All available All available All available
(Oregon) versions versions versions versions versions

Africa (Cape All available All available All available All available All available
Town) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Hong Kong) versions versions versions versions versions

Asia Pacific – – – – –
(Hyderabad)

Asia Pacific All available All available All available All available All available
(Jakarta) versions versions versions versions versions

Asia Pacific – – – – –
(Melbourne)

Asia Pacific All available All available All available All available All available
(Mumbai) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Osaka) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Seoul) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Singapore) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Sydney) versions versions versions versions versions

Asia Pacific All available All available All available All available All available
(Tokyo) versions versions versions versions versions

Canada (Central) All available All available All available All available All available
versions versions versions versions versions

China (Beijing) All available All available All available All available All available
versions versions versions versions versions

China (Ningxia) All available All available All available All available All available
versions versions versions versions versions

Europe All available All available All available All available All available
(Frankfurt) versions versions versions versions versions

Europe (Ireland) All available All available All available All available All available
versions versions versions versions versions

Europe (London) All available All available All available All available All available
versions versions versions versions versions

Europe (Milan) All available All available All available All available All available
versions versions versions versions versions

Europe (Paris) All available All available All available All available All available
versions versions versions versions versions

Europe (Spain) – – – – –

Europe All available All available All available All available All available
(Stockholm) versions versions versions versions versions

Europe (Zurich) All available All available – – –
versions versions

Israel (Tel Aviv) – – – – –

Middle East All available All available All available All available All available
(Bahrain) versions versions versions versions versions

Middle East – – – – –
(UAE)

South America All available All available All available All available All available
(São Paulo) versions versions versions versions versions

AWS GovCloud – – – – –
(US-East)

AWS GovCloud – – – – –
(US-West)

RDS Proxy with RDS for MySQL


The following Regions and engine versions are available for RDS Proxy with RDS for MySQL.

Region RDS for MySQL 8.0 RDS for MySQL 5.7

US East (Ohio) All available versions All available versions

US East (N. Virginia) All available versions All available versions

US West (N. California) All available versions All available versions

US West (Oregon) All available versions All available versions

Africa (Cape Town) All available versions All available versions

Asia Pacific (Hong Kong) All available versions All available versions

Asia Pacific (Hyderabad) – –

Asia Pacific (Jakarta) All available versions All available versions

Asia Pacific (Melbourne) – –

Asia Pacific (Mumbai) All available versions All available versions

Asia Pacific (Osaka) All available versions All available versions

Asia Pacific (Seoul) All available versions All available versions

Asia Pacific (Singapore) All available versions All available versions

Asia Pacific (Sydney) All available versions All available versions

Asia Pacific (Tokyo) All available versions All available versions

Canada (Central) All available versions All available versions

China (Beijing) All available versions All available versions

China (Ningxia) All available versions All available versions

Europe (Frankfurt) All available versions All available versions

Europe (Ireland) All available versions All available versions

Europe (London) All available versions All available versions

Europe (Milan) All available versions All available versions

Europe (Paris) All available versions All available versions

Europe (Spain) – –

Europe (Stockholm) All available versions All available versions

Europe (Zurich) – –

Israel (Tel Aviv) – –

Middle East (Bahrain) All available versions All available versions

Middle East (UAE) – –

South America (São Paulo) All available versions All available versions

AWS GovCloud (US-East) – –

AWS GovCloud (US-West) – –

RDS Proxy with RDS for PostgreSQL


The following Regions and engine versions are available for RDS Proxy with RDS for PostgreSQL.

Region RDS for RDS for RDS for RDS for RDS for RDS for
PostgreSQL PostgreSQL PostgreSQL PostgreSQL PostgreSQL PostgreSQL
15 14 13 12 11 10

US East (Ohio) All available All available All available All available All available All available
versions versions versions versions versions versions

US East (N. All available All available All available All available All available All available
Virginia) versions versions versions versions versions versions

US West (N. All available All available All available All available All available All available
California) versions versions versions versions versions versions

US West All available All available All available All available All available All available
(Oregon) versions versions versions versions versions versions

Africa (Cape All available All available All available All available All available All available
Town) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Hong Kong) versions versions versions versions versions versions

Asia Pacific – – – – – –
(Hyderabad)

Asia Pacific All available All available All available All available All available All available
(Jakarta) versions versions versions versions versions versions

Asia Pacific – – – – – –
(Melbourne)

Asia Pacific All available All available All available All available All available All available
(Mumbai) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Osaka) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Seoul) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Singapore) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Sydney) versions versions versions versions versions versions

Asia Pacific All available All available All available All available All available All available
(Tokyo) versions versions versions versions versions versions

Canada All available All available All available All available All available All available
(Central) versions versions versions versions versions versions

China (Beijing) All available All available All available All available All available All available
versions versions versions versions versions versions

China All available All available All available All available All available All available
(Ningxia) versions versions versions versions versions versions

Europe All available All available All available All available All available All available
(Frankfurt) versions versions versions versions versions versions

Europe All available All available All available All available All available All available
(Ireland) versions versions versions versions versions versions

Europe All available All available All available All available All available All available
(London) versions versions versions versions versions versions

Europe (Milan) All available All available All available All available All available All available
versions versions versions versions versions versions

Europe (Paris) All available All available All available All available All available All available
versions versions versions versions versions versions

Europe (Spain) – – – – – –

Europe All available All available All available All available All available All available
(Stockholm) versions versions versions versions versions versions

Europe – – – – – –
(Zurich)

Israel (Tel – – – – – –
Aviv)

Middle East All available All available All available All available All available All available
(Bahrain) versions versions versions versions versions versions

Middle East – – – – – –
(UAE)

South All available All available All available All available All available All available
America (São versions versions versions versions versions versions
Paulo)

AWS – – – – – –
GovCloud
(US-East)

AWS – – – – – –
GovCloud
(US-West)

RDS Proxy with RDS for SQL Server


The following Regions and engine versions are available for RDS Proxy with RDS for SQL Server.

Region RDS for SQL Server RDS for SQL Server RDS for SQL Server RDS for SQL Server
2019 2017 2016 2014

US East (Ohio) All available versions All available versions All available versions All available versions

US East (N. Virginia) All available versions All available versions All available versions All available versions

US West (N. All available versions All available versions All available versions All available versions
California)

US West (Oregon) All available versions All available versions All available versions All available versions

Africa (Cape Town) All available versions All available versions All available versions All available versions

Asia Pacific (Hong All available versions All available versions All available versions All available versions
Kong)

Asia Pacific – – – –
(Hyderabad)

Asia Pacific (Jakarta) All available versions All available versions All available versions All available versions

Asia Pacific – – – –
(Melbourne)

Asia Pacific (Mumbai) All available versions All available versions All available versions All available versions

Asia Pacific (Osaka) All available versions All available versions All available versions All available versions

Asia Pacific (Seoul) All available versions All available versions All available versions All available versions

Asia Pacific All available versions All available versions All available versions All available versions
(Singapore)

Asia Pacific (Sydney) All available versions All available versions All available versions All available versions

Asia Pacific (Tokyo) All available versions All available versions All available versions All available versions

Canada (Central) All available versions All available versions All available versions All available versions

China (Beijing) All available versions All available versions All available versions All available versions

China (Ningxia) All available versions All available versions All available versions All available versions

Europe (Frankfurt) All available versions All available versions All available versions All available versions

Europe (Ireland) All available versions All available versions All available versions All available versions

Europe (London) All available versions All available versions All available versions All available versions

Europe (Milan) All available versions All available versions All available versions All available versions

Europe (Paris) All available versions All available versions All available versions All available versions

Europe (Spain) – – – –

Europe (Stockholm) All available versions All available versions All available versions All available versions

Europe (Zurich) – – – –

Israel (Tel Aviv) – – – –

Middle East (Bahrain) All available versions All available versions All available versions All available versions

Middle East (UAE) – – – –

South America (São All available versions All available versions All available versions All available versions
Paulo)

AWS GovCloud (US-East) – – – –

AWS GovCloud (US-West) – – – –

Secrets Manager integration


With AWS Secrets Manager, you can replace hard-coded credentials in your code, including database
passwords, with an API call to Secrets Manager to retrieve the secret programmatically. For more
information about Secrets Manager, see AWS Secrets Manager User Guide.

You can specify that Amazon RDS manages the master user password in Secrets Manager for an Amazon
RDS DB instance or Multi-AZ DB cluster. RDS generates the password, stores it in Secrets Manager,
and rotates it regularly. For more information, see Password management with Amazon RDS and AWS
Secrets Manager (p. 2568).
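
As a hedged sketch of this pattern, the following AWS CLI commands create a DB instance whose master user
password is managed by RDS in Secrets Manager, find the ARN of the generated secret, and retrieve the
secret value. The instance identifier, instance class, and ARNs shown are placeholders.

aws rds create-db-instance \
    --db-instance-identifier mydbinstance \
    --db-instance-class db.t3.micro \
    --engine mysql \
    --allocated-storage 20 \
    --master-username admin \
    --manage-master-user-password

aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query 'DBInstances[0].MasterUserSecret.SecretArn'

aws secretsmanager get-secret-value \
    --secret-id arn:aws:secretsmanager:us-east-1:123456789012:secret:rds!db-example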

Secrets Manager integration is supported for all RDS DB engines and all versions.

Secrets Manager integration is supported in all AWS Regions except the following:

• Israel (Tel Aviv)


• AWS GovCloud (US-East)
• AWS GovCloud (US-West)

Engine-native features
Amazon RDS database engines also support many of the most common engine-native features and
functionality. These features are different than the Amazon RDS-native features listed on this page.
Some engine-native features might have limited support or restricted privileges.

For more information on engine-native features, see:

• MariaDB feature support on Amazon RDS (p. 1256)


• MySQL feature support on Amazon RDS (p. 1624)
• RDS for Oracle features (p. 1786)
• Working with PostgreSQL features supported by Amazon RDS for PostgreSQL (p. 2158)
• Microsoft SQL Server features on Amazon RDS (p. 1364)

DB instance billing for Amazon RDS


Amazon RDS instances are billed based on the following components:

• DB instance hours (per hour) – Based on the DB instance class of the DB instance (for example,
db.t2.small or db.m4.large). Pricing is listed on a per-hour basis, but bills are calculated down to the
second and show times in decimal form. RDS usage is billed in 1-second increments, with a minimum
of 10 minutes. For more information, see DB instance classes (p. 11).
• Storage (per GiB per month) – Storage capacity that you have provisioned to your DB instance. If you
scale your provisioned storage capacity within the month, your bill is prorated. For more information,
see Amazon RDS DB instance storage (p. 101).
• Input/output (I/O) requests (per 1 million requests) – Total number of storage I/O requests that you
have made in a billing cycle, for Amazon RDS magnetic storage only.
• Provisioned IOPS (per IOPS per month) – Provisioned IOPS rate, regardless of IOPS consumed, for
Amazon RDS Provisioned IOPS (SSD) and General Purpose (SSD) gp3 storage. Provisioned storage for
EBS volumes is billed in 1-second increments, with a minimum of 10 minutes.
• Backup storage (per GiB per month) – Backup storage is the storage that is associated with automated
database backups and any active database snapshots that you have taken. Increasing your backup
retention period or taking additional database snapshots increases the backup storage consumed by
your database. Per second billing doesn't apply to backup storage (metered in GB-month).

For more information, see Backing up and restoring (p. 590).


• Data transfer (per GB) – Data transfer in and out of your DB instance from or to the internet and other
AWS Regions.
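
To check the instance class, storage, and IOPS settings that the charges in the preceding list are based
on for a particular DB instance, you can query the RDS API. For example (the identifier is a placeholder):

aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query 'DBInstances[0].[DBInstanceClass,AllocatedStorage,StorageType,Iops,MultiAZ]'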

Amazon RDS provides the following purchasing options to enable you to optimize your costs based on
your needs:

• On-Demand instances – Pay by the hour for the DB instance hours that you use. Pricing is listed on a
per-hour basis, but bills are calculated down to the second and show times in decimal form. RDS usage
is now billed in 1-second increments, with a minimum of 10 minutes.
• Reserved instances – Reserve a DB instance for a one-year or three-year term and get a significant
discount compared to the on-demand DB instance pricing. With Reserved Instance usage, you can
launch, delete, start, or stop multiple instances within an hour and get the Reserved Instance benefit
for all of the instances.

For Amazon RDS pricing information, see the Amazon RDS pricing page.

Topics
• On-Demand DB instances for Amazon RDS (p. 164)
• Reserved DB instances for Amazon RDS (p. 165)

On-Demand DB instances for Amazon RDS


Amazon RDS on-demand DB instances are billed based on the class of the DB instance (for example,
db.t3.small or db.m5.large). For Amazon RDS pricing information, see the Amazon RDS product page.

Billing starts for a DB instance as soon as the DB instance is available. Pricing is listed on a per-hour
basis, but bills are calculated down to the second and show times in decimal form. Amazon RDS usage
is billed in one-second increments, with a minimum of 10 minutes. In the case of billable configuration
change, such as scaling compute or storage capacity, you're charged a 10-minute minimum. Billing
continues until the DB instance terminates, which occurs when you delete the DB instance or if the DB
instance fails.

If you no longer want to be charged for your DB instance, you must stop or delete it to avoid being billed
for additional DB instance hours. For more information about the DB instance states for which you are
billed, see Viewing Amazon RDS DB instance status (p. 684).
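
For example, the following AWS CLI commands stop or delete a DB instance; the instance identifier and
final snapshot name are placeholders.

aws rds stop-db-instance \
    --db-instance-identifier mydbinstance

aws rds delete-db-instance \
    --db-instance-identifier mydbinstance \
    --final-db-snapshot-identifier mydbinstance-final-snapshot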

Stopped DB instances
While your DB instance is stopped, you're charged for provisioned storage, including Provisioned IOPS.
You are also charged for backup storage, including storage for manual snapshots and automated
backups within your specified retention window. You aren't charged for DB instance hours.

Multi-AZ DB instances
If you specify that your DB instance should be a Multi-AZ deployment, you're billed according to the
Multi-AZ pricing posted on the Amazon RDS pricing page.

Reserved DB instances for Amazon RDS


Using reserved DB instances, you can reserve a DB instance for a one- or three-year term. Reserved DB
instances provide you with a significant discount compared to on-demand DB instance pricing. Reserved
DB instances are not physical instances, but rather a billing discount applied to the use of certain on-
demand DB instances in your account. Discounts for reserved DB instances are tied to instance type and
AWS Region.

The general process for working with reserved DB instances is: First get information about available
reserved DB instance offerings, then purchase a reserved DB instance offering, and finally get
information about your existing reserved DB instances.

Overview of reserved DB instances


When you purchase a reserved DB instance in Amazon RDS, you purchase a commitment to getting a
discounted rate, on a specific DB instance type, for the duration of the reserved DB instance. To use an
Amazon RDS reserved DB instance, you create a new DB instance just like you do for an on-demand
instance.

The new DB instance that you create must have the same specifications as the reserved DB instance for
the following:

• AWS Region
• DB engine
• DB instance type
• Edition (for RDS for Oracle and RDS for SQL Server)
• License type (license-included or bring-your-own-license)
• Deployment model (Single-AZ or Multi-AZ)

If the specifications of the new DB instance match an existing reserved DB instance for your account, you
are billed at the discounted rate offered for the reserved DB instance. Otherwise, the DB instance is billed
at an on-demand rate.

You can modify a DB instance that you're using as a reserved DB instance. If the modification is within
the specifications of the reserved DB instance, part or all of the discount still applies to the modified
DB instance. If the modification is outside the specifications, such as changing the instance class, the
discount no longer applies. For more information, see Size-flexible reserved DB instances (p. 166).

Topics
• Offering types (p. 165)
• Size-flexible reserved DB instances (p. 166)
• Reserved DB instance billing example (p. 168)
• Reserved DB instances for a Multi-AZ DB cluster (p. 168)
• Deleting a reserved DB instance (p. 169)

For more information about reserved DB instances, including pricing, see Amazon RDS reserved
instances.

Offering types
Reserved DB instances are available in three varieties—No Upfront, Partial Upfront, and All Upfront—
that let you optimize your Amazon RDS costs based on your expected usage.

No Upfront

This option provides access to a reserved DB instance without requiring an upfront payment. Your
No Upfront reserved DB instance bills a discounted hourly rate for every hour within the term,
regardless of usage, and no upfront payment is required. This option is only available as a one-year
reservation.
Partial Upfront

This option requires part of the cost of the reserved DB instance to be paid upfront. The remaining hours
in the term are billed at a discounted hourly rate, regardless of usage. This option replaces the
previous Heavy Utilization option.
All Upfront

Full payment is made at the start of the term, with no other costs incurred for the remainder of the
term regardless of the number of hours used.

If you are using consolidated billing, all the accounts in the organization are treated as one account. This
means that all accounts in the organization can receive the hourly cost benefit of reserved DB instances
that are purchased by any other account. For more information about consolidated billing, see Amazon
RDS reserved DB instances in the AWS Billing and Cost Management User Guide.

Size-flexible reserved DB instances


When you purchase a reserved DB instance, one thing that you specify is the instance class, for example
db.r5.large. For more information about DB instance classes, see DB instance classes (p. 11).

If you have a DB instance, and you need to scale it to larger capacity, your reserved DB instance is
automatically applied to your scaled DB instance. That is, your reserved DB instances are automatically
applied across all DB instance class sizes. Size-flexible reserved DB instances are available for DB
instances with the same AWS Region and database engine. Size-flexible reserved DB instances can only
scale in their instance class type. For example, a reserved DB instance for a db.r5.large can apply to a
db.r5.xlarge, but not to a db.r6g.large, because db.r5 and db.r6g are different instance class types.

Reserved DB instance benefits also apply for both Multi-AZ and Single-AZ configurations. Flexibility
means that you can move freely between configurations within the same DB instance class type. For
example, you can move from a Single-AZ deployment running on one large DB instance (four normalized
units per hour) to a Multi-AZ deployment running on two medium DB instances (2 * 2 = 4 normalized units
per hour).

Size-flexible reserved DB instances are available for the following Amazon RDS database engines:

• MariaDB
• MySQL
• Oracle, Bring Your Own License
• PostgreSQL

For details about using size-flexible reserved instances with Aurora, see Reserved DB instances for
Aurora.

You can compare usage for different reserved DB instance sizes by using normalized units per hour. For
example, one hour of usage on two db.r3.large DB instances (four normalized units each) is equivalent to
eight normalized units, the same as eight hours of usage on one db.r3.small. The following table shows
the number of normalized units per hour for each DB instance size.

Instance size   Single-AZ normalized units per hour   Multi-AZ DB instance normalized units    Multi-AZ DB cluster normalized units
                (deployment with one DB instance)     per hour (deployment with one DB         per hour (deployment with one DB
                                                      instance and one standby)                instance and two standbys)

micro 0.5 1 1.5

small 1 2 3

medium 2 4 6

large 4 8 12

xlarge 8 16 24

2xlarge 16 32 48

4xlarge 32 64 96

6xlarge 48 96 144

8xlarge 64 128 192

10xlarge 80 160 240

12xlarge 96 192 288

16xlarge 128 256 384

24xlarge 192 384 576

32xlarge 256 512 768

For example, suppose that you purchase a db.t2.medium reserved DB instance, and you have two
running db.t2.small DB instances in your account in the same AWS Region. In this case, the billing
benefit is applied in full to both instances.

Alternatively, if you have one db.t2.large instance running in your account in the same AWS Region,
the billing benefit is applied to 50 percent of the usage of the DB instance.
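
Using the normalized units from the preceding table, the arithmetic behind these two examples is:

db.t2.medium reserved      = 2 normalized units per hour
two db.t2.small running    = 2 x 1 = 2 normalized units per hour  -> fully covered by the reservation
one db.t2.large running    = 4 normalized units per hour          -> 2 / 4 = 50 percent covered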

Reserved DB instance billing example


The price for a reserved DB instance doesn't provide a discount for the costs associated with storage,
backups, and I/O. It provides a discount only on the hourly, on-demand instance usage. The following
example illustrates the total cost per month for a reserved DB instance:

• An RDS for MySQL reserved Single-AZ db.r5.large DB instance class in US East (N. Virginia) with the No
Upfront option at a cost of $0.12 per hour for the instance, or about $90 per month
• 400 GiB of General Purpose SSD (gp2) storage at a cost of $0.114 per GiB per month, or $45.60 per
month
• 600 GiB of backup storage at $0.095 per GiB per month, or $19 per month (the first 400 GiB of backup
storage is free)

Add all of these charges ($90 + $45.60 + $19) with the reserved DB instance, and the total cost per
month is $154.60.

If you choose to use an on-demand DB instance instead of a reserved DB instance, an RDS for MySQL
Single-AZ db.r5.large DB instance class in US East (N. Virginia) costs $0.1386 per hour, or $101.18 per
month. So, for an on-demand DB instance, add all of these options ($101.18 + $45.60 + $19), and the
total cost per month is $165.78. You save a little over $11 per month by using the reserved DB instance.
Note
The prices in this example are sample prices and might not match actual prices. For Amazon RDS
pricing information, see Amazon RDS pricing.

Reserved DB instances for a Multi-AZ DB cluster


To purchase the equivalent reserved DB instances for a Multi-AZ DB cluster, you can do one of the
following:

• Reserve three Single-AZ DB instances that are the same size as the instances in the cluster.
• Reserve one Multi-AZ DB instance and one Single-AZ DB instance that are the same size as the DB
instances in the cluster.

For example, suppose that you have one cluster consisting of three db.m6gd.large DB instances.
In this case, you can either purchase three db.m6gd.large Single-AZ reserved DB instances, or one
db.m6gd.large Multi-AZ reserved DB instance and one db.m6gd.large Single-AZ reserved DB instance.
Either of these options reserves the maximum reserved instance discount for the Multi-AZ DB cluster.

Alternately, you can use size-flexible DB instances and purchase a larger DB instance to cover smaller DB
instances in one or more clusters. For example, if you have two clusters with six total db.m6gd.large DB
instances, you can purchase three db.m6gd.xlarge Single-AZ reserved DB instances. Doing so reserves all six
DB instances in the two clusters. For more information, see Size-flexible reserved DB instances (p. 166).

You might reserve DB instances that are the same size as the DB instances in the cluster, but reserve
fewer DB instances than the total number of DB instances in the cluster. However, if you do so,
the cluster is only partially reserved. For example, suppose that you have one cluster with three
db.m6gd.large DB instances, and you purchase one db.m6gd.large Multi-AZ reserved DB instance.
In this case, the cluster is only partially reserved, because only two of the three instances in the
cluster are covered by reserved DB instances. The remaining DB instance is charged at the on-demand
db.m6gd.large hourly rate.

For more information about Multi-AZ DB clusters, see Multi-AZ DB cluster deployments (p. 499).

Deleting a reserved DB instance


The terms for a reserved DB instance involve a one-year or three-year commitment. You can't cancel a
reserved DB instance. However, you can delete a DB instance that is covered by a reserved DB instance
discount. The process for deleting a DB instance that is covered by a reserved DB instance discount is the
same as for any other DB instance.

You're billed for the upfront costs regardless of whether you use the resources.

If you delete a DB instance that is covered by a reserved DB instance discount, you can launch another DB
instance with compatible specifications. In this case, you continue to get the discounted rate during the
reservation term (one or three years).

Working with reserved DB instances


You can use the AWS Management Console, the AWS CLI, and the RDS API to work with reserved DB
instances.

Console
You can use the AWS Management Console to work with reserved DB instances as shown in the following
procedures.

To get pricing and information about available reserved DB instance offerings

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Reserved instances.
3. Choose Purchase Reserved DB Instance.
4. For Product description, choose the DB engine and licensing type.
5. For DB instance class, choose the DB instance class.
6. For Deployment Option, choose whether you want a Single-AZ or Multi-AZ DB instance
deployment.
Note
To purchase the equivalent reserved DB instances for a Multi-AZ DB cluster deployment,
either purchase three Single-AZ reserved DB instances, or one Multi-AZ and one Single-AZ
reserved DB instance. For more information, see Reserved DB instances for a Multi-AZ DB
cluster (p. 168).
7. For Term, choose the length of time to reserve the DB instance.
8. For Offering type, choose the offering type.

After you select the offering type, you can see the pricing information.

Important
Choose Cancel to avoid purchasing the reserved DB instance and incurring any charges.

After you have information about the available reserved DB instance offerings, you can use the
information to purchase an offering as shown in the following procedure.

To purchase a reserved DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Reserved instances.
3. Choose Purchase reserved DB instance.
4. For Product description, choose the DB engine and licensing type.
5. For DB instance class, choose the DB instance class.
6. For Multi-AZ deployment, choose whether you want a Single-AZ or Multi-AZ DB instance
deployment.
Note
To purchase the equivalent reserved DB instances for a Multi-AZ DB cluster deployment,
either purchase three Single-AZ reserved DB instances, or one Multi-AZ and one Single-AZ
reserved DB instance. For more information, see Reserved DB instances for a Multi-AZ DB
cluster (p. 168).
7. For Term, choose the length of time you want the DB instance reserved.
8. For Offering type, choose the offering type.

After you choose the offering type, you can see the pricing information.
9. (Optional) You can assign your own identifier to the reserved DB instances that you purchase to help
you track them. For Reserved Id, type an identifier for your reserved DB instance.
10. Choose Submit.

Your reserved DB instance is purchased, then displayed in the Reserved instances list.

After you have purchased reserved DB instances, you can get information about your reserved DB
instances as shown in the following procedure.

To get information about reserved DB instances for your AWS account

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the Navigation pane, choose Reserved instances.

The reserved DB instances for your account appear. To see detailed information about a particular
reserved DB instance, choose that instance in the list. You can then see detailed information about
that instance in the detail pane at the bottom of the console.

AWS CLI

You can use the AWS CLI to work with reserved DB instances as shown in the following examples.

Example of getting available reserved DB instance offerings

To get information about available reserved DB instance offerings, call the AWS CLI command
describe-reserved-db-instances-offerings.

aws rds describe-reserved-db-instances-offerings

This call returns output similar to the following:

OFFERING OfferingId Class Multi-AZ Duration Fixed Price Usage Price Description Offering Type
OFFERING 438012d3-4052-4cc7-b2e3-8d3372e0e706 db.r3.large y 1y 1820.00
USD 0.368 USD mysql Partial Upfront
OFFERING 649fd0c8-cf6d-47a0-bfa6-060f8e75e95f db.r3.small n 1y 227.50
USD 0.046 USD mysql Partial Upfront
OFFERING 123456cd-ab1c-47a0-bfa6-12345667232f db.r3.small n 1y 162.00
USD 0.00 USD mysql All Upfront
Recurring Charges: Amount Currency Frequency
Recurring Charges: 0.123 USD Hourly
OFFERING 123456cd-ab1c-37a0-bfa6-12345667232d db.r3.large y 1y 700.00
USD 0.00 USD mysql All Upfront
Recurring Charges: Amount Currency Frequency
Recurring Charges: 1.25 USD Hourly
OFFERING 123456cd-ab1c-17d0-bfa6-12345667234e db.r3.xlarge n 1y 4242.00
USD 2.42 USD mysql No Upfront

After you have information about the available reserved DB instance offerings, you can use the
information to purchase an offering.

To purchase a reserved DB instance, use the AWS CLI command purchase-reserved-db-instances-offering
with the following parameters:

• --reserved-db-instances-offering-id – The ID of the offering that you want to purchase. See
the preceding example to get the offering ID.
• --reserved-db-instance-id – You can assign your own identifier to the reserved DB instances
that you purchase to help track them.

Example of purchasing a reserved DB instance

The following example purchases the reserved DB instance offering with ID 649fd0c8-cf6d-47a0-
bfa6-060f8e75e95f, and assigns the identifier of MyReservation.

For Linux, macOS, or Unix:

aws rds purchase-reserved-db-instances-offering \
    --reserved-db-instances-offering-id 649fd0c8-cf6d-47a0-bfa6-060f8e75e95f \
    --reserved-db-instance-id MyReservation

For Windows:

aws rds purchase-reserved-db-instances-offering ^
    --reserved-db-instances-offering-id 649fd0c8-cf6d-47a0-bfa6-060f8e75e95f ^
    --reserved-db-instance-id MyReservation

The command returns output similar to the following:

RESERVATION ReservationId Class Multi-AZ Start Time Duration Fixed Price Usage Price Count State Description Offering Type
RESERVATION MyReservation db.r3.small y 2011-12-19T00:30:23.247Z 1y
455.00 USD 0.092 USD 1 payment-pending mysql Partial Upfront

After you have purchased reserved DB instances, you can get information about your reserved DB
instances.

To get information about reserved DB instances for your AWS account, call the AWS CLI command
describe-reserved-db-instances, as shown in the following example.

Example of getting your reserved DB instances

aws rds describe-reserved-db-instances

The command returns output similar to the following:

RESERVATION ReservationId Class Multi-AZ Start Time Duration Fixed Price Usage Price Count State Description Offering Type
RESERVATION MyReservation db.r3.small y 2011-12-09T23:37:44.720Z 1y
455.00 USD 0.092 USD 1 retired mysql Partial Upfront

RDS API

You can use the RDS API to work with reserved DB instances:

• To get information about available reserved DB instance offerings, call the Amazon RDS API operation
DescribeReservedDBInstancesOfferings.
• After you have information about the available reserved DB instance offerings, you can use the
information to purchase an offering. Call the PurchaseReservedDBInstancesOffering RDS API
operation with the following parameters:
• ReservedDBInstancesOfferingId – The ID of the offering that you want to purchase.
• ReservedDBInstanceId – You can assign your own identifier to the reserved DB instances
that you purchase to help track them.
• After you have purchased reserved DB instances, you can get information about your reserved DB
instances. Call the DescribeReservedDBInstances RDS API operation.

Viewing the billing for your reserved DB instances


You can view the billing for your reserved DB instances in the Billing Dashboard in the AWS Management
Console.
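
The same data is available programmatically through the AWS Cost Explorer API after Cost Explorer has
been enabled for your account. The following AWS CLI sketch returns monthly unblended costs filtered to
Amazon RDS; the date range is a placeholder.

aws ce get-cost-and-usage \
    --time-period Start=2023-01-01,End=2023-07-01 \
    --granularity MONTHLY \
    --metrics UnblendedCost \
    --filter '{"Dimensions":{"Key":"SERVICE","Values":["Amazon Relational Database Service"]}}'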

To view reserved DB instance billing

1. Sign in to the AWS Management Console.


2. From the account menu at the upper right, choose Billing Dashboard.
3. Choose Bill Details at the upper right of the dashboard.
4. Under AWS Service Charges, expand Relational Database Service.
5. Expand the AWS Region where your reserved DB instances are, for example US West (Oregon).

Your reserved DB instances and their hourly charges for the current month are shown under Amazon
Relational Database Service for Database Engine Reserved Instances.

The reserved DB instance in this example was purchased All Upfront, so there are no hourly charges.
6. Choose the Cost Explorer (bar graph) icon next to the Reserved Instances heading.

The Cost Explorer displays the Monthly EC2 running hours costs and usage graph.

7. Clear the Usage Type Group filter to the right of the graph.
8. Choose the time period and time unit for which you want to examine usage costs.

The following example shows usage costs for on-demand and reserved DB instances for the year to
date by month.

The reserved DB instance costs from January through June 2021 are monthly charges for a Partial
Upfront instance, while the cost in August 2021 is a one-time charge for an All Upfront instance.

The reserved instance discount for the Partial Upfront instance expired in June 2021, but the DB
instance wasn't deleted. After the expiration date, it was simply charged at the on-demand rate.

Setting up for Amazon RDS


Before you use Amazon Relational Database Service for the first time, complete the following tasks.

Topics
• Sign up for an AWS account (p. 174)
• Create an administrative user (p. 174)
• Grant programmatic access (p. 175)
• Determine requirements (p. 176)
• Provide access to your DB instance in your VPC by creating a security group (p. 177)

If you already have an AWS account, know your Amazon RDS requirements, and prefer to use the
defaults for IAM and VPC security groups, skip ahead to Getting started with Amazon RDS (p. 180).

Sign up for an AWS account


If you do not have an AWS account, complete the following steps to create one.

To sign up for an AWS account

1. Open https://portal.aws.amazon.com/billing/signup.
2. Follow the online instructions.

Part of the sign-up procedure involves receiving a phone call and entering a verification code on the
phone keypad.

When you sign up for an AWS account, an AWS account root user is created. The root user has access
to all AWS services and resources in the account. As a security best practice, assign administrative
access to an administrative user, and use only the root user to perform tasks that require root user
access.

AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view
your current account activity and manage your account by going to https://aws.amazon.com/ and
choosing My Account.

Create an administrative user


After you sign up for an AWS account, create an administrative user so that you don't use the root user
for everyday tasks.

Secure your AWS account root user

1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering
your AWS account email address. On the next page, enter your password.

For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide.

2. Turn on multi-factor authentication (MFA) for your root user.

For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM
User Guide.

Create an administrative user

• For your daily administrative tasks, grant administrative access to an administrative user in AWS IAM
Identity Center (successor to AWS Single Sign-On).

For instructions, see Getting started in the AWS IAM Identity Center (successor to AWS Single Sign-On)
User Guide.

Sign in as the administrative user

• To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email
address when you created the IAM Identity Center user.

For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the
AWS Sign-In User Guide.

Grant programmatic access


Users need programmatic access if they want to interact with AWS outside of the AWS Management
Console. The way to grant programmatic access depends on the type of user that's accessing AWS.

To grant users programmatic access, choose one of the following options.

Workforce identity (users managed in IAM Identity Center)

    To: Use temporary credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs.

    By: Following the instructions for the interface that you want to use.
    • For the AWS CLI, see Configuring the AWS CLI to use AWS IAM Identity Center (successor to AWS Single
      Sign-On) in the AWS Command Line Interface User Guide.
    • For AWS SDKs, tools, and AWS APIs, see IAM Identity Center authentication in the AWS SDKs and Tools
      Reference Guide.

IAM

    To: Use temporary credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs.

    By: Following the instructions in Using temporary credentials with AWS resources in the IAM User Guide.

IAM

    To: (Not recommended) Use long-term credentials to sign programmatic requests to the AWS CLI, AWS SDKs,
    or AWS APIs.

    By: Following the instructions for the interface that you want to use.
    • For the AWS CLI, see Authenticating using IAM user credentials in the AWS Command Line Interface User
      Guide.
    • For AWS SDKs and tools, see Authenticate using long-term credentials in the AWS SDKs and Tools
      Reference Guide.
    • For AWS APIs, see Managing access keys for IAM users in the IAM User Guide.
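
For example, if you use IAM Identity Center, one common way to set up the AWS CLI is the sso
configuration wizard. The profile name below is a placeholder, and the second command only verifies
which identity your CLI calls will use.

aws configure sso --profile my-sso-profile

aws sts get-caller-identity --profile my-sso-profile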

Determine requirements
The basic building block of Amazon RDS is the DB instance. In a DB instance, you create your databases.
A DB instance provides a network address called an endpoint. Your applications use this endpoint to
connect to your DB instance. When you create a DB instance, you specify details like storage, memory,
database engine and version, network configuration, security, and maintenance periods. You control
network access to a DB instance through a security group.

Before you create a DB instance and a security group, you must know your DB instance and network
needs. Here are some important things to consider:

• Resource requirements – What are the memory and processor requirements for your application or
service? You use these settings to help you determine what DB instance class to use. For specifications
about DB instance classes, see DB instance classes (p. 11).
• VPC, subnet, and security group – Your DB instance will most likely be in a virtual private cloud
(VPC). To connect to your DB instance, you need to set up security group rules. These rules are set up
differently depending on what kind of VPC you use and how you use it. For example, you can use a
default VPC or a user-defined VPC.

The following list describes the rules for each VPC option:
• Default VPC – If your AWS account has a default VPC in the current AWS Region, that VPC is
configured to support DB instances. If you specify the default VPC when you create the DB instance,
do the following:
• Make sure to create a VPC security group that authorizes connections from the application or
service to the Amazon RDS DB instance. Use the Security Group option on the VPC console or
the AWS CLI to create VPC security groups. For information, see Step 3: Create a VPC security
group (p. 2700).
• Specify the default DB subnet group. If this is the first DB instance you have created in this AWS
Region, Amazon RDS creates the default DB subnet group when it creates the DB instance.
• User-defined VPC – If you want to specify a user-defined VPC when you create a DB instance, be
aware of the following:
• Make sure to create a VPC security group that authorizes connections from the application or
service to the Amazon RDS DB instance. Use the Security Group option on the VPC console or
the AWS CLI to create VPC security groups. For information, see Step 3: Create a VPC security
group (p. 2700).
• The VPC must meet certain requirements in order to host DB instances, such as having at least two
subnets, each in a separate Availability Zone. For information, see Amazon VPC VPCs and Amazon
RDS (p. 2688).

• Make sure to specify a DB subnet group that defines which subnets in that VPC can be used by the
DB instance. For information, see the DB subnet group section in Working with a DB instance in a
VPC (p. 2689).
• High availability – Do you need failover support? On Amazon RDS, a Multi-AZ deployment creates
a primary DB instance and a secondary standby DB instance in another Availability Zone for failover
support. We recommend Multi-AZ deployments for production workloads to maintain high availability.
For development and test purposes, you can use a deployment that isn't Multi-AZ. For more
information, see Configuring and managing a Multi-AZ deployment (p. 492).
• IAM policies – Does your AWS account have policies that grant the permissions needed to perform
Amazon RDS operations? If you are connecting to AWS using IAM credentials, your IAM account must
have IAM policies that grant the permissions required to perform Amazon RDS operations. For more
information, see Identity and access management for Amazon RDS (p. 2606).
• Open ports – What TCP/IP port does your database listen on? The firewalls at some companies might
block connections to the default port for your database engine. If your company firewall blocks the
default port, choose another port for the new DB instance. When you create a DB instance that listens
on a port you specify, you can change the port by modifying the DB instance.
• AWS Region – What AWS Region do you want your database in? Having your database in close
proximity to your application or web service can reduce network latency. For more information, see
Regions, Availability Zones, and Local Zones (p. 110).
• DB disk subsystem – What are your storage requirements? Amazon RDS provides three storage types:
• General Purpose (SSD)
• Provisioned IOPS (PIOPS)
• Magnetic (also known as standard storage)

For more information on Amazon RDS storage, see Amazon RDS DB instance storage (p. 101).

When you have the information you need to create the security group and the DB instance, continue to
the next step.

Provide access to your DB instance in your VPC by


creating a security group
VPC security groups provide access to DB instances in a VPC. They act as a firewall for the associated
DB instance, controlling both inbound and outbound traffic at the DB instance level. DB instances are
created by default with a firewall and a default security group that protect the DB instance.

Before you can connect to your DB instance, you must add rules to a security group that enable you
to connect. Use your network and configuration information to create rules to allow access to your DB
instance.

For example, suppose that you have an application that accesses a database on your DB instance in a
VPC. In this case, you must add a custom TCP rule that specifies the port range and IP addresses that
your application uses to access the database. If you have an application on an Amazon EC2 instance, you
can use the security group that you set up for the Amazon EC2 instance.

You can configure connectivity between an Amazon EC2 instance and a DB instance when you create
the DB instance. For more information, see Configure automatic network connectivity with an EC2
instance (p. 300).
Tip
You can set up network connectivity between an Amazon EC2 instance and a DB instance
automatically when you create the DB instance. For more information, see Configure automatic
network connectivity with an EC2 instance (p. 300).

For information about common scenarios for accessing a DB instance, see Scenarios for accessing a DB
instance in a VPC (p. 2701).

To create a VPC security group

1. Sign in to the AWS Management Console and open the Amazon VPC console at https://
console.aws.amazon.com/vpc.
Note
Make sure you are in the VPC console, not the RDS console.
2. In the upper-right corner of the AWS Management Console, choose the AWS Region where you want
to create your VPC security group and DB instance. In the list of Amazon VPC resources for that AWS
Region, you should see at least one VPC and several subnets. If you don't, you don't have a default
VPC in that AWS Region.
3. In the navigation pane, choose Security Groups.
4. Choose Create security group.

The Create security group page appears.


5. In Basic details, enter the Security group name and Description. For VPC, choose the VPC that you
want to create your DB instance in.
6. In Inbound rules, choose Add rule.

a. For Type, choose Custom TCP.


b. For Port range, enter the port value to use for your DB instance.
c. For Source, choose a security group name or type the IP address range (CIDR value) from where
you access the DB instance. If you choose My IP, this allows access to the DB instance from the
IP address detected in your browser.
7. If you need to add more IP addresses or different port ranges, choose Add rule and enter the
information for the rule.
8. (Optional) In Outbound rules, add rules for outbound traffic. By default, all outbound traffic is
allowed.
9. Choose Create security group.

You can use the VPC security group that you just created as the security group for your DB instance when
you create it.
Note
If you use a default VPC, a default subnet group spanning all of the VPC's subnets is created
for you. When you create a DB instance, you can select the default VPC and use default for DB
Subnet Group.
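
If you prefer the AWS CLI to the preceding console steps, the following commands create a comparable
security group and inbound rule. The group name, VPC ID, security group ID, port, and CIDR range are
placeholders; use the port for your DB engine and the addresses that your application connects from.

aws ec2 create-security-group \
    --group-name my-db-sg \
    --description "Access to my RDS DB instance" \
    --vpc-id vpc-0abc1234

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 3306 \
    --cidr 203.0.113.25/32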

After you have completed the setup requirements, you can create a DB instance using your requirements
and security group. To do so, follow the instructions in Creating an Amazon RDS DB instance (p. 300).
For information about getting started by creating a DB instance that uses a specific DB engine, see the
relevant documentation in the following table.

Database engine Documentation

MariaDB Creating and connecting to a MariaDB DB instance (p. 181)

Microsoft SQL Server Creating and connecting to a Microsoft SQL Server DB instance (p. 194)

MySQL Creating and connecting to a MySQL DB instance (p. 209)

Oracle Creating and connecting to an Oracle DB instance (p. 222)

PostgreSQL Creating and connecting to a PostgreSQL DB instance (p. 235)

Note
If you can't connect to a DB instance after you create it, see the troubleshooting information in
Can't connect to Amazon RDS DB instance (p. 2727).

Getting started with Amazon RDS


In the following examples, you can find how to create and connect to a DB instance using Amazon
Relational Database Service (Amazon RDS). You can create a DB instance that uses MariaDB, MySQL,
Microsoft SQL Server, Oracle, or PostgreSQL.
Important
Before you can create or connect to a DB instance, make sure to complete the tasks in Setting
up for Amazon RDS (p. 174).

Creating a DB instance and connecting to a database on a DB instance is slightly different for each of the
DB engines. Choose one of the following DB engines that you want to use for detailed information on
creating and connecting to the DB instance. After you have created and connected to your DB instance,
there are instructions to help you delete the DB instance.

Topics
• Creating and connecting to a MariaDB DB instance (p. 181)
• Creating and connecting to a Microsoft SQL Server DB instance (p. 194)
• Creating and connecting to a MySQL DB instance (p. 209)
• Creating and connecting to an Oracle DB instance (p. 222)
• Creating and connecting to a PostgreSQL DB instance (p. 235)
• Tutorial: Create a web server and an Amazon RDS DB instance (p. 249)
• Tutorial: Using a Lambda function to access an Amazon RDS database (p. 273)


Creating and connecting to a MariaDB DB instance


This tutorial creates an EC2 instance and an RDS for MariaDB DB instance. The tutorial shows you how
to access the DB instance from the EC2 instance using a standard MySQL client. As a best practice, this
tutorial creates a private DB instance in a virtual private cloud (VPC). In most cases, other resources in
the same VPC, such as EC2 instances, can access the DB instance, but resources outside of the VPC can't
access it.

After you complete the tutorial, there is a public and private subnet in each Availability Zone in your VPC.
In one Availability Zone, the EC2 instance is in the public subnet, and the DB instance is in the private
subnet.
Important
There's no charge for creating an AWS account. However, by completing this tutorial, you might
incur costs for the AWS resources you use. You can delete these resources after you complete
the tutorial if they are no longer needed.

The following diagram shows the configuration when the tutorial is complete.

This tutorial uses Easy create to create a DB instance running MariaDB with the AWS Management
Console. With Easy create, you specify only the DB engine type, DB instance size, and DB instance
identifier. Easy create uses the default settings for the other configuration options. The DB instance
created by Easy create is private.

When you use Standard create instead of Easy create, you can specify more configuration options when
you create a DB instance, including ones for availability, security, backups, and maintenance. To create
a public DB instance, you must use Standard create. For information about creating DB instances with
Standard create, see Creating an Amazon RDS DB instance (p. 300).

Topics
• Prerequisites (p. 182)
• Step 1: Create an EC2 instance (p. 182)
• Step 2: Create a MariaDB DB instance (p. 185)
• Step 3: Connect to a MariaDB DB instance (p. 190)
• Step 4: Delete the EC2 instance and DB instance (p. 193)
• (Optional) Connect your DB instance to a Lambda function (p. 193)


Prerequisites
Before you begin, complete the steps in the following sections:

• Sign up for an AWS account (p. 174)


• Create an administrative user (p. 174)

Step 1: Create an EC2 instance


Create an Amazon EC2 instance that you will use to connect to your database.

To create an EC2 instance

1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the upper-right corner of the AWS Management Console, choose the AWS Region in which you
want to create the EC2 instance.
3. Choose EC2 Dashboard, and then choose Launch instance, as shown in the following image.


The Launch an instance page opens.


4. Choose the following settings on the Launch an instance page.

a. Under Name and tags, for Name, enter ec2-database-connect.


b. Under Application and OS Images (Amazon Machine Image), choose Amazon Linux, and then
choose the Amazon Linux 2023 AMI. Keep the default selections for the other choices.

c. Under Instance type, choose t2.micro.


d. Under Key pair (login), choose a Key pair name to use an existing key pair. To create a new key
pair for the Amazon EC2 instance, choose Create new key pair and then use the Create key pair
window to create it.

For more information about creating a new key pair, see Create a key pair in the Amazon EC2
User Guide for Linux Instances.
e. For Allow SSH traffic in Network settings, choose the source of SSH connections to the EC2
instance.

You can choose My IP if the displayed IP address is correct for SSH connections. Otherwise, you
can determine the IP address to use to connect to EC2 instances in your VPC using Secure Shell
(SSH). To determine your public IP address, in a different browser window or tab, you can use
the service at https://checkip.amazonaws.com. An example of an IP address is 192.0.2.1/32.


In many cases, you might connect through an internet service provider (ISP) or from behind your
firewall without a static IP address. If so, make sure to determine the range of IP addresses used
by client computers.
Warning
If you use 0.0.0.0/0 for SSH access, you make it possible for all IP addresses to access
your public EC2 instances using SSH. This approach is acceptable for a short time in a
test environment, but it's unsafe for production environments. In production, authorize
only a specific IP address or range of addresses to access your EC2 instances using SSH.

The following image shows an example of the Network settings section.

f. Leave the default values for the remaining sections.


g. Review a summary of your EC2 instance configuration in the Summary panel, and when you're
ready, choose Launch instance.
5. On the Launch Status page, note the identifier for your new EC2 instance, for example:
i-1234567890abcdef0.


6. Choose the EC2 instance identifier to open the list of EC2 instances, and then select your EC2
instance.
7. In the Details tab, note the following values, which you need when you connect using SSH:

a. In Instance summary, note the value for Public IPv4 DNS.

b. In Instance details, note the value for Key pair name.

8. Wait until the Instance state for your EC2 instance has a status of Running before continuing.
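
If you want to automate this step later, the AWS CLI can launch a similar instance. The following is a
hedged sketch rather than an exact equivalent of the console defaults; the AMI ID, key pair name, and
security group ID are placeholders for values from your own account.

# Launch a t2.micro instance named ec2-database-connect (all IDs are placeholders).
aws ec2 run-instances \
    --image-id ami-0abcd1234example \
    --instance-type t2.micro \
    --key-name ec2-database-connect-key-pair \
    --security-group-ids sg-0abcd1234example \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=ec2-database-connect}]'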

Step 2: Create a MariaDB DB instance


The basic building block of Amazon RDS is the DB instance. This environment is where you run your
MariaDB databases.


In this example, you use Easy create to create a DB instance running the MariaDB database engine with a
db.t3.micro DB instance class.

To create a MariaDB DB instance with Easy create

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region in which you want to
create the DB instance.
3. In the navigation pane, choose Databases.
4. Choose Create database and make sure that Easy create is chosen.

5. In Configuration, choose MariaDB.


6. For DB instance size, choose Free tier.
7. For DB instance identifier, enter database-test1.
8. For Master username, enter a name for the master user, or keep the default name.

The Create database page should look similar to the following image.


9. To use an automatically generated master password for the DB instance, select Auto generate a
password.

To enter your master password, make sure Auto generate a password is cleared, and then enter the
same password in Master password and Confirm password.
10. To set up a connection with the EC2 instance you created previously, open Set up EC2 connection -
optional.

Select Connect to an EC2 compute resource. Choose the EC2 instance you created previously.


11. Open View default settings for Easy create.


You can examine the default settings used with Easy create. The Editable after database is created
column shows which options you can change after you create the database.

• If a setting has No in that column, and you want a different setting, you can use Standard create
to create the DB instance.
• If a setting has Yes in that column, and you want a different setting, you can either use Standard
create to create the DB instance, or modify the DB instance after you create it to change the
setting.
12. Choose Create database.

To view the master username and password for the DB instance, choose View credential details.

You can use the username and password that appears to connect to the DB instance as the master
user.


Important
You can't view the master user password again. If you don't record it, you might have to
change it.
If you need to change the master user password after the DB instance is available, you can
modify the DB instance to do so. For more information about modifying a DB instance, see
Modifying an Amazon RDS DB instance (p. 401).
13. In the Databases list, choose the name of the new MariaDB DB instance to show its details.

The DB instance has a status of Creating until it is ready to use.

When the status changes to Available, you can connect to the DB instance. Depending on the DB
instance class and the amount of storage, it can take up to 20 minutes before the new instance is
available.
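
Easy create is a console workflow, so there is no single AWS CLI flag that reproduces it exactly. As a
rough sketch, the following command creates a comparable private MariaDB DB instance; the password is a
placeholder, and depending on your account you might also need to specify a DB subnet group and VPC
security group.

# Create a small, private MariaDB DB instance (replace the password placeholder).
aws rds create-db-instance \
    --db-instance-identifier database-test1 \
    --engine mariadb \
    --db-instance-class db.t3.micro \
    --allocated-storage 20 \
    --master-username admin \
    --master-user-password replace-with-your-password \
    --no-publicly-accessible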

Step 3: Connect to a MariaDB DB instance


You can use any standard SQL client application to connect to the DB instance. In this example, you
connect to a MariaDB DB instance using the mysql command-line client.

To connect to a MariaDB DB instance

1. Find the endpoint (DNS name) and port number for your DB instance.

a. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
b. In the upper-right corner of the Amazon RDS console, choose the AWS Region for the DB
instance.
c. In the navigation pane, choose Databases.
d. Choose the MariaDB DB instance name to display its details.
e. On the Connectivity & security tab, copy the endpoint. Also note the port number. You need
both the endpoint and the port number to connect to the DB instance.


2. Connect to the EC2 instance that you created earlier by following the steps in Connect to your Linux
instance in the Amazon EC2 User Guide for Linux Instances.

We recommend that you connect to your EC2 instance using SSH. If the SSH client utility is installed
on Windows, Linux, or Mac, you can connect to the instance using the following command format:

ssh -i location_of_pem_file ec2-user@ec2-instance-public-dns-name

For example, assume that ec2-database-connect-key-pair.pem is stored in /dir1 on Linux, and the
public IPv4 DNS for your EC2 instance is ec2-12-345-678-90.compute-1.amazonaws.com. Your SSH
command would look as follows:

ssh -i /dir1/ec2-database-connect-key-pair.pem [email protected]

3. Get the latest bug fixes and security updates by updating the software on your EC2 instance. To do
this, use the following command.
Note
The -y option installs the updates without asking for confirmation. To examine updates
before installing, omit this option.

sudo dnf update -y

4. Install the mysql command-line client from MariaDB.

To install the MariaDB command-line client on Amazon Linux 2023, run the following command:

sudo dnf install mariadb105

5. Connect to the MariaDB DB instance. For example, enter the following command. This action lets
you connect to the MariaDB DB instance using the MySQL client.

Substitute the DB instance endpoint (DNS name) for endpoint, and substitute the master username
that you used for admin. Provide the master password that you used when prompted for a
password.

mysql -h endpoint -P 3306 -u admin -p

After you enter the password for the user, you should see output similar to the following.

Welcome to the MariaDB monitor. Commands end with ; or \g.


Your MariaDB connection id is 156
Server version: 10.6.10-MariaDB-log managed by https://aws.amazon.com/rds/

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>

For more information about connecting to a MariaDB DB instance, see Connecting to a DB instance
running the MariaDB database engine (p. 1269). If you can't connect to your DB instance, see Can't
connect to Amazon RDS DB instance (p. 2727).

For security, it is a best practice to use encrypted connections. Only use an unencrypted MariaDB
connection when the client and server are in the same VPC and the network is trusted. For
information about using encrypted connections, see Connecting from the MySQL command-line
client with SSL/TLS (encrypted) (p. 1276).
6. Run SQL commands.

For example, the following SQL command shows the current date and time:

SELECT CURRENT_TIMESTAMP;
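
If you prefer the command line to the console for looking up connection details, the AWS CLI can return
the endpoint and port directly. This assumes the DB instance identifier used in this tutorial.

# Print the endpoint address and port of the DB instance.
aws rds describe-db-instances \
    --db-instance-identifier database-test1 \
    --query 'DBInstances[0].[Endpoint.Address,Endpoint.Port]' \
    --output text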


Step 4: Delete the EC2 instance and DB instance


After you connect to and explore the sample EC2 instance and DB instance that you created, delete them
so you're no longer charged for them.

To delete the EC2 instance

1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Instances.
3. Select the EC2 instance, and choose Instance state, Terminate instance.
4. Choose Terminate when prompted for confirmation.

For more information about deleting an EC2 instance, see Terminate your instance in the Amazon EC2
User Guide for Linux Instances.

To delete the DB instance with no final DB snapshot

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance you want to delete.
4. For Actions, choose Delete.
5. Clear Create final snapshot? and Retain automated backups.
6. Complete the acknowledgement and choose Delete.
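
You can also perform the same cleanup from the AWS CLI, as in the following sketch. The EC2 instance ID
is a placeholder for the identifier that you noted when you launched the instance.

# Delete the DB instance without a final snapshot and without retained backups.
aws rds delete-db-instance \
    --db-instance-identifier database-test1 \
    --skip-final-snapshot \
    --delete-automated-backups

# Terminate the EC2 instance (placeholder ID).
aws ec2 terminate-instances --instance-ids i-1234567890abcdef0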

(Optional) Connect your DB instance to a Lambda function
You can also connect your RDS for MariaDB DB instance to a Lambda serverless compute resource.
Lambda functions allow you to run code without provisioning or managing infrastructure. A Lambda
function also allows you to automatically respond to code execution requests at any scale, from a dozen
events a day to hundreds per second. For more information, see Automatically connecting a Lambda
function and a DB instance (p. 392).


Creating and connecting to a Microsoft SQL Server DB instance
This tutorial creates an EC2 instance and an RDS for Microsoft SQL Server DB instance. The tutorial
shows you how to access the DB instance from the EC2 instance using the Microsoft SQL Server
Management Studio client. As a best practice, this tutorial creates a private DB instance in a virtual
private cloud (VPC). In most cases, other resources in the same VPC, such as EC2 instances, can access the
DB instance, but resources outside of the VPC can't access it.

After you complete the tutorial, there is a public and private subnet in each Availability Zone in your VPC.
In one Availability Zone, the EC2 instance is in the public subnet, and the DB instance is in the private
subnet.
Important
There's no charge for creating an AWS account. However, by completing this tutorial, you might
incur costs for the AWS resources you use. You can delete these resources after you complete
the tutorial if they are no longer needed.

The following diagram shows the configuration when the tutorial is complete.

This tutorial uses Easy create to create a DB instance running Microsoft SQL Server with the AWS
Management Console. With Easy create, you specify only the DB engine type, DB instance size, and DB
instance identifier. Easy create uses the default settings for the other configuration options. The DB
instance created by Easy create is private.

When you use Standard create instead of Easy create, you can specify more configuration options when
you create a DB instance, including ones for availability, security, backups, and maintenance. To create
a public DB instance, you must use Standard create. For information about creating DB instances with
Standard create, see Creating an Amazon RDS DB instance (p. 300).

Topics
• Prerequisites (p. 195)
• Step 1: Create an EC2 instance (p. 195)
• Step 2: Create a SQL Server DB instance (p. 199)
• Step 3: Connect to your SQL Server DB instance (p. 204)


• Step 4: Explore your sample SQL Server DB instance (p. 206)


• Step 5: Delete the EC2 instance and DB instance (p. 208)
• (Optional) Connect your DB instance to a Lambda function (p. 208)

Prerequisites
Before you begin, complete the steps in the following sections:

• Sign up for an AWS account (p. 174)


• Create an administrative user (p. 174)

Step 1: Create an EC2 instance


Create an Amazon EC2 instance that you will use to connect to your database.

To create an EC2 instance

1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the upper-right corner of the AWS Management Console, choose the AWS Region you used for the
database previously.
3. Choose EC2 Dashboard, and then choose Launch instance, as shown in the following image.


The Launch an instance page opens.


4. Choose the following settings on the Launch an instance page.

a. Under Name and tags, for Name, enter ec2-database-connect.


b. Under Application and OS Images (Amazon Machine Image), choose Windows, and then
choose the Microsoft Windows Server 2022 Base AMI. Keep the default selections for the other
choices.


c. Under Instance type, choose t2.micro.


d. Under Key pair (login), choose a Key pair name to use an existing key pair. To create a new key
pair for the Amazon EC2 instance, choose Create new key pair and then use the Create key pair
window to create it.

For more information about creating a new key pair, see Create a key pair in the Amazon EC2
User Guide for Windows Instances.
e. For Firewall (security groups) in Network settings, select Allow RDP traffic from, and then
choose the source of RDP connections to the EC2 instance.

You can choose My IP if the displayed IP address is correct for RDP connections. Otherwise,
you can determine the IP address to use to connect to EC2 instances in your VPC using RDP. To
determine your public IP address, in a different browser window or tab, you can use the service
at https://checkip.amazonaws.com. An example of an IP address is 192.0.2.1/32.

In many cases, you might connect through an internet service provider (ISP) or from behind your
firewall without a static IP address. If so, make sure to determine the range of IP addresses used
by client computers.
Warning
If you use 0.0.0.0/0 for RDP access, you make it possible for all IP addresses to access
your public EC2 instances using RDP. This approach is acceptable for a short time in a
test environment, but it's unsafe for production environments. In production, authorize
only a specific IP address or range of addresses to access your EC2 instances using RDP.


The following image shows an example of the Network settings section.

f. Keep the default values for the remaining sections.


g. Review a summary of your EC2 instance configuration in the Summary panel, and when you're
ready, choose Launch instance.
5. On the Launch Status page, note the identifier for your new EC2 instance, for example:
i-1234567890abcdef0.


6. Choose the EC2 instance identifier to open the list of EC2 instances.
7. Wait until the Instance state for your EC2 instance has a status of Running before continuing.

Step 2: Create a SQL Server DB instance


The basic building block of Amazon RDS is the DB instance. This environment is where you run your SQL
Server databases.

In this example, you use Easy create to create a DB instance running the SQL Server database engine
with a db.t2.micro DB instance class.

To create a Microsoft SQL Server DB instance with Easy create

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region in which you want to
create the DB instance.
3. In the navigation pane, choose Databases.
4. Choose Create database and make sure that Easy create is chosen.


5. In Configuration, choose Microsoft SQL Server.


6. For Edition, choose SQL Server Express Edition.
7. For DB instance size, choose Free tier.
8. For DB instance identifier, enter database-test1.

The Create database page should look similar to the following image.


9. For Master username, enter a name for the master user, or keep the default name.
10. To set up a connection with the EC2 instance you created previously, open Set up EC2 connection -
optional.

Select Connect to an EC2 compute resource. Choose the EC2 instance you created previously.


11. To use an automatically generated master password for the DB instance, select the Auto generate a
password box.

To enter your master password, clear the Auto generate a password box, and then enter the same
password in Master password and Confirm password.
12. Open View default settings for Easy create.


You can examine the default settings used with Easy create. The Editable after database is created
column shows which options you can change after you create the database.

• If a setting has No in that column, and you want a different setting, you can use Standard create
to create the DB instance.
• If a setting has Yes in that column, and you want a different setting, you can either use Standard
create to create the DB instance, or modify the DB instance after you create it to change the
setting.
13. Choose Create database.

To view the master username and password for the DB instance, choose View credential details.

You can use the username and password that appears to connect to the DB instance as the master
user.


Important
You can't view the master user password again. If you don't record it, you might have to
change it.
If you need to change the master user password after the DB instance is available, you can
modify the DB instance to do so. For more information about modifying a DB instance, see
Modifying an Amazon RDS DB instance (p. 401).
14. In the Databases list, choose the name of the new SQL Server DB instance to show its details.

The DB instance has a status of Creating until it is ready to use.

When the status changes to Available, you can connect to the DB instance. Depending on the DB
instance class and the amount of storage, it can take up to 20 minutes before the new instance is
available.
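
Instead of refreshing the console to watch for the Available status, you can optionally wait from the
command line. The following AWS CLI command blocks until the DB instance is available; it assumes the
identifier used in this tutorial.

# Wait until the DB instance reaches the available state.
aws rds wait db-instance-available --db-instance-identifier database-test1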

Step 3: Connect to your SQL Server DB instance


In the following procedure, you connect to your DB instance by using Microsoft SQL Server Management
Studio (SSMS).

To connect to an RDS for SQL Server DB instance using SSMS

1. Find the endpoint (DNS name) and port number for your DB instance.

a. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
b. In the upper-right corner of the Amazon RDS console, choose the AWS Region for the DB
instance.
c. In the navigation pane, choose Databases.
d. Choose the SQL Server DB instance name to display its details.
e. On the Connectivity tab, copy the endpoint. Also, note the port number. You need both the
endpoint and the port number to connect to the DB instance.


2. Connect to the EC2 instance that you created earlier by following the steps in Connect to your
Microsoft Windows instance in the Amazon EC2 User Guide for Windows Instances.
3. Install the SQL Server Management Studio (SSMS) client from Microsoft.


To download a standalone version of SSMS to your EC2 instance, see Download SQL Server
Management Studio (SSMS) in the Microsoft documentation.

a. Use the Start menu to open Internet Explorer.


b. Use Internet Explorer to download and install a standalone version of SSMS. If you are
prompted that the site isn't trusted, add the site to the list of trusted sites.
4. Start SQL Server Management Studio (SSMS).

The Connect to Server dialog box appears.


5. Provide the following information for your sample DB instance:

a. For Server type, choose Database Engine.


b. For Server name, enter the DNS name, followed by a comma and the port number (the default
port is 1433). For example, your server name should look as follows:

database-test1.0123456789012.us-west-2.rds.amazonaws.com,1433

c. For Authentication, choose SQL Server Authentication.


d. For Login, enter the username that you chose to use for your sample DB instance. This is also
known as the master username.
e. For Password, enter the password that you chose earlier for your sample DB instance. This is
also known as the master user password.
6. Choose Connect.

After a few moments, SSMS connects to your DB instance. For security, it is a best practice to use
encrypted connections. Only use an unencrypted SQL Server connection when the client and server
are in the same VPC and the network is trusted. For information about using encrypted connections,
see Using SSL with a Microsoft SQL Server DB instance (p. 1456).

For more information about connecting to a Microsoft SQL Server DB instance, see Connecting to a DB
instance running the Microsoft SQL Server database engine (p. 1380).

For information about connection issues, see Can't connect to Amazon RDS DB instance (p. 2727).

Step 4: Explore your sample SQL Server DB instance


You can explore your sample DB instance by using Microsoft SQL Server Management Studio (SSMS).

To explore a DB instance using SSMS

1. Your SQL Server DB instance comes with SQL Server's standard built-in system databases (master,
model, msdb, and tempdb). To explore the system databases, do the following:

a. In SSMS, on the View menu, choose Object Explorer.


b. Expand your DB instance, expand Databases, and then expand System Databases as shown.


Your SQL Server DB instance also comes with a database named rdsadmin. Amazon RDS uses this
database to store the objects that it uses to manage your database. The rdsadmin database also
includes stored procedures that you can run to perform advanced tasks.
2. Start creating your own databases and running queries against your DB instance and databases as
usual. To run a test query against your sample DB instance, do the following:

a. In SSMS, on the File menu, point to New and then choose Query with Current Connection.
b. Enter the following SQL query:

select @@VERSION

c. Run the query. SSMS returns the SQL Server version of your Amazon RDS DB instance.
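
If you also install the sqlcmd command-line utility on the EC2 instance, you can run the same test query
without opening SSMS. This is an optional sketch, not part of the tutorial; substitute your own endpoint
and master username, and provide the master password when prompted (or pass it with the -P option).

# Run a single query against the DB instance with sqlcmd (endpoint is an example value).
sqlcmd -S database-test1.0123456789012.us-west-2.rds.amazonaws.com,1433 -U admin -Q "SELECT @@VERSION"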


Step 5: Delete the EC2 instance and DB instance


After you connect to and explore the sample EC2 instance and DB instance that you created, delete them
so you're no longer charged for them.

To delete the EC2 instance

1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Instances.
3. Select the EC2 instance, and choose Instance state, Terminate instance.
4. Choose Terminate when prompted for confirmation.

For more information about deleting an EC2 instance, see Terminate your instance in the Amazon EC2
User Guide for Windows Instances.

To delete the DB instance with no final DB snapshot

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to delete.
4. For Actions, choose Delete.
5. Clear Create final snapshot? and Retain automated backups.
6. Complete the acknowledgement and choose Delete.

(Optional) Connect your DB instance to a Lambda function
You can also connect your RDS for SQL Server DB instance to a Lambda serverless compute resource.
Lambda functions allow you to run code without provisioning or managing infrastructure. A Lambda
function also allows you to automatically respond to code execution requests at any scale, from a dozen
events a day to hundreds per second. For more information, see Automatically connecting a Lambda
function and a DB instance (p. 392).


Creating and connecting to a MySQL DB instance


This tutorial creates an EC2 instance and an RDS for MySQL DB instance. The tutorial shows you how
to access the DB instance from the EC2 instance using a standard MySQL client. As a best practice, this
tutorial creates a private DB instance in a virtual private cloud (VPC). In most cases, other resources in
the same VPC, such as EC2 instances, can access the DB instance, but resources outside of the VPC can't
access it.

After you complete the tutorial, there is a public and private subnet in each Availability Zone in your VPC.
In one Availability Zone, the EC2 instance is in the public subnet, and the DB instance is in the private
subnet.
Important
There's no charge for creating an AWS account. However, by completing this tutorial, you might
incur costs for the AWS resources you use. You can delete these resources after you complete
the tutorial if they are no longer needed.

The following diagram shows the configuration when the tutorial is complete.

This tutorial uses Easy create to create a DB instance running MySQL with the AWS Management
Console. With Easy create, you specify only the DB engine type, DB instance size, and DB instance
identifier. Easy create uses the default settings for the other configuration options. The DB instance
created by Easy create is private.

When you use Standard create instead of Easy create, you can specify more configuration options when
you create a DB instance, including ones for availability, security, backups, and maintenance. To create
a public DB instance, you must use Standard create. For information about creating DB instances with
Standard create, see Creating an Amazon RDS DB instance (p. 300).

Topics
• Prerequisites (p. 210)
• Step 1: Create an EC2 instance (p. 210)
• Step 2: Create a MySQL DB instance (p. 213)
• Step 3: Connect to a MySQL DB instance (p. 218)
• Step 4: Delete the EC2 instance and DB instance (p. 221)
• (Optional) Connect your DB instance to a Lambda function (p. 221)


Prerequisites
Before you begin, complete the steps in the following sections:

• Sign up for an AWS account (p. 174)


• Create an administrative user (p. 174)

Step 1: Create an EC2 instance


Create an Amazon EC2 instance that you will use to connect to your database.

To create an EC2 instance

1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the upper-right corner of the AWS Management Console, choose the AWS Region in which you
want to create the EC2 instance.
3. Choose EC2 Dashboard, and then choose Launch instance, as shown in the following image.


The Launch an instance page opens.


4. Choose the following settings on the Launch an instance page.

a. Under Name and tags, for Name, enter ec2-database-connect.


b. Under Application and OS Images (Amazon Machine Image), choose Amazon Linux, and then
choose the Amazon Linux 2023 AMI. Keep the default selections for the other choices.

c. Under Instance type, choose t2.micro.


d. Under Key pair (login), choose a Key pair name to use an existing key pair. To create a new key
pair for the Amazon EC2 instance, choose Create new key pair and then use the Create key pair
window to create it.

For more information about creating a new key pair, see Create a key pair in the Amazon EC2
User Guide for Linux Instances.
e. For Allow SSH traffic in Network settings, choose the source of SSH connections to the EC2
instance.

You can choose My IP if the displayed IP address is correct for SSH connections. Otherwise, you
can determine the IP address to use to connect to EC2 instances in your VPC using Secure Shell
(SSH). To determine your public IP address, in a different browser window or tab, you can use
the service at https://checkip.amazonaws.com. An example of an IP address is 192.0.2.1/32.


In many cases, you might connect through an internet service provider (ISP) or from behind your
firewall without a static IP address. If so, make sure to determine the range of IP addresses used
by client computers.
Warning
If you use 0.0.0.0/0 for SSH access, you make it possible for all IP addresses to access
your public EC2 instances using SSH. This approach is acceptable for a short time in a
test environment, but it's unsafe for production environments. In production, authorize
only a specific IP address or range of addresses to access your EC2 instances using SSH.

The following image shows an example of the Network settings section.

f. Leave the default values for the remaining sections.


g. Review a summary of your EC2 instance configuration in the Summary panel, and when you're
ready, choose Launch instance.
5. On the Launch Status page, note the identifier for your new EC2 instance, for example:
i-1234567890abcdef0.


6. Choose the EC2 instance identifier to open the list of EC2 instances, and then select your EC2
instance.
7. In the Details tab, note the following values, which you need when you connect using SSH:

a. In Instance summary, note the value for Public IPv4 DNS.

b. In Instance details, note the value for Key pair name.

8. Wait until the Instance state for your EC2 instance has a status of Running before continuing.

Step 2: Create a MySQL DB instance


The basic building block of Amazon RDS is the DB instance. This environment is where you run your
MySQL databases.


In this example, you use Easy create to create a DB instance running the MySQL database engine with a
db.t3.micro DB instance class.

To create a MySQL DB instance with Easy create

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region you used for the EC2
instance previously.
3. In the navigation pane, choose Databases.
4. Choose Create database and make sure that Easy create is chosen.

5. In Configuration, choose MySQL.


6. For DB instance size, choose Free tier.
7. For DB instance identifier, enter database-test1.
8. For Master username, enter a name for the master user, or keep the default name.

The Create database page should look similar to the following image.


9. To use an automatically generated master password for the DB instance, select Auto generate a
password.

To enter your master password, make sure Auto generate a password is cleared, and then enter the
same password in Master password and Confirm password.
10. To set up a connection with the EC2 instance you created previously, open Set up EC2 connection -
optional.

Select Connect to an EC2 compute resource. Choose the EC2 instance you created previously.


11. (Optional) Open View default settings for Easy create.


You can examine the default settings used with Easy create. The Editable after database is created
column shows which options you can change after you create the database.

• If a setting has No in that column, and you want a different setting, you can use Standard create
to create the DB instance.
• If a setting has Yes in that column, and you want a different setting, you can either use Standard
create to create the DB instance, or modify the DB instance after you create it to change the
setting.
12. Choose Create database.

To view the master username and password for the DB instance, choose View credential details.

You can use the username and password that appears to connect to the DB instance as the master
user.


Important
You can't view the master user password again. If you don't record it, you might have to
change it.
If you need to change the master user password after the DB instance is available, you can
modify the DB instance to do so. For more information about modifying a DB instance, see
Modifying an Amazon RDS DB instance (p. 401).
13. In the Databases list, choose the name of the new MySQL DB instance to show its details.

The DB instance has a status of Creating until it is ready to use.

When the status changes to Available, you can connect to the DB instance. Depending on the DB
instance class and the amount of storage, it can take up to 20 minutes before the new instance is
available.

Step 3: Connect to a MySQL DB instance


You can use any standard SQL client application to connect to the DB instance. In this example, you
connect to a MySQL DB instance using the mysql command-line client.

To connect to a MySQL DB instance

1. Find the endpoint (DNS name) and port number for your DB instance.

a. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
b. In the upper-right corner of the Amazon RDS console, choose the AWS Region for the DB
instance.
c. In the navigation pane, choose Databases.
d. Choose the MySQL DB instance name to display its details.
e. On the Connectivity & security tab, copy the endpoint. Also, note the port number. You need
both the endpoint and the port number to connect to the DB instance.


2. Connect to the EC2 instance that you created earlier by following the steps in Connect to your Linux
instance in the Amazon EC2 User Guide for Linux Instances.

We recommend that you connect to your EC2 instance using SSH. If the SSH client utility is installed
on Windows, Linux, or Mac, you can connect to the instance using the following command format:

ssh -i location_of_pem_file ec2-user@ec2-instance-public-dns-name

For example, assume that ec2-database-connect-key-pair.pem is stored in /dir1 on Linux, and the
public IPv4 DNS for your EC2 instance is ec2-12-345-678-90.compute-1.amazonaws.com. Your SSH
command would look as follows:

ssh -i /dir1/ec2-database-connect-key-pair.pem [email protected]

3. Get the latest bug fixes and security updates by updating the software on your EC2 instance. To do
this, use the following command.
Note
The -y option installs the updates without asking for confirmation. To examine updates
before installing, omit this option.

sudo dnf update -y

4. To install the mysql command-line client from MariaDB on Amazon Linux 2023, run the following
command:

sudo dnf install mariadb105

5. Connect to the MySQL DB instance. For example, enter the following command. This action lets you
connect to the MySQL DB instance using the MySQL client.

Substitute the DB instance endpoint (DNS name) for endpoint, and substitute the master username
that you used for admin. Provide the master password that you used when prompted for a
password.

mysql -h endpoint -P 3306 -u admin -p

After you enter the password for the user, you should see output similar to the following.

Welcome to the MariaDB monitor. Commands end with ; or \g.


Your MySQL connection id is 3082
Server version: 8.0.28 Source distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]>

For more information about connecting to a MySQL DB instance, see Connecting to a DB instance
running the MySQL database engine (p. 1630). If you can't connect to your DB instance, see Can't
connect to Amazon RDS DB instance (p. 2727).

For security, it is a best practice to use encrypted connections. Only use an unencrypted MySQL
connection when the client and server are in the same VPC and the network is trusted. For
information about using encrypted connections, see Connecting from the MySQL command-line
client with SSL/TLS (encrypted) (p. 1640).
6. Run SQL commands.

For example, the following SQL command shows the current date and time:

SELECT CURRENT_TIMESTAMP;
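
You can also run a single statement non-interactively by passing it to the mysql client with the -e
option, which is convenient for quick checks and scripts. Substitute your DB instance endpoint and
master username as before.

# Run one SQL statement and exit (prompts for the master password).
mysql -h endpoint -P 3306 -u admin -p -e "SELECT CURRENT_TIMESTAMP;"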


Step 4: Delete the EC2 instance and DB instance


After you connect to and explore the sample EC2 instance and DB instance that you created, delete them
so you're no longer charged for them.

To delete the EC2 instance

1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Instances.
3. Select the EC2 instance, and choose Instance state, Terminate instance.
4. Choose Terminate when prompted for confirmation.

For more information about deleting an EC2 instance, see Terminate your instance in the Amazon EC2
User Guide for Linux Instances.

To delete the DB instance with no final DB snapshot

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to delete.
4. For Actions, choose Delete.
5. Clear Create final snapshot? and Retain automated backups.
6. Complete the acknowledgement and choose Delete.

(Optional) Connect your DB instance to a Lambda function
You can also connect your RDS for MySQL DB instance to a Lambda serverless compute resource.
Lambda functions allow you to run code without provisioning or managing infrastructure. A Lambda
function also allows you to automatically respond to code execution requests at any scale, from a dozen
events a day to hundreds per second. For more information, see Automatically connecting a Lambda
function and a DB instance (p. 392).


Creating and connecting to an Oracle DB instance


This tutorial creates an EC2 instance and an RDS for Oracle DB instance. The tutorial shows you how
to access the DB instance from the EC2 instance using a standard Oracle client. As a best practice, this
tutorial creates a private DB instance in a virtual private cloud (VPC). In most cases, other resources in
the same VPC, such as EC2 instances, can access the DB instance, but resources outside of the VPC can't
access it.

After you complete the tutorial, there is a public and private subnet in each Availability Zone in your VPC.
In one Availability Zone, the EC2 instance is in the public subnet, and the DB instance is in the private
subnet.
Important
There's no charge for creating an AWS account. However, by completing this tutorial, you might
incur costs for the AWS resources you use. You can delete these resources after you complete
the tutorial if they are no longer needed.

The following diagram shows the configuration when the tutorial is complete.

This tutorial uses Easy create to create a DB instance running Oracle with the AWS Management
Console. With Easy create, you specify only the DB engine type, DB instance size, and DB instance
identifier. Easy create uses the default settings for the other configuration options. The DB instance
created by Easy create is private.

When you use Standard create instead of Easy create, you can specify more configuration options when
you create a DB instance, including ones for availability, security, backups, and maintenance. To create
a public DB instance, you must use Standard create. For information about creating DB instances with
Standard create, see Creating an Amazon RDS DB instance (p. 300).

Topics
• Prerequisites (p. 223)
• Step 1: Create an EC2 instance (p. 223)
• Step 2: Create an Oracle DB instance (p. 226)
• Step 3: Connect your SQL client to an Oracle DB instance (p. 231)
• Step 4: Delete the EC2 instance and DB instance (p. 234)
• (Optional) Connect your DB instance to a Lambda function (p. 234)


Prerequisites
Before you begin, complete the steps in the following sections:

• Sign up for an AWS account (p. 174)


• Create an administrative user (p. 174)

Step 1: Create an EC2 instance


Create an Amazon EC2 instance that you will use to connect to your database.

To create an EC2 instance

1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the upper-right corner of the AWS Management Console, choose the AWS Region in which you
want to create the EC2 instance.
3. Choose EC2 Dashboard, and then choose Launch instance, as shown in the following image.


The Launch an instance page opens.


4. Choose the following settings on the Launch an instance page.

a. Under Name and tags, for Name, enter ec2-database-connect.


b. Under Application and OS Images (Amazon Machine Image), choose Amazon Linux, and then
choose the Amazon Linux 2023 AMI. Keep the default selections for the other choices.

c. Under Instance type, choose t2.micro.


d. Under Key pair (login), choose a Key pair name to use an existing key pair. To create a new key
pair for the Amazon EC2 instance, choose Create new key pair and then use the Create key pair
window to create it.

For more information about creating a new key pair, see Create a key pair in the Amazon EC2
User Guide for Linux Instances.
e. For Allow SSH traffic in Network settings, choose the source of SSH connections to the EC2
instance.

You can choose My IP if the displayed IP address is correct for SSH connections. Otherwise, you
can determine the IP address to use to connect to EC2 instances in your VPC using Secure Shell
(SSH). To determine your public IP address, in a different browser window or tab, you can use
the service at https://checkip.amazonaws.com. An example of an IP address is 192.0.2.1/32.


In many cases, you might connect through an internet service provider (ISP) or from behind your
firewall without a static IP address. If so, make sure to determine the range of IP addresses used
by client computers.
Warning
If you use 0.0.0.0/0 for SSH access, you make it possible for all IP addresses to access
your public EC2 instances using SSH. This approach is acceptable for a short time in a
test environment, but it's unsafe for production environments. In production, authorize
only a specific IP address or range of addresses to access your EC2 instances using SSH.

The following image shows an example of the Network settings section.

f. Leave the default values for the remaining sections.


g. Review a summary of your EC2 instance configuration in the Summary panel, and when you're
ready, choose Launch instance.
5. On the Launch Status page, note the identifier for your new EC2 instance, for example:
i-1234567890abcdef0.


6. Choose the EC2 instance identifier to open the list of EC2 instances, and then select your EC2
instance.
7. In the Details tab, note the following values, which you need when you connect using SSH:

a. In Instance summary, note the value for Public IPv4 DNS.

b. In Instance details, note the value for Key pair name.

8. Wait until the Instance state for your EC2 instance has a status of Running before continuing.

Step 2: Create an Oracle DB instance


The basic building block of Amazon RDS is the DB instance. This environment is where you run your
Oracle databases.


In this example, you use Easy create to create a DB instance running the Oracle database engine with a
db.m5.large DB instance class.

To create an Oracle DB instance with Easy create

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region in which you want to
create the DB instance.
3. In the navigation pane, choose Databases.
4. Choose Create database and make sure that Easy create is chosen.

5. In Configuration, choose Oracle.


6. For DB instance size, choose Dev/Test.
7. For DB instance identifier, enter database-test1.
8. For Master username, enter a name for the master user, or keep the default name.

The Create database page should look similar to the following image.


9. To use an automatically generated master password for the DB instance, select Auto generate a
password.

To enter your master password, make sure Auto generate a password is cleared, and then enter the
same password in Master password and Confirm password.
10. To set up a connection with the EC2 instance you created previously, open Set up EC2 connection -
optional.

Select Connect to an EC2 compute resource. Choose the EC2 instance you created previously.

11. Open View default settings for Easy create.


You can examine the default settings used with Easy create. The Editable after database is created
column shows which options you can change after you create the database.

• If a setting has No in that column, and you want a different setting, you can use Standard create
to create the DB instance.
• If a setting has Yes in that column, and you want a different setting, you can either use Standard
create to create the DB instance, or modify the DB instance after you create it to change the
setting.
12. Choose Create database.

To view the master username and password for the DB instance, choose View credential details.

You can use the username and password that appears to connect to the DB instance as the master
user.


Important
You can't view the master user password again. If you don't record it, you might have to
change it.
If you need to change the master user password after the DB instance is available, you can
modify the DB instance to do so. For more information about modifying a DB instance, see
Modifying an Amazon RDS DB instance (p. 401).
13. In the Databases list, choose the name of the new Oracle DB instance to show its details.

The DB instance has a status of Creating until it is ready to use.

When the status changes to Available, you can connect to the DB instance. Depending on the DB
instance class and the amount of storage, it can take up to 20 minutes before the new instance is
available. While the DB instance is being created, you can move on to the next step and prepare your
SQL client on the EC2 instance that you created earlier.
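
While you wait, you can check the provisioning status from the command line if you prefer. The following
AWS CLI sketch assumes the identifier used in this tutorial; the endpoint fields stay empty until the DB
instance is available.

# Show the current status, endpoint address, and port of the DB instance.
aws rds describe-db-instances \
    --db-instance-identifier database-test1 \
    --query 'DBInstances[0].[DBInstanceStatus,Endpoint.Address,Endpoint.Port]' \
    --output text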

Step 3: Connect your SQL client to an Oracle DB instance
You can use any standard SQL client application to connect to your DB instance. In this example, you
connect to an Oracle DB instance using the Oracle command-line client.

To connect to an Oracle DB instance

1. Find the endpoint (DNS name) and port number for your DB instance.

a. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
b. In the upper-right corner of the Amazon RDS console, choose the AWS Region for the DB
instance.
c. In the navigation pane, choose Databases.
d. Choose the Oracle DB instance name to display its details.
e. On the Connectivity & security tab, copy the endpoint. Also, note the port number. You need
both the endpoint and the port number to connect to the DB instance.


2. Connect to the EC2 instance that you created earlier by following the steps in Connect to your Linux
instance in the Amazon EC2 User Guide for Linux Instances.

We recommend that you connect to your EC2 instance using SSH. If the SSH client utility is installed
on Windows, Linux, or Mac, you can connect to the instance using the following command format:

ssh -i location_of_pem_file ec2-user@ec2-instance-public-dns-name

For example, assume that ec2-database-connect-key-pair.pem is stored in /dir1 on Linux, and the
public IPv4 DNS for your EC2 instance is ec2-12-345-678-90.compute-1.amazonaws.com. Your SSH
command would look as follows:

ssh -i /dir1/ec2-database-connect-key-pair.pem [email protected]

3. Get the latest bug fixes and security updates by updating the software on your EC2 instance. To do
so, use the following command.
Note
The -y option installs the updates without asking for confirmation. To examine updates
before installing, omit this option.

sudo dnf update -y

4. In a web browser, go to https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html.
5. For the latest database version that appears on the web page, copy the .rpm links (not the .zip links)
for the Instant Client Basic Package and SQL*Plus Package. For example, the following links are for
Oracle Database version 21.9:


• https://download.oracle.com/otn_software/linux/instantclient/219000/oracle-instantclient-basic-21.9.0.0.0-1.el8.x86_64.rpm
• https://download.oracle.com/otn_software/linux/instantclient/219000/oracle-instantclient-sqlplus-21.9.0.0.0-1.el8.x86_64.rpm
6. In your SSH session, run the wget command to download the .rpm files from the links that you
obtained in the previous step. The following example downloads the .rpm files for Oracle Database
version 21.9:

wget https://download.oracle.com/otn_software/linux/instantclient/219000/oracle-instantclient-basic-21.9.0.0.0-1.el8.x86_64.rpm
wget https://download.oracle.com/otn_software/linux/instantclient/219000/oracle-instantclient-sqlplus-21.9.0.0.0-1.el8.x86_64.rpm

7. Install the packages by running the dnf command as follows:

sudo dnf install oracle-instantclient-*.rpm

8. Start SQL*Plus and connect to the Oracle DB instance. For example, enter the following command.

Substitute the DB instance endpoint (DNS name) for oracle-db-instance-endpoint and substitute the
master user name that you used for admin. When you use Easy create for Oracle, the database name is
DATABASE. Provide the master password that you used when prompted for a password.

sqlplus admin@oracle-db-instance-endpoint:1521/DATABASE

After you enter the password for the user, you should see output similar to the following.

SQL*Plus: Release 21.0.0.0.0 - Production on Wed Mar 1 16:41:28 2023
Version 21.9.0.0.0

Copyright (c) 1982, 2022, Oracle. All rights reserved.

Enter password:
Last Successful login time: Wed Mar 01 2023 16:30:52 +00:00

Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.18.0.0.0

SQL>

For more information about connecting to an RDS for Oracle DB instance, see Connecting to your
RDS for Oracle DB instance (p. 1806). If you can't connect to your DB instance, see Can't connect to
Amazon RDS DB instance (p. 2727).

For security, it is a best practice to use encrypted connections. Only use an unencrypted
Oracle connection when the client and server are in the same VPC and the network is
trusted. For information about using encrypted connections, see Securing Oracle DB instance
connections (p. 1816).
9. Run SQL commands.

For example, the following SQL command shows the current date:

SELECT SYSDATE FROM DUAL;
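
If you have the AWS CLI installed and configured, you can look up the same endpoint and port from
your EC2 instance instead of using the console steps in step 1. The following command is a minimal
sketch; oracle-db-instance-identifier is a placeholder (not a value from this guide), so substitute
the identifier of your own DB instance.

aws rds describe-db-instances \
    --db-instance-identifier oracle-db-instance-identifier \
    --query 'DBInstances[0].[Endpoint.Address,Endpoint.Port]' \
    --output text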


Step 4: Delete the EC2 instance and DB instance


After you connect to and explore the sample EC2 instance and DB instance that you created, delete them
so you're no longer charged for them.

To delete the EC2 instance

1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Instances.
3. Select the EC2 instance, and choose Instance state, Terminate instance.
4. Choose Terminate when prompted for confirmation.

For more information about deleting an EC2 instance, see Terminate your instance in the Amazon EC2
User Guide for Linux Instances.

To delete the DB instance with no final DB snapshot

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to delete.
4. For Actions, choose Delete.
5. Clear Create final snapshot? and Retain automated backups.
6. Complete the acknowledgement and choose Delete.
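
If you prefer the AWS CLI for cleanup, the following sketch performs the same deletions. It assumes
the AWS CLI is configured for your account; i-1234567890abcdef0 and your-db-instance-identifier are
placeholders for your EC2 instance ID and DB instance identifier.

aws ec2 terminate-instances --instance-ids i-1234567890abcdef0

aws rds delete-db-instance \
    --db-instance-identifier your-db-instance-identifier \
    --skip-final-snapshot \
    --delete-automated-backups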

(Optional) Connect your DB instance to a Lambda function
You can also connect your RDS for Oracle DB instance to a Lambda serverless compute resource. Lambda
functions allow you to run code without provisioning or managing infrastructure. A Lambda function
also allows you to automatically respond to code execution requests at any scale, from a dozen events a
day to hundreds per second. For more information, see Automatically connecting a Lambda function
and a DB instance (p. 392).


Creating and connecting to a PostgreSQL DB instance
This tutorial creates an EC2 instance and an RDS for PostgreSQL DB instance. The tutorial shows you how
to access the DB instance from the EC2 instance using a standard PostgreSQL client. As a best practice,
this tutorial creates a private DB instance in a virtual private cloud (VPC). In most cases, other resources
in the same VPC, such as EC2 instances, can access the DB instance, but resources outside of the VPC
can't access it.

After you complete the tutorial, there is a public and private subnet in each Availability Zone in your VPC.
In one Availability Zone, the EC2 instance is in the public subnet, and the DB instance is in the private
subnet.
Important
There's no charge for creating an AWS account. However, by completing this tutorial, you might
incur costs for the AWS resources you use. You can delete these resources after you complete
the tutorial if they are no longer needed.

The following diagram shows the configuration when the tutorial is complete.

This tutorial uses Easy create to create a DB instance running PostgreSQL with the AWS Management
Console. With Easy create, you specify only the DB engine type, DB instance size, and DB instance
identifier. Easy create uses the default settings for the other configuration options. The DB instance
created by Easy create is private.

When you use Standard create instead of Easy create, you can specify more configuration options when
you create a DB instance, including ones for availability, security, backups, and maintenance. To create
a public DB instance, you must use Standard create. For information about creating DB instances with
Standard create, see Creating an Amazon RDS DB instance (p. 300).

Topics
• Prerequisites (p. 236)
• Step 1: Create an EC2 instance (p. 236)
• Step 2: Create a PostgreSQL DB instance (p. 240)
• Step 3: Connect to a PostgreSQL DB instance (p. 245)


• Step 4: Delete the EC2 instance and DB instance (p. 248)


• (Optional) Connect your DB instance to a Lambda function (p. 248)

Prerequisites
Before you begin, complete the steps in the following sections:

• Sign up for an AWS account (p. 174)


• Create an administrative user (p. 174)

Step 1: Create an EC2 instance


Create an Amazon EC2 instance that you will use to connect to your database.

To create an EC2 instance

1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the upper-right corner of the AWS Management Console, choose the AWS Region in which you
want to create the EC2 instance.
3. Choose EC2 Dashboard, and then choose Launch instance, as shown in the following image.


The Launch an instance page opens.


4. Choose the following settings on the Launch an instance page.

a. Under Name and tags, for Name, enter ec2-database-connect.


b. Under Application and OS Images (Amazon Machine Image), choose Amazon Linux, and then
choose the Amazon Linux 2023 AMI. Keep the default selections for the other choices.


c. Under Instance type, choose t2.micro.


d. Under Key pair (login), choose a Key pair name to use an existing key pair. To create a new key
pair for the Amazon EC2 instance, choose Create new key pair and then use the Create key pair
window to create it.

For more information about creating a new key pair, see Create a key pair in the Amazon EC2
User Guide for Linux Instances.
e. For Allow SSH traffic in Network settings, choose the source of SSH connections to the EC2
instance.

You can choose My IP if the displayed IP address is correct for SSH connections. Otherwise, you
can determine the IP address to use to connect to EC2 instances in your VPC using Secure Shell
(SSH). To determine your public IP address, in a different browser window or tab, you can use
the service at https://checkip.amazonaws.com. An example of an IP address is 192.0.2.1/32.

In many cases, you might connect through an internet service provider (ISP) or from behind your
firewall without a static IP address. If so, make sure to determine the range of IP addresses used
by client computers.
Warning
If you use 0.0.0.0/0 for SSH access, you make it possible for all IP addresses to access
your public EC2 instances using SSH. This approach is acceptable for a short time in a
test environment, but it's unsafe for production environments. In production, authorize
only a specific IP address or range of addresses to access your EC2 instances using SSH.


The following image shows an example of the Network settings section.

f. Leave the default values for the remaining sections.


g. Review a summary of your EC2 instance configuration in the Summary panel, and when you're
ready, choose Launch instance.
5. On the Launch Status page, note the identifier for your new EC2 instance, for example:
i-1234567890abcdef0.


6. Choose the EC2 instance identifier to open the list of EC2 instances, and then select your EC2
instance.
7. In the Details tab, note the following values, which you need when you connect using SSH:

a. In Instance summary, note the value for Public IPv4 DNS.

b. In Instance details, note the value for Key pair name.

8. Wait until the Instance state for your EC2 instance has a status of Running before continuing.
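
If you have the AWS CLI configured, you can check the same details from a terminal. The following
sketch assumes your instance ID is the example value noted in step 5; substitute your own.

aws ec2 describe-instances \
    --instance-ids i-1234567890abcdef0 \
    --query 'Reservations[0].Instances[0].[State.Name,PublicDnsName,KeyName]' \
    --output text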

Step 2: Create a PostgreSQL DB instance


The basic building block of Amazon RDS is the DB instance. This environment is where you run your
PostgreSQL databases.


In this example, you use Easy create to create a DB instance running the PostgreSQL database engine
with a db.t3.micro DB instance class.

To create a PostgreSQL DB instance with Easy create

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region in which you want to
create the DB instance.
3. In the navigation pane, choose Databases.
4. Choose Create database and make sure that Easy create is chosen.

5. In Configuration, choose PostgreSQL.


6. For DB instance size, choose Free tier.
7. For DB instance identifier, enter database-test1.
8. For Master username, enter a name for the master user, or keep the default name (postgres).

The Create database page should look similar to the following image.


9. To use an automatically generated master password for the DB instance, select Auto generate a
password.

To enter your master password, make sure Auto generate a password is cleared, and then enter the
same password in Master password and Confirm password.
10. To set up a connection with the EC2 instance you created previously, open Set up EC2 connection -
optional.

Select Connect to an EC2 compute resource. Choose the EC2 instance you created previously.


11. Open View default settings for Easy create.


You can examine the default settings used with Easy create. The Editable after database is created
column shows which options you can change after you create the database.

• If a setting has No in that column, and you want a different setting, you can use Standard create
to create the DB instance.
• If a setting has Yes in that column, and you want a different setting, you can either use Standard
create to create the DB instance, or modify the DB instance after you create it to change the
setting.
12. Choose Create database.

To view the master username and password for the DB instance, choose View credential details.

You can use the username and password that appears to connect to the DB instance as the master
user.


Important
You can't view the master user password again. If you don't record it, you might have to
change it.
If you need to change the master user password after the DB instance is available, you can
modify the DB instance to do so. For more information about modifying a DB instance, see
Modifying an Amazon RDS DB instance (p. 401).
13. In the Databases list, choose the name of the new PostgreSQL DB instance to show its details.

The DB instance has a status of Creating until it is ready to use.

When the status changes to Available, you can connect to the DB instance. Depending on the DB
instance class and the amount of storage, it can take up to 20 minutes before the new instance is
available.
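
Instead of refreshing the console, you can optionally wait for the DB instance from the AWS CLI. This
is a sketch that assumes the AWS CLI is configured and uses the database-test1 identifier from this
tutorial.

aws rds wait db-instance-available --db-instance-identifier database-test1

aws rds describe-db-instances \
    --db-instance-identifier database-test1 \
    --query 'DBInstances[0].[DBInstanceStatus,Endpoint.Address,Endpoint.Port]' \
    --output text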

Step 3: Connect to a PostgreSQL DB instance


You can connect to the DB instance using pgAdmin or psql. This example explains how to connect to a
PostgreSQL DB instance using the psql command-line client.

To connect to a PostgreSQL DB instance using psql

1. Find the endpoint (DNS name) and port number for your DB instance.

a. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
b. In the upper-right corner of the Amazon RDS console, choose the AWS Region for the DB
instance.
c. In the navigation pane, choose Databases.
d. Choose the PostgreSQL DB instance name to display its details.
e. On the Connectivity & security tab, copy the endpoint. Also note the port number. You need
both the endpoint and the port number to connect to the DB instance.


2. Connect to the EC2 instance that you created earlier by following the steps in Connect to your Linux
instance in the Amazon EC2 User Guide for Linux Instances.

We recommend that you connect to your EC2 instance using SSH. If the SSH client utility is installed
on Windows, Linux, or Mac, you can connect to the instance using the following command format:

ssh -i location_of_pem_file ec2-user@ec2-instance-public-dns-name


For example, assume that ec2-database-connect-key-pair.pem is stored in /dir1 on Linux, and the
public IPv4 DNS for your EC2 instance is ec2-12-345-678-90.compute-1.amazonaws.com. Your SSH
command would look as follows:

ssh -i /dir1/ec2-database-connect-key-pair.pem ec2-user@ec2-12-345-678-90.compute-1.amazonaws.com

3. Get the latest bug fixes and security updates by updating the software on your EC2 instance. To do
this, use the following command.
Note
The -y option installs the updates without asking for confirmation. To examine updates
before installing, omit this option.

sudo dnf update -y

4. To install the psql command-line client from PostgreSQL on Amazon Linux 2023, run the following
command:

sudo dnf install postgresql15

5. Connect to the PostgreSQL DB instance. For example, enter the following command at a command
prompt on a client computer. This action lets you connect to the PostgreSQL DB instance using the
psql client.

Substitute the DB instance endpoint (DNS name) for endpoint, substitute the name of the database
that you want to connect to (the --dbname value) for postgres, and substitute the master username
that you used for postgres. Provide the master password that you used when prompted for a password.

psql --host=endpoint --port=5432 --dbname=postgres --username=postgres

After you enter the password for the user, you should see output similar to the following:

psql (14.3, server 14.6)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

postgres=>

For more information on connecting to a PostgreSQL DB instance, see Connecting to a DB instance
running the PostgreSQL database engine (p. 2167). If you can't connect to your DB instance, see
Troubleshooting connections to your RDS for PostgreSQL instance (p. 2172).

For security, it is a best practice to use encrypted connections. Only use an unencrypted PostgreSQL
connection when the client and server are in the same VPC and the network is trusted. For
information about using encrypted connections, see Connecting to a PostgreSQL DB instance over
SSL (p. 2174).
6. Run SQL commands.

For example, the following SQL command shows the current date and time:

SELECT CURRENT_TIMESTAMP;
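
As an illustration of the encrypted-connection best practice mentioned in the previous step, you can
ask psql to require SSL by using a connection string. This is a sketch only; endpoint is a placeholder,
and full server verification would also require downloading the RDS certificate bundle and using
sslmode=verify-full.

psql "host=endpoint port=5432 dbname=postgres user=postgres sslmode=require" -c "SELECT CURRENT_TIMESTAMP;"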


Step 4: Delete the EC2 instance and DB instance


After you connect to and explore the sample EC2 instance and DB instance that you created, delete them
so you're no longer charged for them.

To delete the EC2 instance

1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Instances.
3. Select the EC2 instance, and choose Instance state, Terminate instance.
4. Choose Terminate when prompted for confirmation.

For more information about deleting an EC2 instance, see Terminate your instance in the Amazon EC2
User Guide for Linux Instances.

To delete a DB instance with no final DB snapshot

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to delete.
4. For Actions, choose Delete.
5. Clear Create final snapshot? and Retain automated backups.
6. Complete the acknowledgement and choose Delete.

(Optional) Connect your DB instance to a Lambda function
You can also connect your RDS for PostgreSQL DB instance to a Lambda serverless compute resource.
Lambda functions allow you to run code without provisioning or managing infrastructure. A Lambda
function also allows you to automatically respond to code execution requests at any scale, from a dozen
events a day to hundreds per second. For more information, see Automatically connecting a Lambda
function and a DB instance (p. 392).


Tutorial: Create a web server and an Amazon RDS DB instance
This tutorial shows you how to install an Apache web server with PHP and create a MySQL or PostgreSQL
database. The web server runs on an Amazon EC2 instance using Amazon Linux 2023, and you can
choose between a MySQL or PostgreSQL DB instance. Both the Amazon EC2 instance and the DB
instance run in a virtual private cloud (VPC) based on the Amazon VPC service.
Important
There's no charge for creating an AWS account. However, by completing this tutorial, you might
incur costs for the AWS resources you use. You can delete these resources after you complete
the tutorial if they are no longer needed.
Note
This tutorial works with Amazon Linux 2023 and might not work for other versions of Linux.

In the tutorial that follows, you create an EC2 instance that uses the default VPC, subnets, and security
group for your AWS account. This tutorial shows you how to create the DB instance and automatically set
up connectivity with the EC2 instance that you created. The tutorial then shows you how to install the
web server on the EC2 instance. You connect your web server to your DB instance in the VPC using the
DB instance endpoint.

1. Launch an EC2 instance (p. 250)


2. Create an Amazon RDS DB instance (p. 255)
3. Install a web server on your EC2 instance (p. 264)

The following diagram shows the configuration when the tutorial is complete.


Note
After you complete the tutorial, there is a public and private subnet in each Availability Zone
in your VPC. This tutorial uses the default VPC for your AWS account and automatically sets up
connectivity between your EC2 instance and DB instance. If you would rather configure a new
VPC for this scenario instead, complete the tasks in Tutorial: Create a VPC for use with a DB
instance (IPv4 only) (p. 2706).

Launch an EC2 instance


Create an Amazon EC2 instance in the public subnet of your VPC.

To launch an EC2 instance

1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the upper-right corner of the AWS Management Console, choose the AWS Region where you want
to create the EC2 instance.
3. Choose EC2 Dashboard, and then choose Launch instance, as shown following.


4. Choose the following settings on the Launch an instance page.

a. Under Name and tags, for Name, enter tutorial-ec2-instance-web-server.


b. Under Application and OS Images (Amazon Machine Image), choose Amazon Linux, and then
choose the Amazon Linux 2023 AMI. Keep the defaults for the other choices.


c. Under Instance type, choose t2.micro.


d. Under Key pair (login), choose a Key pair name to use an existing key pair. To create a new key
pair for the Amazon EC2 instance, choose Create new key pair and then use the Create key pair
window to create it.

For more information about creating a new key pair, see Create a key pair in the Amazon EC2
User Guide for Linux Instances.
e. Under Network settings, set these values and keep the other values as their defaults:

• For Allow SSH traffic from, choose the source of SSH connections to the EC2 instance.

You can choose My IP if the displayed IP address is correct for SSH connections.

Otherwise, you can determine the IP address to use to connect to EC2 instances in your VPC
using Secure Shell (SSH). To determine your public IP address, in a different browser window
or tab, you can use the service at https://checkip.amazonaws.com. An example of an IP
address is 203.0.113.25/32.

In many cases, you might connect through an internet service provider (ISP) or from behind
your firewall without a static IP address. If so, make sure to determine the range of IP
addresses used by client computers.
Warning
If you use 0.0.0.0/0 for SSH access, you make it possible for all IP addresses to
access your public instances using SSH. This approach is acceptable for a short time in a
test environment, but it's unsafe for production environments. In production, authorize
only a specific IP address or range of addresses to access your instances using SSH.
• Turn on Allow HTTPs traffic from the internet.
• Turn on Allow HTTP traffic from the internet.

f. Leave the default values for the remaining sections.


g. Review a summary of your instance configuration in the Summary panel, and when you're
ready, choose Launch instance.
5. On the Launch Status page, note the identifier for your new EC2 instance, for example:
i-1234567890abcdef0.


6. Choose the EC2 instance identifier to open the list of EC2 instances, and then select your EC2
instance.
7. In the Details tab, note the following values, which you need when you connect using SSH:

a. In Instance summary, note the value for Public IPv4 DNS.

b. In Instance details, note the value for Key pair name.

8. Wait until Instance state for your instance is Running before continuing.
9. Complete Create an Amazon RDS DB instance (p. 255).


Create an Amazon RDS DB instance


Create an RDS for MySQL or RDS for PostgreSQL DB instance that maintains the data used by a web
application.

RDS for MySQL

To create a MySQL DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the AWS Management Console, check the AWS Region. It should be
the same as the one where you created your EC2 instance.
3. In the navigation pane, choose Databases.
4. Choose Create database.
5. On the Create database page, choose Standard create.
6. For Engine options, choose MySQL.
7. For Templates, choose Free tier.

Your DB instance configuration should look similar to the following image.


8. In the Availability and durability section, keep the defaults.


9. In the Settings section, set these values:

• DB instance identifier – Type tutorial-db-instance.


• Master username – Type tutorial_user.
• Auto generate a password – Leave the option turned off.
• Master password – Type a password.
• Confirm password – Retype the password.


10. In the Instance configuration section, set these values:

• Burstable classes (includes t classes)


• db.t3.micro

11. In the Storage section, keep the defaults.


12. In the Connectivity section, set these values and keep the other values as their defaults:


• For Compute resource, choose Connect to an EC2 compute resource.


• For EC2 instance, choose the EC2 instance you created previously, such as tutorial-ec2-
instance-web-server.

13. In the Database authentication section, make sure Password authentication is selected.
14. Open the Additional configuration section, and enter sample for Initial database name. Keep
the default settings for the other options.
15. To create your MySQL DB instance, choose Create database.

Your new DB instance appears in the Databases list with the status Creating.
16. Wait for the Status of your new DB instance to show as Available. Then choose the DB instance
name to show its details.
17. In the Connectivity & security section, view the Endpoint and Port of the DB instance.


Note the endpoint and port for your DB instance. You use this information to connect your web
server to your DB instance.
18. Complete Install a web server on your EC2 instance (p. 264).

RDS for PostgreSQL

To create a PostgreSQL DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the AWS Management Console, check the AWS Region. It should be
the same as the one where you created your EC2 instance.
3. In the navigation pane, choose Databases.
4. Choose Create database.
5. On the Create database page, choose Standard create.
6. For Engine options, choose PostgreSQL.
7. For Templates, choose Free tier.


Your DB instance configuration should look similar to the following image.


8. In the Availability and durability section, keep the defaults.


9. In the Settings section, set these values:

• DB instance identifier – Type tutorial-db-instance.


• Master username – Type tutorial_user.
• Auto generate a password – Leave the option turned off.
• Master password – Type a password.
• Confirm password – Retype the password.

10. In the Instance configuration section, set these values:

• Burstable classes (includes t classes)


• db.t3.micro


11. In the Storage section, keep the defaults.


12. In the Connectivity section, set these values and keep the other values as their defaults:

• For Compute resource, choose Connect to an EC2 compute resource.


• For EC2 instance, choose the EC2 instance you created previously, such as tutorial-ec2-
instance-web-server.

13. In the Database authentication section, make sure Password authentication is selected.
14. Open the Additional configuration section, and enter sample for Initial database name. Keep
the default settings for the other options.


15. To create your PostgreSQL DB instance, choose Create database.

Your new DB instance appears in the Databases list with the status Creating.
16. Wait for the Status of your new DB instance to show as Available. Then choose the DB instance
name to show its details.
17. In the Connectivity & security section, view the Endpoint and Port of the DB instance.

Note the endpoint and port for your DB instance. You use this information to connect your web
server to your DB instance.
18. Complete Install a web server on your EC2 instance (p. 264).

Install a web server on your EC2 instance


Install a web server on the EC2 instance you created in Launch an EC2 instance (p. 250). The web
server connects to the Amazon RDS DB instance that you created in Create an Amazon RDS DB
instance (p. 255).


Install an Apache web server with PHP and MariaDB


Connect to your EC2 instance and install the web server.

To connect to your EC2 instance and install the Apache web server with PHP

1. Connect to the EC2 instance that you created earlier by following the steps in Connect to your Linux
instance in the Amazon EC2 User Guide for Linux Instances.

We recommend that you connect to your EC2 instance using SSH. If the SSH client utility is installed
on Windows, Linux, or Mac, you can connect to the instance using the following command format:

ssh -i location_of_pem_file ec2-user@ec2-instance-public-dns-name

For example, assume that ec2-database-connect-key-pair.pem is stored in /dir1 on Linux, and the
public IPv4 DNS for your EC2 instance is ec2-12-345-678-90.compute-1.amazonaws.com. Your SSH
command would look as follows:

ssh -i /dir1/ec2-database-connect-key-pair.pem ec2-user@ec2-12-345-678-90.compute-1.amazonaws.com

2. Get the latest bug fixes and security updates by updating the software on your EC2 instance. To do
this, use the following command.
Note
The -y option installs the updates without asking for confirmation. To examine updates
before installing, omit this option.

sudo dnf update -y

3. After the updates complete, install the Apache web server, PHP, and the MariaDB or PostgreSQL
client software using the following commands. Each command installs multiple software packages and
related dependencies at the same time.

MySQL

sudo dnf install -y httpd php php-mysqli mariadb105

PostgreSQL

sudo dnf install -y httpd php php-pgsql postgresql15

If you receive an error, your instance probably wasn't launched with an Amazon Linux 2023 AMI. You
might be using the Amazon Linux 2 AMI instead. You can view your version of Amazon Linux using
the following command.

cat /etc/system-release

For more information, see Updating instance software.


4. Start the web server with the command shown following.

sudo systemctl start httpd


You can test that your web server is properly installed and started. To do this, enter the public
Domain Name System (DNS) name of your EC2 instance in the address bar of a web browser, for
example: http://ec2-42-8-168-21.us-west-1.compute.amazonaws.com. If your web server
is running, then you see the Apache test page.

If you don't see the Apache test page, check your inbound rules for the VPC security group that you
created in Tutorial: Create a VPC for use with a DB instance (IPv4 only) (p. 2706). Make sure that
your inbound rules include one allowing HTTP (port 80) access for the IP address to connect to the
web server.
Note
The Apache test page appears only when there is no content in the document root
directory, /var/www/html. After you add content to the document root directory, your
content appears at the public DNS address of your EC2 instance instead of the Apache
test page.
5. Configure the web server to start with each system boot using the systemctl command.

sudo systemctl enable httpd
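
To confirm that the web server is running and enabled, you can run checks such as the following. This
is a sketch; curl is included with Amazon Linux 2023, but install it with dnf if it's missing.

sudo systemctl status httpd --no-pager
sudo systemctl is-enabled httpd
curl -sI http://localhost | head -n 1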

To allow ec2-user to manage files in the default root directory for your Apache web server, modify the
ownership and permissions of the /var/www directory. There are many ways to accomplish this task.
In this tutorial, you add ec2-user to the apache group, to give the apache group ownership of the
/var/www directory and assign write permissions to the group.

To set file permissions for the Apache web server

1. Add the ec2-user user to the apache group.

sudo usermod -a -G apache ec2-user

2. Log out to refresh your permissions and include the new apache group.

exit

3. Log back in again and verify that the apache group exists with the groups command.

groups

Your output looks similar to the following:

ec2-user adm wheel apache systemd-journal

4. Change the group ownership of the /var/www directory and its contents to the apache group.

sudo chown -R ec2-user:apache /var/www

5. Change the directory permissions of /var/www and its subdirectories to add group write
permissions and set the group ID on subdirectories created in the future.

sudo chmod 2775 /var/www


find /var/www -type d -exec sudo chmod 2775 {} \;

6. Recursively change the permissions for files in the /var/www directory and its subdirectories to add
group write permissions.


find /var/www -type f -exec sudo chmod 0664 {} \;

Now, ec2-user (and any future members of the apache group) can add, delete, and edit files in the
Apache document root. This makes it possible for you to add content, such as a static website or a PHP
application.
Note
A web server running the HTTP protocol provides no transport security for the data that it sends
or receives. When you connect to an HTTP server using a web browser, much information is
visible to eavesdroppers anywhere along the network pathway. This information includes the
URLs that you visit, the content of web pages that you receive, and the contents (including
passwords) of any HTML forms.
The best practice for securing your web server is to install support for HTTPS (HTTP Secure).
This protocol protects your data with SSL/TLS encryption. For more information, see Tutorial:
Configure SSL/TLS with the Amazon Linux AMI in the Amazon EC2 User Guide.

Connect your Apache web server to your DB instance


Next, you add content to your Apache web server that connects to your Amazon RDS DB instance.

To add content to the Apache web server that connects to your DB instance

1. While still connected to your EC2 instance, change the directory to /var/www and create a new
subdirectory named inc.

cd /var/www
mkdir inc
cd inc

2. Create a new file in the inc directory named dbinfo.inc, and then edit the file by calling nano (or
the editor of your choice).

>dbinfo.inc
nano dbinfo.inc

3. Add the following contents to the dbinfo.inc file. Here, db_instance_endpoint is the endpoint
of your DB instance, without the port.
Note
We recommend placing the user name and password information in a folder that isn't part
of the document root for your web server. Doing this reduces the possibility of your security
information being exposed.
Make sure to change master password to a suitable password in your application.

<?php

define('DB_SERVER', 'db_instance_endpoint');
define('DB_USERNAME', 'tutorial_user');
define('DB_PASSWORD', 'master password');
define('DB_DATABASE', 'sample');
?>

4. Save and close the dbinfo.inc file. If you are using nano, save and close the file by using Ctrl+S
and Ctrl+X.
5. Change the directory to /var/www/html.


cd /var/www/html

6. Create a new file in the html directory named SamplePage.php, and then edit the file by calling
nano (or the editor of your choice).

>SamplePage.php
nano SamplePage.php

7. Add the following contents to the SamplePage.php file:

MySQL

<?php include "../inc/dbinfo.inc"; ?>


<html>
<body>
<h1>Sample page</h1>
<?php

/* Connect to MySQL and select the database. */


$connection = mysqli_connect(DB_SERVER, DB_USERNAME, DB_PASSWORD);

if (mysqli_connect_errno()) echo "Failed to connect to MySQL: " . mysqli_connect_error();

$database = mysqli_select_db($connection, DB_DATABASE);

/* Ensure that the EMPLOYEES table exists. */


VerifyEmployeesTable($connection, DB_DATABASE);

/* If input fields are populated, add a row to the EMPLOYEES table. */


$employee_name = htmlentities($_POST['NAME']);
$employee_address = htmlentities($_POST['ADDRESS']);

if (strlen($employee_name) || strlen($employee_address)) {
AddEmployee($connection, $employee_name, $employee_address);
}
?>

<!-- Input form -->


<form action="<?PHP echo $_SERVER['SCRIPT_NAME'] ?>" method="POST">
<table border="0">
<tr>
<td>NAME</td>
<td>ADDRESS</td>
</tr>
<tr>
<td>
<input type="text" name="NAME" maxlength="45" size="30" />
</td>
<td>
<input type="text" name="ADDRESS" maxlength="90" size="60" />
</td>
<td>
<input type="submit" value="Add Data" />
</td>
</tr>
</table>
</form>

<!-- Display table data. -->


<table border="1" cellpadding="2" cellspacing="2">
<tr>


<td>ID</td>
<td>NAME</td>
<td>ADDRESS</td>
</tr>

<?php

$result = mysqli_query($connection, "SELECT * FROM EMPLOYEES");

while($query_data = mysqli_fetch_row($result)) {
echo "<tr>";
echo "<td>",$query_data[0], "</td>",
"<td>",$query_data[1], "</td>",
"<td>",$query_data[2], "</td>";
echo "</tr>";
}
?>

</table>

<!-- Clean up. -->


<?php

mysqli_free_result($result);
mysqli_close($connection);

?>

</body>
</html>

<?php

/* Add an employee to the table. */


function AddEmployee($connection, $name, $address) {
$n = mysqli_real_escape_string($connection, $name);
$a = mysqli_real_escape_string($connection, $address);

$query = "INSERT INTO EMPLOYEES (NAME, ADDRESS) VALUES ('$n', '$a');";

if(!mysqli_query($connection, $query)) echo("<p>Error adding employee data.</p>");
}

/* Check whether the table exists and, if not, create it. */


function VerifyEmployeesTable($connection, $dbName) {
if(!TableExists("EMPLOYEES", $connection, $dbName))
{
$query = "CREATE TABLE EMPLOYEES (
ID int(11) UNSIGNED AUTO_INCREMENT PRIMARY KEY,
NAME VARCHAR(45),
ADDRESS VARCHAR(90)
)";

if(!mysqli_query($connection, $query)) echo("<p>Error creating table.</p>");


}
}

/* Check for the existence of a table. */


function TableExists($tableName, $connection, $dbName) {
$t = mysqli_real_escape_string($connection, $tableName);
$d = mysqli_real_escape_string($connection, $dbName);

$checktable = mysqli_query($connection,
"SELECT TABLE_NAME FROM information_schema.TABLES WHERE TABLE_NAME = '$t' AND TABLE_SCHEMA = '$d'");

if(mysqli_num_rows($checktable) > 0) return true;

return false;
}
?>

PostgreSQL

<?php include "../inc/dbinfo.inc"; ?>

<html>
<body>
<h1>Sample page</h1>
<?php

/* Connect to PostgreSQL and select the database. */


$constring = "host=" . DB_SERVER . " dbname=" . DB_DATABASE . " user=" .
DB_USERNAME . " password=" . DB_PASSWORD ;
$connection = pg_connect($constring);

if (!$connection){
echo "Failed to connect to PostgreSQL";
exit;
}

/* Ensure that the EMPLOYEES table exists. */


VerifyEmployeesTable($connection, DB_DATABASE);

/* If input fields are populated, add a row to the EMPLOYEES table. */


$employee_name = htmlentities($_POST['NAME']);
$employee_address = htmlentities($_POST['ADDRESS']);

if (strlen($employee_name) || strlen($employee_address)) {
AddEmployee($connection, $employee_name, $employee_address);
}

?>

<!-- Input form -->


<form action="<?PHP echo $_SERVER['SCRIPT_NAME'] ?>" method="POST">
<table border="0">
<tr>
<td>NAME</td>
<td>ADDRESS</td>
</tr>
<tr>
<td>
<input type="text" name="NAME" maxlength="45" size="30" />
</td>
<td>
<input type="text" name="ADDRESS" maxlength="90" size="60" />
</td>
<td>
<input type="submit" value="Add Data" />
</td>
</tr>
</table>
</form>
<!-- Display table data. -->
<table border="1" cellpadding="2" cellspacing="2">


<tr>
<td>ID</td>
<td>NAME</td>
<td>ADDRESS</td>
</tr>

<?php

$result = pg_query($connection, "SELECT * FROM EMPLOYEES");

while($query_data = pg_fetch_row($result)) {
echo "<tr>";
echo "<td>",$query_data[0], "</td>",
"<td>",$query_data[1], "</td>",
"<td>",$query_data[2], "</td>";
echo "</tr>";
}
?>
</table>

<!-- Clean up. -->


<?php

pg_free_result($result);
pg_close($connection);
?>
</body>
</html>

<?php

/* Add an employee to the table. */


function AddEmployee($connection, $name, $address) {
$n = pg_escape_string($name);
$a = pg_escape_string($address);
echo "Forming Query";
$query = "INSERT INTO EMPLOYEES (NAME, ADDRESS) VALUES ('$n', '$a');";

if(!pg_query($connection, $query)) echo("<p>Error adding employee data.</p>");


}

/* Check whether the table exists and, if not, create it. */


function VerifyEmployeesTable($connection, $dbName) {
if(!TableExists("EMPLOYEES", $connection, $dbName))
{
$query = "CREATE TABLE EMPLOYEES (
ID serial PRIMARY KEY,
NAME VARCHAR(45),
ADDRESS VARCHAR(90)
)";

if(!pg_query($connection, $query)) echo("<p>Error creating table.</p>");


}
}
/* Check for the existence of a table. */
function TableExists($tableName, $connection, $dbName) {
$t = strtolower(pg_escape_string($tableName)); //table name is case sensitive
$d = pg_escape_string($dbName); //schema is 'public' instead of 'sample' db name so not using that

$query = "SELECT TABLE_NAME FROM information_schema.TABLES WHERE TABLE_NAME =


'$t';";
$checktable = pg_query($connection, $query);

if (pg_num_rows($checktable) >0) return true;


return false;

}
?>

8. Save and close the SamplePage.php file.


9. Verify that your web server successfully connects to your DB instance by opening a web browser
and browsing to http://EC2 instance endpoint/SamplePage.php, for example:
http://ec2-55-122-41-31.us-west-2.compute.amazonaws.com/SamplePage.php.

You can use SamplePage.php to add data to your DB instance. The data that you add is then displayed
on the page. To verify that the data was inserted into the table, install the MySQL client on the Amazon EC2
instance. Then connect to the DB instance and query the table.

For information about installing the MySQL client and connecting to a DB instance, see Connecting to a
DB instance running the MySQL database engine (p. 1630).
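
As a sketch of that verification, assuming you chose MySQL, used the sample initial database from this
tutorial, and installed the MariaDB client earlier, a query such as the following lists the rows that
SamplePage.php inserted. Replace your-db-instance-endpoint with the endpoint you noted. If you chose
PostgreSQL instead, the psql equivalent follows.

mysql -h your-db-instance-endpoint -P 3306 -u tutorial_user -p sample -e "SELECT * FROM EMPLOYEES;"

psql --host=your-db-instance-endpoint --port=5432 --dbname=sample --username=tutorial_user -c "SELECT * FROM EMPLOYEES;"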

To make sure that your DB instance is as secure as possible, verify that sources outside of the VPC can't
connect to your DB instance.

After you have finished testing your web server and your database, you should delete your DB instance
and your Amazon EC2 instance.

• To delete a DB instance, follow the instructions in Deleting a DB instance (p. 489). You don't need to
create a final snapshot.
• To terminate an Amazon EC2 instance, follow the instructions in Terminate your instance in the Amazon
EC2 User Guide.


Tutorial: Using a Lambda function to access an Amazon RDS database
In this tutorial, you use a Lambda function to write data to an Amazon Relational Database Service
(Amazon RDS) database through RDS Proxy. Your Lambda function reads records from an Amazon
Simple Queue Service (Amazon SQS) queue and writes a new item to a table in your database whenever
a message is added. In this example, you use the AWS Management Console to manually add messages
to your queue. The following diagram shows the AWS resources you use to complete the tutorial.

With Amazon RDS, you can run a managed relational database in the cloud using common database
products like Microsoft SQL Server, MariaDB, MySQL, Oracle Database, and PostgreSQL. By using
Lambda to access your database, you can read and write data in response to events, such as a new
customer registering with your website. Your function, database instance, and proxy scale automatically
to meet periods of high demand.

To complete this tutorial, you carry out the following tasks:

1. Launch an RDS for MySQL database instance and a proxy in your AWS account's default VPC.
2. Create and test a Lambda function that creates a new table in your database and writes data to it.
3. Create an Amazon SQS queue and configure it to invoke your Lambda function whenever a new
message is added.
4. Test the complete setup by adding messages to your queue using the AWS Management Console and
monitoring the results using CloudWatch Logs.

By completing these steps, you learn:

• How to use Amazon RDS to create a database instance and a proxy, and connect a Lambda function to
the proxy.
• How to use Lambda to perform create and read operations on an Amazon RDS database.
• How to use Amazon SQS to invoke a Lambda function.

You can complete this tutorial using the AWS Management Console or the AWS Command Line Interface
(AWS CLI).


Prerequisites
Before you begin, complete the steps in the following sections:

• Sign up for an AWS account (p. 174)


• Create an administrative user (p. 174)

Create an Amazon RDS DB instance

An Amazon RDS DB instance is an isolated database environment running in the AWS Cloud. An instance
can contain one or more user-created databases. Unless you specify otherwise, Amazon RDS creates
new database instances in the default VPC included in your AWS account. For more information about
Amazon VPC, see the Amazon Virtual Private Cloud User Guide.

In this tutorial, you create a new instance in your AWS account's default VPC and create a database
named ExampleDB in that instance. You can create your DB instance and database using either the AWS
Management Console or the AWS CLI.

To create a database instance

1. Open the Amazon RDS console and choose Create database.


2. Leave the Standard create option selected, then in Engine options, choose MySQL.
3. In Templates, choose Free tier.
4. In Settings, for DB instance identifier, enter MySQLForLambda.
5. Set your username and password by doing the following:

a. In Credentials settings, leave Master username set to admin.


b. For Master password, enter and confirm a password to access your database.
6. Specify the database name by doing the following:

• Leave all the remaining default options selected and scroll down to the Additional configuration
section.
• Expand this section and enter ExampleDB as the Initial database name.
7. Leave all the remaining default options selected and choose Create database.
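
If you prefer the AWS CLI, a roughly equivalent sketch follows. The db.t3.micro class and 20 GiB of
storage are assumptions that mirror typical free tier defaults, and your-password is a placeholder.
Note that the CLI doesn't set up the proxy and Lambda connection that the console wizard creates in
the next section.

aws rds create-db-instance \
    --db-instance-identifier MySQLForLambda \
    --engine mysql \
    --db-instance-class db.t3.micro \
    --allocated-storage 20 \
    --master-username admin \
    --master-user-password your-password \
    --db-name ExampleDB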


Create Lambda function and proxy

You can use the RDS console to create a Lambda function and a proxy in the same VPC as the database.
Note
You can only create these associated resources when your database has completed creation and
is in Available status.

To create an associated function and proxy

1. From the Databases page, check whether your database is in the Available status. If so, proceed to the
next step. Otherwise, wait until your database is available.
2. Select your database and choose Set up Lambda connection from Actions.
3. In the Set up Lambda connection page, choose Create new function.

Set the New Lambda function name to LambdaFunctionWithRDS.


4. In the RDS Proxy section, select the Connect using RDS Proxy option. Then choose Create new
proxy.

• For Database credentials, choose Database username and password.


• For Username, specify admin.
• For Password, enter the password you created for your database instance.
5. Select Set up to complete the proxy and Lambda function creation.

The wizard completes the setup and provides a link to the Lambda console to review your new function.
Note the proxy endpoint before switching to the Lambda console.
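
If you need to look up the proxy endpoint later, a sketch like the following lists your proxies and
their endpoints. It assumes the AWS CLI is configured; because the wizard chooses the proxy name for
you, the command lists all proxies in the current Region.

aws rds describe-db-proxies \
    --query 'DBProxies[*].[DBProxyName,Endpoint]' \
    --output table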

Create a function execution role

Before you create your Lambda function, you create an execution role to give your function the
necessary permissions. For this tutorial, Lambda needs permission to manage the network connection to
the VPC containing your database instance and to poll messages from an Amazon SQS queue.

To give your Lambda function the permissions it needs, this tutorial uses IAM managed policies. These
are policies that grant permissions for many common use cases and are available in your AWS account.
For more information about using managed policies, see Policy best practices (p. 2616).


To create the Lambda execution role

1. Open the Roles page of the IAM console and choose Create role.
2. For the Trusted entity type, choose AWS service, and for the Use case, choose Lambda.
3. Choose Next.
4. Add the IAM managed policies by doing the following:

a. Using the policy search box, search for AWSLambdaSQSQueueExecutionRole.


b. In the results list, select the check box next to the role, then choose Clear filters.
c. Using the policy search box, search for AWSLambdaVPCAccessExecutionRole.
d. In the results list, select the check box next to the role, then choose Next.
5. For the Role name, enter lambda-vpc-sqs-role, then choose Create role.

Later in the tutorial, you need the Amazon Resource Name (ARN) of the execution role you just created.

To find the execution role ARN

1. Open the Roles page of the IAM console and choose your role (lambda-vpc-sqs-role).
2. Copy the ARN displayed in the Summary section.
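
Alternatively, the following AWS CLI sketch prints the same ARN, assuming the AWS CLI is configured
for your account.

aws iam get-role --role-name lambda-vpc-sqs-role --query 'Role.Arn' --output text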

Create a Lambda deployment package

The following example Python code uses the PyMySQL package to open a connection to your database.
The first time you invoke your function, it also creates a new table called Customer. The table uses the
following schema, where CustID is the primary key:

Customer(CustID, Name)

The function also uses PyMySQL to add records to this table. The function adds records using customer
IDs and names specified in messages you will add to your Amazon SQS queue.

The code creates the connection to your database outside of the handler function. Creating the
connection in the initialization code allows the connection to be re-used by subsequent invocations
of your function and improves performance. In a production application, you can also use provisioned
concurrency to initialize a requested number of database connections. These connections are available as
soon as your function is invoked.

import sys
import logging
import pymysql
import json
import os


# rds settings
user_name = os.environ['USER_NAME']
password = os.environ['PASSWORD']
rds_proxy_host = os.environ['RDS_PROXY_HOST']
db_name = os.environ['DB_NAME']

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# create the database connection outside of the handler to allow connections to be
# re-used by subsequent function invocations.
try:
    conn = pymysql.connect(host=rds_proxy_host, user=user_name, passwd=password, db=db_name, connect_timeout=5)
except pymysql.MySQLError as e:
    logger.error("ERROR: Unexpected error: Could not connect to MySQL instance.")
    logger.error(e)
    sys.exit()

logger.info("SUCCESS: Connection to RDS for MySQL instance succeeded")

def lambda_handler(event, context):
    """
    This function creates a new RDS database table and writes records to it
    """
    message = event['Records'][0]['body']
    data = json.loads(message)
    CustID = data['CustID']
    Name = data['Name']

    item_count = 0
    sql_string = f"insert into Customer (CustID, Name) values({CustID}, '{Name}')"

    with conn.cursor() as cur:
        cur.execute("create table if not exists Customer ( CustID int NOT NULL, Name varchar(255) NOT NULL, PRIMARY KEY (CustID))")
        cur.execute(sql_string)
        conn.commit()
        cur.execute("select * from Customer")
        logger.info("The following items have been added to the database:")
        for row in cur:
            item_count += 1
            logger.info(row)
        conn.commit()

    return "Added %d items to RDS for MySQL table" %(item_count)

Note
In this example, your database access credentials are stored as environment variables. In
production applications, we recommend that you use AWS Secrets Manager as a more secure
option. Note that if your Lambda function is in a VPC, to connect to Secrets Manager you need
to create a VPC endpoint. See How to connect to Secrets Manager service within a Virtual
Private Cloud to learn more.

To include the PyMySQL dependency with your function code, create a .zip deployment package. The
following commands work for Linux, macOS, or Unix:

To create a .zip deployment package

1. Save the example code as a file named lambda_function.py.


2. In the same directory in which you created your lambda_function.py file, create a new directory
named package and install the PyMySQL library.


mkdir package
pip install --target package pymysql

3. Create a zip file containing your application code and the PyMySQL library. On Linux or macOS,
run the following CLI commands. On Windows, use your preferred zip tool to create the
lambda_function.zip file. Your lambda_function.py source code file and the folders
containing your dependencies must be installed at the root of the .zip file.

cd package
zip -r ../lambda_function.zip .
cd ..
zip lambda_function.zip lambda_function.py

You can also create your deployment package using a Python virtual environment. See Deploy
Python Lambda functions with .zip file archives.

Update the Lambda function


Using the .zip package you just created, you now update your Lambda function using the Lambda
console. To enable your function to access your database, you also need to configure environment
variables with your access credentials.

To update the Lambda function

1. Open the Functions page of the Lambda console and choose your function
LambdaFunctionWithRDS.
2. Change the Runtime of the function to Python 3.10.
3. Change the Handler to lambda_function.lambda_handler.
4. In the Code tab, choose Upload from and then .zip file.
5. Select the lambda_function.zip file you created in the previous stage and choose Save.

Now configure the function with the execution role you created earlier. This grants the function the
permissions it needs to access your database instance and poll an Amazon SQS queue.

To configure the function's execution role

1. In the Functions page of the Lambda console, select the Configuration tab, then choose
Permissions.
2. In Execution role, choose Edit.
3. In Existing role, choose your execution role (lambda-vpc-sqs-role).
4. Choose Save.

To configure your function's environment variables

1. In the Functions page of the Lambda console, select the Configuration tab, then choose
Environment variables.
2. Choose Edit.
3. To add your database access credentials, do the following:

a. Choose Add environment variable, then for Key enter USER_NAME and for Value enter admin.


b. Choose Add environment variable, then for Key enter DB_NAME and for Value enter
ExampleDB.
c. Choose Add environment variable, then for Key enter PASSWORD and for Value enter the
password you chose when you created your database.
d. Choose Add environment variable, then for Key enter RDS_PROXY_HOST and for Value enter
the RDS Proxy endpoint you noted earlier.
e. Choose Save.
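
You can set the same variables in a single AWS CLI call if you prefer. This is a sketch;
your-password and your-proxy-endpoint are placeholders for the values you entered above.

aws lambda update-function-configuration \
    --function-name LambdaFunctionWithRDS \
    --environment "Variables={USER_NAME=admin,DB_NAME=ExampleDB,PASSWORD=your-password,RDS_PROXY_HOST=your-proxy-endpoint}"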

Test your Lambda function in the console

You can now use the Lambda console to test your function. You create a test event which mimics the
data your function will receive when you invoke it using Amazon SQS in the final stage of the tutorial.
Your test event contains a JSON object specifying a customer ID and customer name to add to the
Customer table your function creates.

To test the Lambda function

1. Open the Functions page of the Lambda console and choose your function.
2. Choose the Test section.
3. Choose Create new event and enter myTestEvent for the event name.
4. Copy the following code into Event JSON and choose Save.

{
"Records": [
{
"messageId": "059f36b4-87a3-44ab-83d2-661975830a7d",
"receiptHandle": "AQEBwJnKyrHigUMZj6rYigCgxlaS3SLy0a...",
"body": "{\n \"CustID\": 1021,\n \"Name\": \"Martha Rivera\"\n}",
"attributes": {
"ApproximateReceiveCount": "1",
"SentTimestamp": "1545082649183",
"SenderId": "AIDAIENQZJOLO23YVJ4VO",
"ApproximateFirstReceiveTimestamp": "1545082649185"
},
"messageAttributes": {},
"md5OfBody": "e4e68fb7bd0e697a0ae8f1bb342846b3",
"eventSource": "aws:sqs",
"eventSourceARN": "arn:aws:sqs:us-west-2:123456789012:my-queue",
"awsRegion": "us-west-2"
}
]
}

5. Choose Test.


In the Execution results tab, you should see results similar to the following displayed in the Function
Logs:

[INFO] 2023-02-14T19:31:35.149Z bdd06682-00c7-4d6f-9abb-89f4bbb4a27f The following items have been added to the database:
[INFO] 2023-02-14T19:31:35.149Z bdd06682-00c7-4d6f-9abb-89f4bbb4a27f (1021, 'Martha Rivera')
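
You can run the same test from the AWS CLI by saving the event JSON to a file and invoking the
function. This is a sketch; myTestEvent.json and response.json are assumed file names, and the
--cli-binary-format option applies to AWS CLI version 2.

aws lambda invoke \
    --function-name LambdaFunctionWithRDS \
    --cli-binary-format raw-in-base64-out \
    --payload file://myTestEvent.json \
    response.json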

Create an Amazon SQS queue

You have successfully tested the integration of your Lambda function and Amazon RDS database
instance. Now you create the Amazon SQS queue you will use to invoke your Lambda function in the
final stage of the tutorial.

To create the Amazon SQS queue (console)

1. Open the Queues page of the Amazon SQS console and select Create queue.
2. Leave the Type as Standard and enter LambdaRDSQueue for the name of your queue.
3. Leave all the default options selected and choose Create queue.

Create an event source mapping to invoke your Lambda function

An event source mapping is a Lambda resource which reads items from a stream or queue and invokes
a Lambda function. When you configure an event source mapping, you can specify a batch size so that
records from your stream or queue are batched together into a single payload. In this example, you
set the batch size to 1 so that your Lambda function is invoked every time you send a message to your
queue. You can configure the event source mapping using either the AWS CLI or the Lambda console.
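
For example, the following AWS SDK for Python (Boto3) sketch creates the same mapping programmatically. It assumes the queue (LambdaRDSQueue) and function (LambdaFunctionWithRDS) names used in this tutorial and that the queue already exists.

import boto3

sqs = boto3.client("sqs")
lambda_client = boto3.client("lambda")

# Look up the ARN of the queue created in the previous stage.
queue_url = sqs.get_queue_url(QueueName="LambdaRDSQueue")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Batch size 1 means every message sent to the queue invokes the function.
mapping = lambda_client.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="LambdaFunctionWithRDS",
    BatchSize=1,
)
print(mapping["UUID"], mapping["State"])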

To create an event source mapping (console)

1. Open the Functions page of the Lambda console and select your function
(LambdaFunctionWithRDS).


2. In the Function overview section, choose Add trigger.


3. For the source, select Amazon SQS, then select the name of your queue (LambdaRDSQueue).
4. For Batch size, enter 1.
5. Leave all the other options set to the default values and choose Add.

You are now ready to test your complete setup by adding a message to your Amazon SQS queue.

Test and monitor your setup

To test your complete setup, add messages to your Amazon SQS queue using the console. You then use
CloudWatch Logs to confirm that your Lambda function is writing records to your database as expected.

To test and monitor your setup

1. Open the Queues page of the Amazon SQS console and select your queue (LambdaRDSQueue).
2. Choose Send and receive messages and paste the following JSON into the Message body in the
Send message section.

{
    "CustID": 1054,
    "Name": "Richard Roe"
}

3. Choose Send message.

Sending your message to the queue will cause Lambda to invoke your function through your event
source mapping. To confirm that Lambda has invoked your function as expected, use CloudWatch
Logs to verify that the function has written the customer name and ID to your database table.
4. Open the Log groups page of the CloudWatch console and select the log group for your function (/
aws/lambda/LambdaFunctionWithRDS).
5. In the Log streams section, choose the most recent log stream.

Your table should contain two customer records, one from each invocation of your function. In the
log stream, you should see messages similar to the following:

[INFO] 2023-02-14T19:06:43.873Z 45368126-3eee-47f7-88ca-3086ae6d3a77 The following items have been added to the database:
[INFO] 2023-02-14T19:06:43.873Z 45368126-3eee-47f7-88ca-3086ae6d3a77 (1021, 'Martha Rivera')
[INFO] 2023-02-14T19:06:43.873Z 45368126-3eee-47f7-88ca-3086ae6d3a77 (1054, 'Richard Roe')
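
If you prefer to run this check from a script, the following AWS SDK for Python (Boto3) sketch sends an equivalent message and then prints the newest log events. It assumes the queue name and log group used in this tutorial, and it waits briefly because the log events can take a few seconds to appear.

import json
import time

import boto3

sqs = boto3.client("sqs")
logs = boto3.client("logs")

queue_url = sqs.get_queue_url(QueueName="LambdaRDSQueue")["QueueUrl"]
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"CustID": 1054, "Name": "Richard Roe"}),
)

time.sleep(15)  # give Lambda time to process the message and flush its logs

# Read the most recently active log stream for the function.
log_group = "/aws/lambda/LambdaFunctionWithRDS"
streams = logs.describe_log_streams(
    logGroupName=log_group, orderBy="LastEventTime", descending=True, limit=1
)["logStreams"]
if streams:
    events = logs.get_log_events(
        logGroupName=log_group, logStreamName=streams[0]["logStreamName"]
    )["events"]
    for event in events:
        print(event["message"], end="")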


Clean up your resources


You can now delete the resources that you created for this tutorial, unless you want to retain them.
By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS
account.

To delete the Lambda function

1. Open the Functions page of the Lambda console.


2. Select the function that you created.
3. Choose Actions, Delete.
4. Choose Delete.

To delete the execution role

1. Open the Roles page of the IAM console.


2. Select the execution role that you created.
3. Choose Delete role.
4. Choose Yes, delete.

To delete the MySQL DB instance

1. Open the Databases page of the Amazon RDS console.


2. Select the database you created.
3. Choose Actions, Delete.
4. Clear the Create final snapshot check box.
5. Enter delete me in the text box.
6. Choose Delete.

To delete the Amazon SQS queue

1. Sign in to the AWS Management Console and open the Amazon SQS console at https://
console.aws.amazon.com/sqs/.
2. Select the queue you created.
3. Choose Delete.
4. Enter delete in the text box.
5. Choose Delete.
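
If you prefer to clean up with a script, the following AWS SDK for Python (Boto3) sketch deletes the same resources. It assumes the resource names used in this tutorial; the DB instance identifier is a placeholder, and the sketch detaches only managed policies from the role, so remove any inline policies separately before deleting it.

import boto3

lambda_client = boto3.client("lambda")
iam = boto3.client("iam")
rds = boto3.client("rds")
sqs = boto3.client("sqs")

# Delete the Lambda function.
lambda_client.delete_function(FunctionName="LambdaFunctionWithRDS")

# Detach the role's managed policies, then delete the execution role.
role_name = "lambda-vpc-sqs-role"
for policy in iam.list_attached_role_policies(RoleName=role_name)["AttachedPolicies"]:
    iam.detach_role_policy(RoleName=role_name, PolicyArn=policy["PolicyArn"])
iam.delete_role(RoleName=role_name)

# Delete the DB instance without taking a final snapshot.
rds.delete_db_instance(
    DBInstanceIdentifier="your-db-instance-identifier",  # placeholder: use your identifier
    SkipFinalSnapshot=True,
)

# Delete the SQS queue.
queue_url = sqs.get_queue_url(QueueName="LambdaRDSQueue")["QueueUrl"]
sqs.delete_queue(QueueUrl=queue_url)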


Amazon RDS tutorials and sample code
The AWS documentation includes several tutorials that guide you through common Amazon RDS use
cases. Many of these tutorials show you how to use Amazon RDS with other AWS services. In addition,
you can access sample code in GitHub.
Note
You can find more tutorials at the AWS Database Blog. For information about training, see AWS
Training and Certification.

Topics
• Tutorials in this guide (p. 283)
• Tutorials in other AWS guides (p. 284)
• AWS workshop and lab content portal for Amazon RDS PostgreSQL (p. 284)
• AWS workshop and lab content portal for Amazon RDS MySQL (p. 284)
• Tutorials and sample code in GitHub (p. 285)
• Using this service with an AWS SDK (p. 285)

Tutorials in this guide


The following tutorials in this guide show you how to perform common tasks with Amazon RDS:

• Tutorial: Create a VPC for use with a DB instance (IPv4 only) (p. 2706)

Learn how to include a DB instance in a virtual private cloud (VPC) based on the Amazon VPC service.
In this case, the VPC shares data with a web server that is running on an Amazon EC2 instance in the
same VPC.
• Tutorial: Create a VPC for use with a DB instance (dual-stack mode) (p. 2711)

Learn how to include a DB instance in a virtual private cloud (VPC) based on the Amazon VPC service.
In this case, the VPC shares data with an Amazon EC2 instance in the same VPC. In this tutorial, you
create the VPC for this scenario that works with a database running in dual-stack mode.
• Tutorial: Create a web server and an Amazon RDS DB instance (p. 249)

Learn how to install an Apache web server with PHP and create a MySQL database. The web server
runs on an Amazon EC2 instance using Amazon Linux, and the MySQL database is a MySQL DB
instance. Both the Amazon EC2 instance and the DB instance run in an Amazon VPC.
• Tutorial: Restore an Amazon RDS DB instance from a DB snapshot (p. 665)

Learn how to restore a DB instance from a DB snapshot.


• Tutorial: Using a Lambda function to access an Amazon RDS database (p. 273)

Learn how to create a Lambda function from the RDS console to access a database through a proxy,
create a table, add a few records, and retrieve the records from the table. You also learn how to invoke
the Lambda function and verify the query results.
• Tutorial: Use tags to specify which DB instances to stop (p. 466)

Learn how to use tags to specify which DB instances to stop.


• Tutorial: Log DB instance state changes using Amazon EventBridge (p. 871)


Learn how to log a DB instance state change using Amazon EventBridge and AWS Lambda.
• Tutorial: Creating an Amazon CloudWatch alarm for Multi-AZ DB cluster replica lag (p. 713)

Learn how to create a CloudWatch alarm that sends an Amazon SNS message when replica lag for a
Multi-AZ DB cluster has exceeded a threshold. An alarm watches the ReplicaLag metric over a time
period that you specify. The action is a notification sent to an Amazon SNS topic or Amazon EC2 Auto
Scaling policy.

Tutorials in other AWS guides


The following tutorials in other AWS guides show you how to perform common tasks with Amazon RDS:

• Tutorial: Rotating a Secret for an AWS Database in the AWS Secrets Manager User Guide

Learn how to create a secret for an AWS database and configure the secret to rotate on a schedule.
You trigger one rotation manually, and then confirm that the new version of the secret continues to
provide access.
• Tutorials and samples in the AWS Elastic Beanstalk Developer Guide

Learn how to deploy applications that use Amazon RDS databases with AWS Elastic Beanstalk.
• Using Data from an Amazon RDS Database to Create an Amazon ML Datasource in the Amazon
Machine Learning Developer Guide

Learn how to create an Amazon Machine Learning (Amazon ML) datasource object from data stored in
a MySQL DB instance.
• Manually Enabling Access to an Amazon RDS Instance in a VPC in the Amazon QuickSight User Guide

Learn how to enable Amazon QuickSight access to an Amazon RDS DB instance in a VPC.

AWS workshop and lab content portal for Amazon RDS PostgreSQL
The following collection of workshops and other hands-on content helps you to gain an understanding
of the Amazon RDS PostgreSQL features and capabilities:

• Creating a DB instance

Learn how to create the DB instance.


• Performance Monitoring with RDS Tools

Learn how to use AWS and SQL tools (CloudWatch, Enhanced Monitoring, slow query logs, Performance Insights, and PostgreSQL catalog views) to understand performance issues and identify ways to improve the performance of your database.

AWS workshop and lab content portal for Amazon RDS MySQL
The following collection of workshops and other hands-on content helps you to gain an understanding
of the Amazon RDS MySQL features and capabilities:


• Creating a DB instance

Learn how to create the DB instance.


• Using Performance Insights

Learn how to monitor and tune your DB instance using Performance Insights.

Tutorials and sample code in GitHub


The following tutorials and sample code in GitHub show you how to perform common tasks with
Amazon RDS:

• Creating the Amazon Relational Database Service item tracker

Learn how to create an application that tracks and reports on work items. This application uses
Amazon RDS, Amazon Simple Email Service, Elastic Beanstalk, and SDK for Java 2.x.

Using this service with an AWS SDK


AWS software development kits (SDKs) are available for many popular programming languages. Each
SDK provides an API, code examples, and documentation that make it easier for developers to build
applications in their preferred language.

SDK documentation – Code examples

• AWS SDK for C++ – AWS SDK for C++ code examples

• AWS SDK for Go – AWS SDK for Go code examples

• AWS SDK for Java – AWS SDK for Java code examples

• AWS SDK for JavaScript – AWS SDK for JavaScript code examples

• AWS SDK for Kotlin – AWS SDK for Kotlin code examples

• AWS SDK for .NET – AWS SDK for .NET code examples

• AWS SDK for PHP – AWS SDK for PHP code examples

• AWS SDK for Python (Boto3) – AWS SDK for Python (Boto3) code examples

• AWS SDK for Ruby – AWS SDK for Ruby code examples

• AWS SDK for Rust – AWS SDK for Rust code examples

• AWS SDK for Swift – AWS SDK for Swift code examples

For examples specific to this service, see Code examples for Amazon RDS using AWS SDKs (p. 2441).
Example availability
Can't find what you need? Request a code example by using the Provide feedback link at the
bottom of this page.


Best practices for Amazon RDS


Learn best practices for working with Amazon RDS. As new best practices are identified, we will keep this
section up to date.

Topics
• Amazon RDS basic operational guidelines (p. 286)
• DB instance RAM recommendations (p. 287)
• Using Enhanced Monitoring to identify operating system issues (p. 287)
• Using metrics to identify performance issues (p. 287)
• Tuning queries (p. 291)
• Best practices for working with MySQL (p. 292)
• Best practices for working with MariaDB (p. 293)
• Best practices for working with Oracle (p. 294)
• Best practices for working with PostgreSQL (p. 294)
• Best practices for working with SQL Server (p. 296)
• Working with DB parameter groups (p. 297)
• Best practices for automating DB instance creation (p. 297)
• Amazon RDS new features and best practices presentation video (p. 298)

Note
For common recommendations for Amazon RDS, see Viewing Amazon RDS
recommendations (p. 688).

Amazon RDS basic operational guidelines


The following are basic operational guidelines that everyone should follow when working with Amazon
RDS. Note that the Amazon RDS Service Level Agreement requires that you follow these guidelines:

• Use metrics to monitor your memory, CPU, replica lag, and storage usage. You can set up Amazon
CloudWatch to notify you when usage patterns change or when you approach the capacity of your
deployment. This way, you can maintain system performance and availability.
• Scale up your DB instance when you are approaching storage capacity limits. You should have
some buffer in storage and memory to accommodate unforeseen increases in demand from your
applications.
• Enable automatic backups and set the backup window to occur during the daily low in write IOPS.
That's when a backup is least disruptive to your database usage.
• If your database workload requires more I/O than you have provisioned, recovery after a failover
or database failure will be slow. To increase the I/O capacity of a DB instance, do any or all of the
following:
• Migrate to a different DB instance class with high I/O capacity.
• Convert from magnetic storage to either General Purpose or Provisioned IOPS storage, depending
on how much of an increase you need. For information on available storage types, see Amazon RDS
storage types (p. 101).


If you convert to Provisioned IOPS storage, make sure you also use a DB instance class that is
optimized for Provisioned IOPS. For information on Provisioned IOPS, see Provisioned IOPS SSD
storage (p. 104).
• If you are already using Provisioned IOPS storage, provision additional throughput capacity.
• If your client application is caching the Domain Name Service (DNS) data of your DB instances, set
a time-to-live (TTL) value of less than 30 seconds. The underlying IP address of a DB instance can
change after a failover. Caching the DNS data for an extended time can thus lead to connection
failures. Your application might try to connect to an IP address that's no longer in service.
• Test failover for your DB instance to understand how long the process takes for your particular use
case. Also test failover to ensure that the application that accesses your DB instance can automatically
connect to the new DB instance after failover occurs.

DB instance RAM recommendations


An Amazon RDS performance best practice is to allocate enough RAM so that your working set resides
almost completely in memory. The working set is the data and indexes that are frequently in use on your
instance. The more you use the DB instance, the more the working set will grow.

To tell if your working set is almost all in memory, check the ReadIOPS metric (using Amazon
CloudWatch) while the DB instance is under load. The value of ReadIOPS should be small and stable.
In some cases, scaling up the DB instance class to a class with more RAM results in a dramatic drop in
ReadIOPS. In these cases, your working set was not almost completely in memory. Continue to scale up
until ReadIOPS no longer drops dramatically after a scaling operation, or ReadIOPS is reduced to a very
small amount. For information on monitoring a DB instance's metrics, see Viewing metrics in the Amazon
RDS console (p. 696).
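
As a quick check, you can also pull recent ReadIOPS values with the AWS SDK for Python (Boto3), as in the following sketch. The DB instance identifier is a placeholder; substitute your own, and run the check while the instance is under a typical workload.

from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="ReadIOPS",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mydbinstance"}],  # placeholder
    StartTime=now - timedelta(hours=3),
    EndTime=now,
    Period=300,            # 5-minute data points
    Statistics=["Average"],
)

# Print the data points in time order; a small, stable ReadIOPS value suggests
# the working set fits in memory.
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))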

Using Enhanced Monitoring to identify operating system issues
When Enhanced Monitoring is enabled, Amazon RDS provides metrics in real time for the operating
system (OS) that your DB instance runs on. You can view the metrics for your DB instance using the
console. You can also consume the Enhanced Monitoring JSON output from Amazon CloudWatch Logs in
a monitoring system of your choice. For more information about Enhanced Monitoring, see Monitoring
OS metrics with Enhanced Monitoring (p. 797).

Using metrics to identify performance issues


To identify performance issues caused by insufficient resources and other common bottlenecks, you can
monitor the metrics available for your Amazon RDS DB instance.

Viewing performance metrics


You should monitor performance metrics on a regular basis to see the average, maximum, and minimum
values for a variety of time ranges. If you do so, you can identify when performance is degraded. You
can also set Amazon CloudWatch alarms for particular metric thresholds so you are alerted if they are
reached.


To troubleshoot performance issues, it's important to understand the baseline performance of the
system. When you set up a DB instance and run it with a typical workload, capture the average,
maximum, and minimum values of all performance metrics. Do so at a number of different intervals (for
example, one hour, 24 hours, one week, two weeks). This can give you an idea of what is normal. It helps
to get comparisons for both peak and off-peak hours of operation. You can then use this information to
identify when performance is dropping below standard levels.

If you use Multi-AZ DB clusters, monitor the time difference between the latest transaction on the writer
DB instance and the latest applied transaction on a reader DB instance. This difference is called replica
lag. For more information, see Replica lag and Multi-AZ DB clusters (p. 504).

You can view the combined Performance Insights and CloudWatch metrics in the Performance Insights
dashboard and monitor your DB instance. To use this monitoring view, Performance Insights must be
turned on for your DB instance. For information about this monitoring view, see Viewing combined
metrics in the Amazon RDS console (p. 699).

You can create a performance analysis report for a specific time period and view the insights identified
and the recommendations to resolve the issues. For more information see, Analyzing database
performance for a period of time (p. 750).

To view performance metrics

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose a DB instance.
3. Choose Monitoring.

The dashboard provides the performance metrics. The metrics default to show the information for
the last three hours.
4. Use the numbered buttons in the upper-right to page through the additional metrics, or adjust the
settings to see more metrics.
5. Choose a performance metric to adjust the time range in order to see data for other than the
current day. You can change the Statistic, Time Range, and Period values to adjust the information
displayed. For example, you might want to see the peak values for a metric for each day of the last
two weeks. If so, set Statistic to Maximum, Time Range to Last 2 Weeks, and Period to Day.

You can also view performance metrics using the CLI or API. For more information, see Viewing metrics in
the Amazon RDS console (p. 696).

To set a CloudWatch alarm

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose a DB instance.
3. Choose Logs & events.
4. In the CloudWatch alarms section, choose Create alarm.


5. For Send notifications, choose Yes, and for Send notifications to, choose New email or SMS topic.
6. For Topic name, enter a name for the notification, and for With these recipients, enter a comma-
separated list of email addresses and phone numbers.
7. For Metric, choose the alarm statistic and metric to set.
8. For Threshold, specify whether the metric must be greater than, less than, or equal to the threshold,
and specify the threshold value.
9. For Evaluation period, choose the evaluation period for the alarm. For consecutive period(s) of,
choose the period during which the threshold must have been reached in order to trigger the alarm.
10. For Name of alarm, enter a name for the alarm.
11. Choose Create Alarm.


The alarm appears in the CloudWatch alarms section.
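
You can also create an alarm programmatically with the AWS SDK for Python (Boto3). The following sketch creates a CPUUtilization alarm as an example; the DB instance identifier, threshold, and SNS topic ARN are placeholders to adjust for your workload.

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="mydbinstance-high-cpu",  # placeholder alarm name
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mydbinstance"}],
    Statistic="Average",
    Period=300,               # evaluate 5-minute averages
    EvaluationPeriods=3,      # alarm after 3 consecutive periods over the threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-west-2:123456789012:my-alarm-topic"],  # placeholder topic ARN
)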

Evaluating performance metrics


A DB instance has a number of different categories of metrics, and how to determine acceptable values
depends on the metric.

CPU

• CPU Utilization – Percentage of computer processing capacity used.

Memory

• Freeable Memory – How much RAM is available on the DB instance, in megabytes. The red line in the
Monitoring tab metrics is marked at 75% for CPU, Memory and Storage Metrics. If instance memory
consumption frequently crosses that line, then this indicates that you should check your workload or
upgrade your instance.
• Swap Usage – How much swap space is used by the DB instance, in megabytes.

Disk space

• Free Storage Space – How much disk space is not currently being used by the DB instance, in
megabytes.

Input/output operations

• Read IOPS, Write IOPS – The average number of disk read or write operations per second.
• Read Latency, Write Latency – The average time for a read or write operation in milliseconds.
• Read Throughput, Write Throughput – The average number of megabytes read from or written to disk
per second.
• Queue Depth – The number of I/O operations that are waiting to be written to or read from disk.

Network traffic

• Network Receive Throughput, Network Transmit Throughput – The rate of network traffic to and from
the DB instance in bytes per second.

Database connections

• DB Connections – The number of client sessions that are connected to the DB instance.

For more detailed individual descriptions of the performance metrics available, see Monitoring Amazon
RDS metrics with Amazon CloudWatch (p. 706).

Generally speaking, acceptable values for performance metrics depend on what your baseline looks
like and what your application is doing. Investigate consistent or trending variances from your baseline.
Advice about specific types of metrics follows:

• High CPU or RAM consumption – High values for CPU or RAM consumption might be appropriate. For
example, they might be so if they are in keeping with your goals for your application (like throughput
or concurrency) and are expected.


• Disk space consumption – Investigate disk space consumption if space used is consistently at or
above 85 percent of the total disk space. See if it is possible to delete data from the instance or archive
data to a different system to free up space.
• Network traffic – For network traffic, talk with your system administrator to understand what
expected throughput is for your domain network and internet connection. Investigate network traffic if
throughput is consistently lower than expected.
• Database connections – Consider constraining database connections if you see high numbers of
user connections in conjunction with decreases in instance performance and response time. The
best number of user connections for your DB instance will vary based on your instance class and the
complexity of the operations being performed. To determine the number of database connections,
associate your DB instance with a parameter group. In this group, set the User Connections parameter
to other than 0 (unlimited). You can either use an existing parameter group or create a new one. For
more information, see Working with parameter groups (p. 347).
• IOPS metrics – The expected values for IOPS metrics depend on disk specification and server
configuration, so use your baseline to know what is typical. Investigate if values are consistently
different than your baseline. For best IOPS performance, make sure your typical working set will fit
into memory to minimize read and write operations.

For issues with performance metrics, a first step to improve performance is to tune the most used and
most expensive queries. Tune them to see if doing so lowers the pressure on system resources. For more
information, see Tuning queries (p. 291).

If your queries are tuned and an issue persists, consider upgrading your Amazon RDS DB instance
classes (p. 11). You might upgrade it to one with more of the resource (CPU, RAM, disk space, network
bandwidth, I/O capacity) that is related to the issue.

Tuning queries
One of the best ways to improve DB instance performance is to tune your most commonly used
and most resource-intensive queries. Here, you tune them to make them less expensive to run. For
information on improving queries, use the following resources:

• MySQL – See Optimizing SELECT statements in the MySQL documentation. For additional query tuning
resources, see MySQL performance tuning and optimization resources.
• Oracle – See Database SQL Tuning Guide in the Oracle Database documentation.
• SQL Server – See Analyzing a query in the Microsoft documentation. You can also use the execution-,
index-, and I/O-related data management views (DMVs) described in System Dynamic Management
Views in the Microsoft documentation to troubleshoot SQL Server query issues.

A common aspect of query tuning is creating effective indexes. For potential index improvements
for your DB instance, see Database Engine Tuning Advisor in the Microsoft documentation. For
information on using Tuning Advisor on RDS for SQL Server, see Analyzing your database workload on
an Amazon RDS for SQL Server DB instance with Database Engine Tuning Advisor (p. 1605).
• PostgreSQL – See Using EXPLAIN in the PostgreSQL documentation to learn how to analyze a query
plan. You can use this information to modify a query or underlying tables in order to improve query
performance.

For information about how to specify joins in your query for the best performance, see Controlling the
planner with explicit JOIN clauses.
• MariaDB – See Query optimizations in the MariaDB documentation.


Best practices for working with MySQL


Both table sizes and number of tables in a MySQL database can affect performance.

Table size
Typically, operating system constraints on file sizes determine the effective maximum table size for
MySQL databases. So, the limits usually aren't determined by internal MySQL constraints.

On a MySQL DB instance, avoid tables in your database growing too large. Although the general storage
limit is 64 TiB, provisioned storage limits restrict the maximum size of a MySQL table file to 16 TiB.
Partition your large tables so that file sizes are well under the 16 TiB limit. This approach can also
improve performance and recovery time. For more information, see MySQL file size limits in Amazon
RDS (p. 1754).

Very large tables (greater than 100 GB in size) can negatively affect performance for both reads
and writes (including DML statements and especially DDL statements). Indexes on large tables
can significantly improve select performance, but they can also degrade the performance of DML
statements. DDL statements, such as ALTER TABLE, can be significantly slower for large tables
because those operations might completely rebuild a table in some cases. These DDL statements might
lock the tables for the duration of the operation.

The amount of memory required by MySQL for reads and writes depends on the tables involved in the
operations. It is a best practice to have at least enough RAM to hold the indexes of actively used
tables. To find the ten largest tables and indexes in a database, use the following query:

select table_schema, TABLE_NAME, dat, idx from
  (SELECT table_schema, TABLE_NAME,
          ( data_length ) / 1024 / 1024 as dat,
          ( index_length ) / 1024 / 1024 as idx
   FROM information_schema.TABLES
   order by 3 desc ) a
order by 3 desc
limit 10;

Number of tables
Your underlying file system might have a limit on the number of files that represent tables. However,
MySQL has no limit on the number of tables. Despite this, the total number of tables in the MySQL
InnoDB storage engine can contribute to the performance degradation, regardless of the size of those
tables. To limit the operating system impact, you can split the tables across multiple databases in the
same MySQL DB instance. Doing so might limit the number of files in a directory but won't solve the
overall problem.

When there is performance degradation because of a large number of tables (more than 10 thousand), it
is caused by MySQL working with storage files, including opening and closing them. To address this issue,
you can increase the size of the table_open_cache and table_definition_cache parameters.
However, increasing the values of those parameters might significantly increase the amount of memory
MySQL uses, and might even use all of the available memory. For more information, see How MySQL
Opens and Closes Tables in the MySQL documentation.

In addition, too many tables can significantly affect MySQL startup time. Both a clean shutdown and
restart and a crash recovery can be affected, especially in versions prior to MySQL 8.0.

We recommend having fewer than 10,000 tables total across all of the databases in a DB instance. For a
use case with a large number of tables in a MySQL database, see One Million Tables in MySQL 8.0.


Storage engine
The point-in-time restore and snapshot restore features of Amazon RDS for MySQL require a crash-
recoverable storage engine. These features are supported for the InnoDB storage engine only. Although
MySQL supports multiple storage engines with varying capabilities, not all of them are optimized for
crash recovery and data durability. For example, the MyISAM storage engine doesn't support reliable
crash recovery and might prevent a point-in-time restore or snapshot restore from working as intended.
This might result in lost or corrupt data when MySQL is restarted after a crash.

InnoDB is the recommended and supported storage engine for MySQL DB instances on Amazon RDS.
InnoDB instances can also be migrated to Aurora, while MyISAM instances can't be migrated. However,
MyISAM performs better than InnoDB if you require intense, full-text search capability. If you still choose
to use MyISAM with Amazon RDS, following the steps outlined in Automated backups with unsupported
MySQL storage engines (p. 599) can be helpful in certain scenarios for snapshot restore functionality.

If you want to convert existing MyISAM tables to InnoDB tables, you can use the process outlined in the
MySQL documentation. MyISAM and InnoDB have different strengths and weaknesses, so you should
fully evaluate the impact of making this switch on your applications before doing so.
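
As an illustration only, the following Python sketch (using the PyMySQL client, not the exact process in the MySQL documentation) finds MyISAM tables in a schema and converts each one with ALTER TABLE ... ENGINE=InnoDB. The connection settings are placeholders, and converting a large table can lock and rebuild it, so test this on a copy of your data first.

import pymysql

# Placeholder connection settings; use your DB instance endpoint and credentials.
conn = pymysql.connect(
    host="mydbinstance.example.us-east-1.rds.amazonaws.com",
    user="admin",
    password="your-password",
    database="mydatabase",
)

try:
    with conn.cursor() as cur:
        # List the MyISAM tables in the target schema.
        cur.execute(
            "SELECT TABLE_NAME FROM information_schema.TABLES "
            "WHERE TABLE_SCHEMA = %s AND ENGINE = 'MyISAM'",
            ("mydatabase",),
        )
        for (table_name,) in cur.fetchall():
            print(f"Converting {table_name} to InnoDB")
            cur.execute(f"ALTER TABLE `{table_name}` ENGINE=InnoDB")
finally:
    conn.close()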

In addition, Federated Storage Engine is currently not supported by Amazon RDS for MySQL.

Best practices for working with MariaDB


Both table sizes and number of tables in a MariaDB database can affect performance.

Table size
Typically, operating system constraints on file sizes determine the effective maximum table size for
MariaDB databases. So, the limits usually aren't determined by internal MariaDB constraints.

On a MariaDB DB instance, avoid tables in your database growing too large. Although the general
storage limit is 64 TiB, provisioned storage limits restrict the maximum size of a MariaDB table file to 16
TiB. Partition your large tables so that file sizes are well under the 16 TiB limit. This approach can also
improve performance and recovery time.

Very large tables (greater than 100 GB in size) can negatively affect performance for both reads
and writes (including DML statements and especially DDL statements). Indexes on large tables
can significantly improve select performance, but they can also degrade the performance of DML
statements. DDL statements, such as ALTER TABLE, can be significantly slower for large tables
because those operations might completely rebuild a table in some cases. These DDL statements might
lock the tables for the duration of the operation.

The amount of memory required by MariaDB for reads and writes depends on the tables involved in
the operations. It is a best practice to have at least enough RAM to hold the indexes of actively used
tables. To find the ten largest tables and indexes in a database, use the following query:

select table_schema, TABLE_NAME, dat, idx from
  (SELECT table_schema, TABLE_NAME,
          ( data_length ) / 1024 / 1024 as dat,
          ( index_length ) / 1024 / 1024 as idx
   FROM information_schema.TABLES
   order by 3 desc ) a
order by 3 desc
limit 10;


Number of tables
Your underlying file system might have a limit on the number of files that represent tables. However,
MariaDB has no limit on the number of tables. Despite this, the total number of tables in the MariaDB
InnoDB storage engine can contribute to the performance degradation, regardless of the size of those
tables. To limit the operating system impact, you can split the tables across multiple databases in the
same MariaDB DB instance. Doing so might limit the number of files in a directory but doesn’t solve the
overall problem.

When there is performance degradation because of a large number of tables (more than 10,000),
it's caused by MariaDB working with storage files. This work includes MariaDB opening and closing
storage files. To address this issue, you can increase the size of the table_open_cache and
table_definition_cache parameters. However, increasing the values of those parameters might
significantly increase the amount of memory MariaDB uses. It might even use all of the available
memory. For more information, see Optimizing table_open_cache in the MariaDB documentation.

In addition, too many tables can significantly affect MariaDB startup time. Both a clean shutdown and
restart and a crash recovery can be affected. We recommend having fewer than ten thousand tables total
across all of the databases in a DB instance.

Storage engine
The point-in-time restore and snapshot restore features of Amazon RDS for MariaDB require a crash-
recoverable storage engine. Although MariaDB supports multiple storage engines with varying
capabilities, not all of them are optimized for crash recovery and data durability. For example, although
Aria is a crash-safe replacement for MyISAM, it might still prevent a point-in-time restore or snapshot
restore from working as intended. This might result in lost or corrupt data when MariaDB is restarted
after a crash. InnoDB is the recommended and supported storage engine for MariaDB DB instances on
Amazon RDS. If you still choose to use Aria with Amazon RDS, following the steps outlined in Automated
backups with unsupported MariaDB storage engines (p. 600) can be helpful in certain scenarios for
snapshot restore functionality.

If you want to convert existing MyISAM tables to InnoDB tables, you can use the process outlined in the
MariaDB documentation. MyISAM and InnoDB have different strengths and weaknesses, so you should
fully evaluate the impact of making this switch on your applications before doing so.

Best practices for working with Oracle


For information about best practices for working with Amazon RDS for Oracle, see Best practices for
running Oracle database on Amazon Web Services.

A 2020 AWS virtual workshop included a presentation on running production Oracle databases on
Amazon RDS. A video of the presentation is available here.

Best practices for working with PostgreSQL


Two important areas where you can improve performance with RDS for PostgreSQL are loading data
into a DB instance and using the PostgreSQL autovacuum feature. The following sections cover some of
the practices we recommend for these areas.

For information on how Amazon RDS implements other common PostgreSQL DBA tasks, see Common
DBA tasks for Amazon RDS for PostgreSQL (p. 2270).


Loading data into a PostgreSQL DB instance


When loading data into an Amazon RDS for PostgreSQL DB instance, modify your DB instance settings
and your DB parameter group values. Set these to allow for the most efficient importing of data into
your DB instance.

Modify your DB instance settings to the following:

• Disable DB instance backups (set backup_retention to 0)


• Disable Multi-AZ

Modify your DB parameter group to include the following settings. Also, test the parameter settings to
find the most efficient settings for your DB instance.

• Increase the value of the maintenance_work_mem parameter. For more information about
PostgreSQL resource consumption parameters, see the PostgreSQL documentation.
• Increase the value of the max_wal_size and checkpoint_timeout parameters to reduce the
number of writes to the write-ahead log (WAL) log.
• Disable the synchronous_commit parameter.
• Disable the PostgreSQL autovacuum parameter.
• Make sure that none of the tables you're importing are unlogged. Data stored in unlogged tables can
be lost during a failover. For more information, see CREATE TABLE UNLOGGED.

Use the pg_dump -Fc (compressed) or pg_restore -j (parallel) commands with these settings.

After the load operation completes, return your DB instance and DB parameters to their normal settings.
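
For example, you might script the temporary changes with the AWS SDK for Python (Boto3), as in the following sketch. The instance identifier, parameter group name, and parameter values are placeholders; confirm the units and allowed values for your engine version, and revert the settings when the load finishes.

import boto3

rds = boto3.client("rds")

# Temporarily turn off automated backups and Multi-AZ on the instance (placeholder identifier).
rds.modify_db_instance(
    DBInstanceIdentifier="mydbinstance",
    BackupRetentionPeriod=0,
    MultiAZ=False,
    ApplyImmediately=True,
)

# Loosen WAL and maintenance settings in a custom parameter group for the load (placeholder values).
rds.modify_db_parameter_group(
    DBParameterGroupName="my-bulk-load-params",
    Parameters=[
        {"ParameterName": "maintenance_work_mem", "ParameterValue": "1048576", "ApplyMethod": "immediate"},
        {"ParameterName": "max_wal_size", "ParameterValue": "8192", "ApplyMethod": "immediate"},
        {"ParameterName": "checkpoint_timeout", "ParameterValue": "1800", "ApplyMethod": "immediate"},
        {"ParameterName": "synchronous_commit", "ParameterValue": "off", "ApplyMethod": "immediate"},
        {"ParameterName": "autovacuum", "ParameterValue": "0", "ApplyMethod": "immediate"},
    ],
)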

Working with the PostgreSQL autovacuum feature


We strongly recommend that you use the autovacuum feature for PostgreSQL databases to maintain
the health of your PostgreSQL DB instance. Autovacuum automates the execution of the VACUUM and
ANALYZE commands. Using autovacuum is required by PostgreSQL, not imposed by Amazon RDS, and its
use is critical to good performance. The feature is enabled by default for all new Amazon RDS for
PostgreSQL DB instances, and the related configuration parameters are appropriately set by default.

Your database administrator needs to know and understand this maintenance operation. For the
PostgreSQL documentation on autovacuum, see The Autovacuum Daemon.

Autovacuum is not a "resource free" operation, but it works in the background and yields to user
operations as much as possible. When enabled, autovacuum checks for tables that have had a large
number of updated or deleted tuples. It also protects against loss of very old data due to transaction ID
wraparound. For more information, see Preventing transaction ID wraparound failures.

Autovacuum should not be thought of as a high-overhead operation that can be reduced to gain better
performance. On the contrary, tables that have a high velocity of updates and deletes will quickly
deteriorate over time if autovacuum is not run.
Important
Not running autovacuum can result in an eventual required outage to perform a much more
intrusive vacuum operation. In some cases, an RDS for PostgreSQL DB instance might become
unavailable because of an over-conservative use of autovacuum. In these cases, the PostgreSQL
database shuts down to protect itself. At that point, Amazon RDS must perform a single-user-
mode full vacuum directly on the DB instance. This full vacuum can result in a multi-hour


outage. Thus, we strongly recommend that you do not turn off autovacuum, which is turned on
by default.

The autovacuum parameters determine when and how hard autovacuum works.
The autovacuum_vacuum_threshold and autovacuum_vacuum_scale_factor parameters
determine when autovacuum is run. The autovacuum_max_workers, autovacuum_naptime,
autovacuum_cost_limit, and autovacuum_cost_delay parameters determine how hard
autovacuum works. For more information about autovacuum, when it runs, and what parameters are
required, see Routine Vacuuming in the PostgreSQL documentation.

The following query shows the number of "dead" tuples in a table named table1:

SELECT relname, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_catalog.pg_stat_all_tables
WHERE n_dead_tup > 0 and relname = 'table1';

The results of the query will resemble the following:

 relname | n_dead_tup | last_vacuum | last_autovacuum
---------+------------+-------------+-----------------
 tasks   |   81430522 |             |
(1 row)

Amazon RDS for PostgreSQL best practices video


The 2020 AWS re:Invent conference included a presentation on new features and best practices for
working with PostgreSQL on Amazon RDS. A video of the presentation is available here.

Best practices for working with SQL Server


Best practices for a Multi-AZ deployment with a SQL Server DB instance include the following:

• Use Amazon RDS DB events to monitor failovers. For example, you can be notified by text message
or email when a DB instance fails over. For more information about Amazon RDS events, see Working
with Amazon RDS event notification (p. 855).
• If your application caches DNS values, set the time to live (TTL) to less than 30 seconds. Setting the
TTL this way is a good practice in case there is a failover. In a failover, the IP address might change
and the cached value might no longer be in service.
• We recommend that you do not enable the following modes because they turn off transaction logging,
which is required for Multi-AZ:
• Simple recovery mode
• Offline mode
• Read-only mode
• Test to determine how long it takes for your DB instance to fail over. Failover time can vary with
the type of database, the instance class, and the storage type you use. Also test your
application's ability to continue working if a failover occurs.
• To shorten failover time, do the following:
• Ensure that you have sufficient Provisioned IOPS allocated for your workload. Inadequate I/O can
lengthen failover times. Database recovery requires I/O.
• Use smaller transactions. Database recovery relies on transactions, so if you can break up large
transactions into multiple smaller transactions, your failover time should be shorter.


• Take into consideration that during a failover, there will be elevated latencies. As part of the failover
process, Amazon RDS automatically replicates your data to a new standby instance. This replication
means that new data is being committed to two different DB instances. So there might be some
latency until the standby DB instance has caught up to the new primary DB instance.
• Deploy your applications in all Availability Zones. If an Availability Zone does go down, your
applications in the other Availability Zones will still be available.

When working with a Multi-AZ deployment of SQL Server, remember that Amazon RDS creates replicas
for all SQL Server databases on your instance. If you don't want specific databases to have secondary
replicas, set up a separate DB instance that doesn't use Multi-AZ for those databases.

Amazon RDS for SQL Server best practices video


The 2019 AWS re:Invent conference included a presentation on new features and best practices for
working with SQL Server on Amazon RDS. A video of the presentation is available here.

Working with DB parameter groups


We recommend that you try out DB parameter group changes on a test DB instance before applying
parameter group changes to your production DB instances. Improperly setting DB engine parameters in a
DB parameter group can have unintended adverse effects, including degraded performance and system
instability. Always exercise caution when modifying DB engine parameters and back up your DB instance
before modifying a DB parameter group.

For information about backing up your DB instance, see Backing up and restoring (p. 590).

Best practices for automating DB instance creation


It’s an Amazon RDS best practice to create a DB instance with the preferred minor version of the
database engine. You can use the AWS CLI, Amazon RDS API, or AWS CloudFormation to automate DB
instance creation. When you use these methods, you can specify only the major version and Amazon RDS
automatically creates the instance with the preferred minor version. For example, if PostgreSQL 12.5 is
the preferred minor version, and if you specify version 12 with create-db-instance, the DB instance
will be version 12.5.

To determine the preferred minor version, you can run the describe-db-engine-versions command
with the --default-only option as shown in the following example.

aws rds describe-db-engine-versions --default-only --engine postgres

{
    "DBEngineVersions": [
        {
            "Engine": "postgres",
            "EngineVersion": "12.5",
            "DBParameterGroupFamily": "postgres12",
            "DBEngineDescription": "PostgreSQL",
            "DBEngineVersionDescription": "PostgreSQL 12.5-R1",
            ...some output truncated...
        }
    ]
}
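
The equivalent calls with the AWS SDK for Python (Boto3) look like the following sketch. The instance identifier, instance class, storage size, and credentials are placeholders.

import boto3

rds = boto3.client("rds")

# Find the preferred (default) minor version for the engine.
default_version = rds.describe_db_engine_versions(
    Engine="postgres", DefaultOnly=True
)["DBEngineVersions"][0]["EngineVersion"]
print("Preferred minor version:", default_version)

# Specifying only the major version lets Amazon RDS pick the preferred minor version.
rds.create_db_instance(
    DBInstanceIdentifier="mydbinstance",       # placeholder
    Engine="postgres",
    EngineVersion="12",                        # major version only
    DBInstanceClass="db.t3.micro",             # placeholder instance class
    AllocatedStorage=20,
    MasterUsername="postgres",
    MasterUserPassword="choose-a-strong-password",
)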

For information on creating DB instances programmatically, see the following resources:


• Using the AWS CLI – create-db-instance


• Using the Amazon RDS API – CreateDBInstance
• Using AWS CloudFormation – AWS::RDS::DBInstance

Amazon RDS new features and best practices presentation video
The 2019 AWS re:Invent conference included a presentation on new Amazon RDS features and best
practices for monitoring, analyzing, and tuning database performance using RDS. A video of the
presentation is available here.


Configuring an Amazon RDS DB instance
This section shows how to set up your Amazon RDS DB instance. Before creating a DB instance, decide
on the DB instance class that will run the DB instance. Also, decide where the DB instance will run by
choosing an AWS Region. Next, create the DB instance.

You can configure a DB instance with an option group and a DB parameter group.

• An option group specifies features, called options, that are available for a particular Amazon RDS DB
instance.
• A DB parameter group acts as a container for engine configuration values that are applied to one or
more DB instances.

The options and parameters that are available depend on the DB engine and DB engine version. You can
specify an option group and a DB parameter group when you create a DB instance. You can also modify a
DB instance to specify them.

Topics
• Creating an Amazon RDS DB instance (p. 300)
• Creating Amazon RDS resources with AWS CloudFormation (p. 324)
• Connecting to an Amazon RDS DB instance (p. 325)
• Working with option groups (p. 331)
• Working with parameter groups (p. 347)
• Creating an Amazon ElastiCache cluster using Amazon RDS DB instance settings (p. 374)


Creating an Amazon RDS DB instance


The basic building block of Amazon RDS is the DB instance, where you create your databases. You choose
the engine-specific characteristics of the DB instance when you create it. You also choose the storage
capacity, CPU, memory, and so on, of the AWS instance on which the database server runs.

Topics
• DB instance prerequisites (p. 300)
• Creating a DB instance (p. 303)
• Settings for DB instances (p. 308)

DB instance prerequisites
Important
Before you can create an Amazon RDS DB instance, you must complete the tasks in Setting up
for Amazon RDS (p. 174).

The following are prerequisites to complete before creating a DB instance.

Topics
• Configure the network for the DB instance (p. 300)
• Additional prerequisites (p. 303)

Configure the network for the DB instance


You can create an Amazon RDS DB instance only in a virtual private cloud (VPC) based on the Amazon
VPC service. Also, it must be in an AWS Region that has at least two Availability Zones. The DB subnet
group that you choose for the DB instance must cover at least two Availability Zones. This configuration
ensures that you can configure a Multi-AZ deployment when you create the DB instance or easily move
to one in the future.

To set up connectivity between your new DB instance and an Amazon EC2 instance in the same VPC,
do so when you create the DB instance. To connect to your DB instance from resources other than EC2
instances in the same VPC, configure the network connections manually.

Topics
• Configure automatic network connectivity with an EC2 instance (p. 300)
• Configure the network manually (p. 303)

Configure automatic network connectivity with an EC2 instance


When you create an RDS DB instance, you can use the AWS Management Console to set up connectivity
between an EC2 instance and the new DB instance. When you do so, RDS configures your VPC and
network settings automatically. The DB instance is created in the same VPC as the EC2 instance so that
the EC2 instance can access the DB instance.

The following are requirements for connecting an EC2 instance with the DB instance:

• The EC2 instance must exist in the AWS Region before you create the DB instance.

If no EC2 instances exist in the AWS Region, the console provides a link to create one.
• The user who is creating the DB instance must have permissions to perform the following operations:
• ec2:AssociateRouteTable


• ec2:AuthorizeSecurityGroupEgress
• ec2:AuthorizeSecurityGroupIngress
• ec2:CreateRouteTable
• ec2:CreateSubnet
• ec2:CreateSecurityGroup
• ec2:DescribeInstances
• ec2:DescribeNetworkInterfaces
• ec2:DescribeRouteTables
• ec2:DescribeSecurityGroups
• ec2:DescribeSubnets
• ec2:ModifyNetworkInterfaceAttribute
• ec2:RevokeSecurityGroupEgress

Using this option creates a private DB instance. The DB instance uses a DB subnet group with only private
subnets to restrict access to resources within the VPC.

To connect an EC2 instance to the DB instance, choose Connect to an EC2 compute resource in the
Connectivity section on the Create database page.

When you choose Connect to an EC2 compute resource, RDS sets the following options automatically.
You can't change these settings unless you choose not to set up connectivity with an EC2 instance by
choosing Don't connect to an EC2 compute resource.

Console option – Automatic setting

• Network type – RDS sets network type to IPv4. Currently, dual-stack mode isn't supported when you
set up a connection between an EC2 instance and the DB instance.

• Virtual Private Cloud (VPC) – RDS sets the VPC to the one associated with the EC2 instance.

• DB subnet group – RDS requires a DB subnet group with a private subnet in the same Availability Zone
as the EC2 instance. If a DB subnet group that meets this requirement exists, then RDS uses the
existing DB subnet group. By default, this option is set to Automatic setup.

  When you choose Automatic setup and there is no DB subnet group that meets this requirement, the
following action happens. RDS uses three available private subnets in three Availability Zones where
one of the Availability Zones is the same as the EC2 instance. If a private subnet isn't available in an
Availability Zone, RDS creates a private subnet in the Availability Zone. Then RDS creates the DB
subnet group.

  When a private subnet is available, RDS uses the route table associated with the subnet and adds any
subnets it creates to this route table. When no private subnet is available, RDS creates a route table
without internet gateway access and adds the subnets it creates to the route table.

  RDS also allows you to use existing DB subnet groups. Select Choose existing if you want to use an
existing DB subnet group of your choice.

• Public access – RDS chooses No so that the DB instance isn't publicly accessible.

  For security, it is a best practice to keep the database private and make sure it isn't accessible from
the internet.

• VPC security group (firewall) – RDS creates a new security group that is associated with the DB
instance. The security group is named rds-ec2-n, where n is a number. This security group includes an
inbound rule with the EC2 VPC security group (firewall) as the source. This security group that is
associated with the DB instance allows the EC2 instance to access the DB instance.

  RDS also creates a new security group that is associated with the EC2 instance. The security group is
named ec2-rds-n, where n is a number. This security group includes an outbound rule with the VPC
security group of the DB instance as the source. This security group allows the EC2 instance to send
traffic to the DB instance.

  You can add another new security group by choosing Create new and typing the name of the new
security group.

  You can add existing security groups by choosing Choose existing and selecting security groups to
add.

• Availability Zone – When you choose Single DB instance in Availability & durability (Single-AZ
deployment), RDS chooses the Availability Zone of the EC2 instance.

  When you choose Multi-AZ DB instance in Availability & durability (Multi-AZ DB instance
deployment), RDS chooses the Availability Zone of the EC2 instance for one DB instance in the
deployment. RDS randomly chooses a different Availability Zone for the other DB instance. Either the
primary DB instance or the standby replica is created in the same Availability Zone as the EC2
instance. When you choose Multi-AZ DB instance, there is the possibility of cross Availability Zone
costs if the DB instance and EC2 instance are in different Availability Zones.

For more information about these settings, see Settings for DB instances (p. 308).

If you change these settings after the DB instance is created, the changes might affect the connection
between the EC2 instance and the DB instance.


Configure the network manually


To connect to your DB instance from resources other than EC2 instances in the same VPC, configure the
network connections manually. If you use the AWS Management Console to create your DB instance, you
can have Amazon RDS automatically create a VPC for you. Or you can use an existing VPC or create a
new VPC for your DB instance. With any approach, your VPC requires at least one subnet in each of at
least two Availability Zones for use with an RDS DB instance.

By default, Amazon RDS chooses an Availability Zone for the DB instance automatically. To choose
a specific Availability Zone, change the Availability & durability setting to Single DB
instance. Doing so exposes an Availability Zone setting that lets you choose from among the Availability
Zones in your VPC. However, if you choose a Multi-AZ deployment, RDS chooses the Availability Zone of
the primary or writer DB instance automatically, and the Availability Zone setting doesn't appear.

In some cases, you might not have a default VPC or haven't created a VPC. In these cases, you can have
Amazon RDS automatically create a VPC for you when you create a DB instance using the console.
Otherwise, do the following:

• Create a VPC with at least one subnet in each of at least two of the Availability Zones in the AWS
Region where you want to deploy your DB instance. For more information, see Working with a DB
instance in a VPC (p. 2689) and Tutorial: Create a VPC for use with a DB instance (IPv4 only) (p. 2706).
• Specify a VPC security group that authorizes connections to your DB instance. For more information,
see Provide access to your DB instance in your VPC by creating a security group (p. 177) and
Controlling access with security groups (p. 2680).
• Specify an RDS DB subnet group that defines at least two subnets in the VPC that can be used by the
DB instance. For more information, see Working with DB subnet groups (p. 2689).

If you want to connect to a resource that isn't in the same VPC as the DB instance, see the appropriate
scenarios in Scenarios for accessing a DB instance in a VPC (p. 2701).

Additional prerequisites
Before you create your DB instance, consider the following additional prerequisites:

• If you are connecting to AWS using AWS Identity and Access Management (IAM) credentials, your AWS
account must have certain IAM policies. These grant the permissions required to perform Amazon RDS
operations. For more information, see Identity and access management for Amazon RDS (p. 2606).

To use IAM to access the RDS console, sign in to the AWS Management Console with your IAM user
credentials. Then go to the Amazon RDS console at https://console.aws.amazon.com/rds/.
• To tailor the configuration parameters for your DB instance, specify a DB parameter group with the
required parameter settings. For information about creating or modifying a DB parameter group, see
Working with parameter groups (p. 347).
• Determine the TCP/IP port number to specify for your DB instance. The firewalls at some companies
block connections to the default ports for RDS DB instances. If your company firewall blocks the
default port, choose another port for your DB instance.

Creating a DB instance
You can create an Amazon RDS DB instance using the AWS Management Console, the AWS CLI, or the
RDS API.

Console
You can create a DB instance by using the AWS Management Console with Easy create enabled or
not enabled. With Easy create enabled, you specify only the DB engine type, DB instance size, and DB
instance identifier. Easy create uses the default setting for other configuration options. With Easy create
not enabled, you specify more configuration options when you create a database, including ones for
availability, security, backups, and maintenance.
Note
In the following procedure, Standard create is enabled, and Easy create isn't enabled. This
procedure uses Microsoft SQL Server as an example.
For examples that use Easy create to walk you through creating and connecting to sample DB
instances for each engine, see Getting started with Amazon RDS (p. 180).

To create a DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region in which you want to
create the DB instance.
3. In the navigation pane, choose Databases.
4. Choose Create database, then choose Standard create.
5. For Engine type, choose MariaDB, Microsoft SQL Server, MySQL, Oracle, or PostgreSQL.

Microsoft SQL Server is shown here.


6. For Database management type, if you're using Oracle or SQL Server, choose Amazon RDS or
Amazon RDS Custom.

Amazon RDS is shown here. For more information on RDS Custom, see Working with Amazon RDS
Custom (p. 978).
7. For Edition, if you're using Oracle or SQL Server, choose the DB engine edition that you want to use.

MySQL has only one option for the edition, and MariaDB and PostgreSQL have none.
8. For Version, choose the engine version.
9. In Templates, choose the template that matches your use case. If you choose Production, the
following are preselected in a later step:

• Multi-AZ failover option


• Provisioned IOPS SSD (io1) storage option


• Enable deletion protection option

We recommend these features for any production environment.


Note
Template choices vary by edition.
10. To enter your master password, do the following:

a. In the Settings section, open Credential Settings.


b. If you want to specify a password, clear the Auto generate a password check box if it is
selected.
c. (Optional) Change the Master username value.
d. Enter the same password in Master password and Confirm password.
11. (Optional) Set up a connection to a compute resource for this DB instance.

You can configure connectivity between an Amazon EC2 instance and the new DB instance during
DB instance creation. For more information, see Configure automatic network connectivity with an
EC2 instance (p. 300).
12. In the Connectivity section under VPC security group (firewall), if you select Create new, a VPC
security group is created with an inbound rule that allows your local computer's IP address to access
the database.
13. For the remaining sections, specify your DB instance settings. For information about each setting,
see Settings for DB instances (p. 308).
14. Choose Create database.

If you chose to use an automatically generated password, the View credential details button
appears on the Databases page.

To view the master user name and password for the DB instance, choose View credential details.

To connect to the DB instance as the master user, use the user name and password that appear.
Important
You can't view the master user password again. If you don't record it, you might have to
change it. If you need to change the master user password after the DB instance is available,
modify the DB instance to do so. For more information about modifying a DB instance, see
Modifying an Amazon RDS DB instance (p. 401).
15. For Databases, choose the name of the new DB instance.

On the RDS console, the details for the new DB instance appear. The DB instance has a status of
Creating until the DB instance is created and ready for use. When the state changes to Available,
you can connect to the DB instance. Depending on the DB instance class and storage allocated, it can
take several minutes for the new instance to be available.
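
If you prefer to wait from a terminal instead of the console, you can poll the instance status with the AWS CLI. The following sketch uses a placeholder DB instance identifier and returns when the instance reaches the available state:

aws rds wait db-instance-available \
    --db-instance-identifier mydbinstance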


AWS CLI
To create a DB instance by using the AWS CLI, call the create-db-instance command with the following
parameters:

• --db-instance-identifier
• --db-instance-class
• --vpc-security-group-ids
• --db-subnet-group-name
• --engine
• --master-username
• --master-user-password
• --allocated-storage
• --backup-retention-period

For information about each setting, see Settings for DB instances (p. 308).

This example uses Microsoft SQL Server.

Example

For Linux, macOS, or Unix:

aws rds create-db-instance \
--engine sqlserver-se \
--db-instance-identifier mymsftsqlserver \
--allocated-storage 250 \
--db-instance-class db.t3.large \
--vpc-security-group-ids mysecuritygroup \
--db-subnet-group-name mydbsubnetgroup \
--master-username masterawsuser \
--manage-master-user-password \
--backup-retention-period 3

For Windows:

aws rds create-db-instance ^
--engine sqlserver-se ^
--db-instance-identifier mydbinstance ^
--allocated-storage 250 ^
--db-instance-class db.t3.large ^
--vpc-security-group-ids mysecuritygroup ^
--db-subnet-group-name mydbsubnetgroup ^
--master-username masterawsuser ^
--manage-master-user-password ^
--backup-retention-period 3

This command produces output similar to the following.

DBINSTANCE mydbinstance db.t3.large sqlserver-se 250 sa creating 3 **** n 10.50.2789
SECGROUP default active
PARAMGRP default.sqlserver-se-14 in-sync

RDS API
To create a DB instance by using the Amazon RDS API, call the CreateDBInstance operation.

For information about each setting, see Settings for DB instances (p. 308).

Settings for DB instances


In the following table, you can find details about settings that you choose when you create a DB
instance. The table also shows the DB engines for which each setting is supported.

You can create a DB instance using the console, the create-db-instance CLI command, or the
CreateDBInstance RDS API operation.

For each console setting, the list below gives the setting description, the corresponding CLI option and RDS API parameter, and the DB engines that support the setting.

Allocated storage
    The amount of storage to allocate for your DB instance (in gibibytes). In some cases, allocating a higher amount of storage for your DB instance than the size of your database can improve I/O performance. For more information, see Amazon RDS DB instance storage (p. 101).
    CLI option: --allocated-storage
    RDS API parameter: AllocatedStorage
    Supported DB engines: All

Architecture settings
    The architecture of the database: CDB (single-tenant) or non-CDB. Oracle Database 21c uses CDB architecture only. Oracle Database 19c can use either CDB or non-CDB architecture. Releases lower than Oracle Database 19c use non-CDB only. If you choose Use multitenant architecture, RDS for Oracle creates a container database (CDB). This CDB contains one pluggable database (PDB). If you don't choose this option, RDS for Oracle creates a non-CDB. A non-CDB uses the traditional Oracle architecture. For more information, see Overview of RDS for Oracle CDBs (p. 1840).
    CLI option: --engine oracle-ee-cdb (multitenant), --engine oracle-se2-cdb (multitenant), --engine oracle-ee (traditional), --engine oracle-se2 (traditional)
    RDS API parameter: Engine
    Supported DB engines: Oracle

Auto minor version upgrade
    Enable auto minor version upgrade to enable your DB instance to receive preferred minor DB engine version upgrades automatically when they become available. Amazon RDS performs automatic minor version upgrades in the maintenance window. For more information, see Automatically upgrading the minor engine version (p. 431).
    CLI option: --auto-minor-version-upgrade | --no-auto-minor-version-upgrade
    RDS API parameter: AutoMinorVersionUpgrade
    Supported DB engines: All

Availability zone
    The Availability Zone for your DB instance. Use the default value of No Preference unless you want to specify an Availability Zone. For more information, see Regions, Availability Zones, and Local Zones (p. 110).
    CLI option: --availability-zone
    RDS API parameter: AvailabilityZone
    Supported DB engines: All

AWS KMS key
    Only available if Encryption is set to Enable encryption. Choose the AWS KMS key to use for encrypting this DB instance. For more information, see Encrypting Amazon RDS resources (p. 2586).
    CLI option: --kms-key-id
    RDS API parameter: KmsKeyId
    Supported DB engines: All

Backup replication
    Choose Enable replication in another AWS Region to create backups in an additional Region for disaster recovery. Then choose the Destination Region for the additional backups.
    CLI option and RDS API parameter: Not available when creating a DB instance. For information on enabling cross-Region backups using the AWS CLI or RDS API, see Enabling cross-Region automated backups (p. 604).
    Supported DB engines: Oracle, PostgreSQL, SQL Server

Backup retention period
    The number of days that you want automatic backups of your DB instance to be retained. For any nontrivial DB instance, set this value to 1 or greater. For more information, see Working with backups (p. 591).
    CLI option: --backup-retention-period
    RDS API parameter: BackupRetentionPeriod
    Supported DB engines: All

Backup target
    Choose AWS Cloud to store automated backups and manual snapshots in the parent AWS Region. Choose Outposts (on-premises) to store them locally on your Outpost. This option setting applies only to RDS on Outposts. For more information, see Creating DB instances for Amazon RDS on AWS Outposts (p. 1189).
    CLI option: --backup-target
    RDS API parameter: BackupTarget
    Supported DB engines: MySQL, PostgreSQL, SQL Server

Backup window
    The time period during which Amazon RDS automatically takes a backup of your DB instance. Unless you have a specific time that you want to have your database backed up, use the default of No Preference. For more information, see Working with backups (p. 591).
    CLI option: --preferred-backup-window
    RDS API parameter: PreferredBackupWindow
    Supported DB engines: All

Certificate authority
    The certificate authority (CA) for the server certificate used by the DB instance. For more information, see Using SSL/TLS to encrypt a connection to a DB instance (p. 2591).
    CLI option: --ca-certificate-identifier
    RDS API parameter: CACertificateIdentifier
    Supported DB engines: All

Character set
    The character set for your DB instance. The default value of AL32UTF8 for the DB character set is for the Unicode 5.0 UTF-8 Universal character set. You can't change the DB character set after you create the DB instance. In a single-tenant configuration, a non-default DB character set affects only the PDB, not the CDB. For more information, see Overview of RDS for Oracle CDBs (p. 1840). The DB character set is different from the national character set, which is called the NCHAR character set. Unlike the DB character set, the NCHAR character set specifies the encoding for NCHAR data types (NCHAR, NVARCHAR2, and NCLOB) columns without affecting database metadata. For more information, see RDS for Oracle character sets (p. 1801).
    CLI option: --character-set-name
    RDS API parameter: CharacterSetName
    Supported DB engines: Oracle

Collation
    A server-level collation for your DB instance. For more information, see Server-level collation for Microsoft SQL Server (p. 1607).
    CLI option: --character-set-name
    RDS API parameter: CharacterSetName
    Supported DB engines: SQL Server

Copy tags to snapshots
    This option copies any DB instance tags to a DB snapshot when you create a snapshot. For more information, see Tagging Amazon RDS resources (p. 461).
    CLI option: --copy-tags-to-snapshot | --no-copy-tags-to-snapshot
    RDS API parameter: CopyTagsToSnapshot
    Supported DB engines: All

Database authentication
    The database authentication option that you want to use. Choose Password authentication to authenticate database users with database passwords only. Choose Password and IAM DB authentication to authenticate database users with database passwords and user credentials through users and roles. For more information, see IAM database authentication for MariaDB, MySQL, and PostgreSQL (p. 2642). This option is only supported for MySQL and PostgreSQL. Choose Password and Kerberos authentication to authenticate database users with database passwords and Kerberos authentication through an AWS Managed Microsoft AD created with AWS Directory Service. Next, choose the directory or choose Create a new Directory. For more information, see one of the following: Using Kerberos authentication for MySQL (p. 1645), Configuring Kerberos authentication for Amazon RDS for Oracle (p. 1819), or Using Kerberos authentication with Amazon RDS for PostgreSQL (p. 2181).
    CLI options: --enable-iam-database-authentication | --no-enable-iam-database-authentication (IAM); --domain and --domain-iam-role-name (Kerberos)
    RDS API parameters: EnableIAMDatabaseAuthentication (IAM); Domain and DomainIAMRoleName (Kerberos)
    Supported DB engines: Varies by authentication type

Database management type
    Choose Amazon RDS if you don't need to customize your environment. Choose Amazon RDS Custom if you want to customize the database, OS, and infrastructure. For more information, see Working with Amazon RDS Custom (p. 978).
    CLI option and RDS API parameter: For the CLI and API, you specify the database engine type.
    Supported DB engines: Oracle, SQL Server

Database port
    The port that you want to access the DB instance through. The default port is shown. Note: The firewalls at some companies block connections to the default MariaDB, MySQL, and PostgreSQL ports. If your company firewall blocks the default port, enter another port for your DB instance.
    CLI option: --port
    RDS API parameter: Port
    Supported DB engines: All

DB engine version
    The version of database engine that you want to use.
    CLI option: --engine-version
    RDS API parameter: EngineVersion
    Supported DB engines: All

DB instance class
    The configuration for your DB instance. For example, a db.t3.small DB instance class has 2 GiB memory, 2 vCPUs, 1 virtual core, a variable ECU, and a moderate I/O capacity. If possible, choose a DB instance class large enough that a typical query working set can be held in memory. When working sets are held in memory, the system can avoid writing to disk, which improves performance. For more information, see DB instance classes (p. 11). In RDS for Oracle, you can select Include additional memory configurations. These configurations are optimized for a high ratio of memory to vCPU. For example, db.r5.6xlarge.tpc2.mem4x is a db.r5.8x DB instance that has 2 threads per core (tpc2) and 4x the memory of a standard db.r5.6xlarge DB instance. For more information, see RDS for Oracle instance classes (p. 1796).
    CLI option: --db-instance-class
    RDS API parameter: DBInstanceClass
    Supported DB engines: All

DB instance identifier
    The name for your DB instance. Name your DB instances in the same way that you name your on-premises servers. Your DB instance identifier can contain up to 63 alphanumeric characters, and must be unique for your account in the AWS Region you chose.
    CLI option: --db-instance-identifier
    RDS API parameter: DBInstanceIdentifier
    Supported DB engines: All

DB parameter group
    A parameter group for your DB instance. You can choose the default parameter group, or you can create a custom parameter group. For more information, see Working with parameter groups (p. 347).
    CLI option: --db-parameter-group-name
    RDS API parameter: DBParameterGroupName
    Supported DB engines: All

DB subnet group
    The DB subnet group you want to use for the DB cluster. Select Choose existing to use an existing DB subnet group. Then choose the required subnet group from the Existing DB subnet groups dropdown list. Choose Automatic setup to let RDS select a compatible DB subnet group. If none exist, RDS creates a new subnet group for your cluster. For more information, see Working with DB subnet groups (p. 2689).
    CLI option: --db-subnet-group-name
    RDS API parameter: DBSubnetGroupName
    Supported DB engines: All

Deletion protection
    Enable deletion protection to prevent your DB instance from being deleted. If you create a production DB instance with the AWS Management Console, deletion protection is enabled by default. For more information, see Deleting a DB instance (p. 489).
    CLI option: --deletion-protection | --no-deletion-protection
    RDS API parameter: DeletionProtection
    Supported DB engines: All

Encryption
    Enable Encryption to enable encryption at rest for this DB instance. For more information, see Encrypting Amazon RDS resources (p. 2586).
    CLI option: --storage-encrypted | --no-storage-encrypted
    RDS API parameter: StorageEncrypted
    Supported DB engines: All

Enhanced Monitoring
    Enable enhanced monitoring to enable gathering metrics in real time for the operating system that your DB instance runs on. For more information, see Monitoring OS metrics with Enhanced Monitoring (p. 797).
    CLI options: --monitoring-interval and --monitoring-role-arn
    RDS API parameters: MonitoringInterval and MonitoringRoleArn
    Supported DB engines: All

Engine type
    Choose the database engine to be used for this DB instance.
    CLI option: --engine
    RDS API parameter: Engine
    Supported DB engines: All

Initial database name
    The name for the database on your DB instance. If you don't provide a name, Amazon RDS doesn't create a database on the DB instance (except for Oracle and PostgreSQL). The name can't be a word reserved by the database engine, and has other constraints depending on the DB engine.
    MariaDB and MySQL: It must contain 1–64 alphanumeric characters.
    Oracle: It must contain 1–8 alphanumeric characters. It can't be NULL. The default value is ORCL. It must begin with a letter.
    PostgreSQL: It must contain 1–63 alphanumeric characters. It must begin with a letter or an underscore. Subsequent characters can be letters, underscores, or digits (0-9). The initial database name is postgres.
    CLI option: --db-name
    RDS API parameter: DBName
    Supported DB engines: All except SQL Server

License
    Valid values for the license model: general-public-license for MariaDB, license-included for Microsoft SQL Server, general-public-license for MySQL, license-included or bring-your-own-license for Oracle, and postgresql-license for PostgreSQL.
    CLI option: --license-model
    RDS API parameter: LicenseModel
    Supported DB engines: All

Log exports
    The types of database log files to publish to Amazon CloudWatch Logs. For more information, see Publishing database logs to Amazon CloudWatch Logs (p. 898).
    CLI option: --enable-cloudwatch-logs-exports
    RDS API parameter: EnableCloudwatchLogsExports
    Supported DB engines: All

Maintenance window
    The 30-minute window in which pending modifications to your DB instance are applied. If the time period doesn't matter, choose No Preference. For more information, see The Amazon RDS maintenance window (p. 423).
    CLI option: --preferred-maintenance-window
    RDS API parameter: PreferredMaintenanceWindow
    Supported DB engines: All

Manage master credentials in AWS Secrets Manager
    Select Manage master credentials in AWS Secrets Manager to manage the master user password in a secret in Secrets Manager. Optionally, choose a KMS key to use to protect the secret. Choose from the KMS keys in your account, or enter the key from a different account. For more information, see Password management with Amazon RDS and AWS Secrets Manager (p. 2568).
    CLI options: --manage-master-user-password | --no-manage-master-user-password and --master-user-secret-kms-key-id
    RDS API parameters: ManageMasterUserPassword and MasterUserSecretKmsKeyId
    Supported DB engines: All

Master password
    The password for your master user account. The password has the following number of printable ASCII characters (excluding /, ", a space, and @) depending on the DB engine: Oracle: 8–30; MariaDB and MySQL: 8–41; SQL Server and PostgreSQL: 8–128.
    CLI option: --master-user-password
    RDS API parameter: MasterUserPassword
    Supported DB engines: All

Master username
    The name that you use as the master user name to log on to your DB instance with all database privileges. It can contain 1–16 alphanumeric characters and underscores. Its first character must be a letter. It can't be a word reserved by the database engine. You can't change the master user name after the DB instance is created. For more information on privileges granted to the master user, see Master user account privileges (p. 2682).
    CLI option: --master-username
    RDS API parameter: MasterUsername
    Supported DB engines: All

Microsoft SQL Server Windows Authentication
    Enable Microsoft SQL Server Windows authentication, then Browse Directory to choose the directory where you want to allow authorized domain users to authenticate with this SQL Server instance using Windows Authentication.
    CLI options: --domain and --domain-iam-role-name
    RDS API parameters: Domain and DomainIAMRoleName
    Supported DB engines: SQL Server

Multi-AZ deployment
    Create a standby instance to create a passive secondary replica of your DB instance in another Availability Zone for failover support. We recommend Multi-AZ for production workloads to maintain high availability. For development and testing, you can choose Do not create a standby instance. For more information, see Configuring and managing a Multi-AZ deployment (p. 492).
    CLI option: --multi-az | --no-multi-az
    RDS API parameter: MultiAZ
    Supported DB engines: All

National character set (NCHAR)
    The national character set for your DB instance, commonly called the NCHAR character set. You can set the national character set to either AL16UTF16 (default) or UTF-8. You can't change the national character set after you create the DB instance. The national character set is different from the DB character set. Unlike the DB character set, the national character set specifies the encoding only for NCHAR data types (NCHAR, NVARCHAR2, and NCLOB) columns without affecting database metadata. For more information, see RDS for Oracle character sets (p. 1801).
    CLI option: --nchar-character-set-name
    RDS API parameter: NcharCharacterSetName
    Supported DB engines: Oracle

Network type
    The IP addressing protocols supported by the DB instance. IPv4 (the default) to specify that resources can communicate with the DB instance only over the Internet Protocol version 4 (IPv4) addressing protocol. Dual-stack mode to specify that resources can communicate with the DB instance over IPv4, Internet Protocol version 6 (IPv6), or both. Use dual-stack mode if you have any resources that must communicate with your DB instance over the IPv6 addressing protocol. Also, make sure that you associate an IPv6 CIDR block with all subnets in the DB subnet group that you specify. For more information, see Amazon RDS IP addressing (p. 2690).
    CLI option: --network-type
    RDS API parameter: NetworkType
    Supported DB engines: All

Option group
    An option group for your DB instance. You can choose the default option group or you can create a custom option group. For more information, see Working with option groups (p. 331).
    CLI option: --option-group-name
    RDS API parameter: OptionGroupName
    Supported DB engines: All

Performance Insights
    Enable Performance Insights to monitor your DB instance load so that you can analyze and troubleshoot your database performance. Choose a retention period to determine how much Performance Insights data history to keep. The retention setting in the free tier is Default (7 days). To retain your performance data for longer, specify 1–24 months. For more information about retention periods, see Pricing and data retention for Performance Insights (p. 726). Choose a KMS key to use to protect the key used to encrypt this database volume. Choose from the KMS keys in your account, or enter the key from a different account. For more information, see Monitoring DB load with Performance Insights on Amazon RDS (p. 720).
    CLI options: --enable-performance-insights | --no-enable-performance-insights, --performance-insights-retention-period, and --performance-insights-kms-key-id
    RDS API parameters: EnablePerformanceInsights, PerformanceInsightsRetentionPeriod, and PerformanceInsightsKMSKeyId
    Supported DB engines: All

Provisioned IOPS
    The Provisioned IOPS (I/O operations per second) value for the DB instance. This setting is available only if you choose General purpose SSD (gp3) or Provisioned IOPS SSD (io1) for Storage type. For more information, see Amazon RDS DB instance storage (p. 101).
    CLI option: --iops
    RDS API parameter: Iops
    Supported DB engines: All

Public access
    Yes to give the DB instance a public IP address, meaning that it's accessible outside the VPC. To be publicly accessible, the DB instance also has to be in a public subnet in the VPC. No to make the DB instance accessible only from inside the VPC. For more information, see Hiding a DB instance in a VPC from the internet (p. 2695). To connect to a DB instance from outside of its VPC, the DB instance must be publicly accessible. Also, access must be granted using the inbound rules of the DB instance's security group. In addition, other requirements must be met. For more information, see Can't connect to Amazon RDS DB instance (p. 2727). If your DB instance isn't publicly accessible, use an AWS Site-to-Site VPN connection or an AWS Direct Connect connection to access it from a private network. For more information, see Internetwork traffic privacy (p. 2605).
    CLI option: --publicly-accessible | --no-publicly-accessible
    RDS API parameter: PubliclyAccessible
    Supported DB engines: All

RDS Proxy
    Choose Create an RDS Proxy to create a proxy for your DB instance. Amazon RDS automatically creates an IAM role and a Secrets Manager secret for the proxy. For more information, see Using Amazon RDS Proxy (p. 1199).
    CLI option and RDS API parameter: Not available when creating a DB instance.
    Supported DB engines: MariaDB, MySQL, PostgreSQL

Storage autoscaling
    Enable storage autoscaling to enable Amazon RDS to automatically increase storage when needed to avoid having your DB instance run out of storage space. Use Maximum storage threshold to set the upper limit for Amazon RDS to automatically increase storage for your DB instance. The default is 1,000 GiB. For more information, see Managing capacity automatically with Amazon RDS storage autoscaling (p. 480).
    CLI option: --max-allocated-storage
    RDS API parameter: MaxAllocatedStorage
    Supported DB engines: All

Storage throughput
    The storage throughput value for the DB instance. This setting is available only if you choose General purpose SSD (gp3) for Storage type. For more information, see Amazon RDS DB instance storage (p. 101).
    CLI option: --storage-throughput
    RDS API parameter: StorageThroughput
    Supported DB engines: All

Storage type
    The storage type for your DB instance. If you choose General Purpose SSD (gp3), you can provision additional provisioned IOPS and storage throughput under Advanced settings. If you choose Provisioned IOPS SSD (io1), enter the Provisioned IOPS value. For more information, see Amazon RDS storage types (p. 101).
    CLI option: --storage-type
    RDS API parameter: StorageType
    Supported DB engines: All

Subnet group
    A DB subnet group to associate with this DB instance. For more information, see Working with DB subnet groups (p. 2689).
    CLI option: --db-subnet-group-name
    RDS API parameter: DBSubnetGroupName
    Supported DB engines: All

Time zone
    The time zone for your DB instance. If you don't choose a time zone, your DB instance uses the default time zone. You can't change the time zone after the DB instance is created. For more information, see Local time zone for Microsoft SQL Server DB instances (p. 1371).
    CLI option: --timezone
    RDS API parameter: Timezone
    Supported DB engines: SQL Server, RDS Custom for SQL Server

Virtual Private Cloud (VPC)
    A VPC based on the Amazon VPC service to associate with this DB instance. For more information, see Amazon VPC VPCs and Amazon RDS (p. 2688).
    CLI option and RDS API parameter: For the CLI and API, you specify the VPC security group IDs.
    Supported DB engines: All

VPC security group (firewall)
    The security group to associate with the DB instance. For more information, see Overview of VPC security groups (p. 2680).
    CLI option: --vpc-security-group-ids
    RDS API parameter: VpcSecurityGroupIds
    Supported DB engines: All


Creating Amazon RDS resources with AWS CloudFormation

Amazon RDS is integrated with AWS CloudFormation, a service that helps you to model and set up your
AWS resources so that you can spend less time creating and managing your resources and infrastructure.
You create a template that describes all the AWS resources that you want (such as DB instances and DB
parameter groups), and AWS CloudFormation provisions and configures those resources for you.

When you use AWS CloudFormation, you can reuse your template to set up your RDS resources
consistently and repeatedly. Describe your resources once, and then provision the same resources over
and over in multiple AWS accounts and Regions.

RDS and AWS CloudFormation templates


To provision and configure resources for RDS and related services, you must understand AWS
CloudFormation templates. Templates are formatted text files in JSON or YAML. These templates
describe the resources that you want to provision in your AWS CloudFormation stacks. If you're
unfamiliar with JSON or YAML, you can use AWS CloudFormation Designer to help you get started with
AWS CloudFormation templates. For more information, see What is AWS CloudFormation Designer? in
the AWS CloudFormation User Guide.

RDS supports creating resources in AWS CloudFormation. For more information, including examples
of JSON and YAML templates for these resources, see the RDS resource type reference in the AWS
CloudFormation User Guide.
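
The following is a minimal, illustrative YAML template sketch that describes a single MySQL DB instance. The resource name, identifier, and property values are placeholders, and a production template typically also defines networking, backup, and encryption settings:

AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example of an Amazon RDS DB instance (illustrative only)
Parameters:
  MasterPassword:
    Type: String
    NoEcho: true
Resources:
  ExampleDBInstance:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Snapshot
    Properties:
      DBInstanceIdentifier: mydbinstance
      Engine: mysql
      DBInstanceClass: db.t3.micro
      AllocatedStorage: '20'
      MasterUsername: admin
      MasterUserPassword: !Ref MasterPassword
      BackupRetentionPeriod: 3

You could then provision the stack with a command such as aws cloudformation deploy --template-file template.yaml --stack-name my-rds-stack, where the template file and stack name are also placeholders.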

Learn more about AWS CloudFormation


To learn more about AWS CloudFormation, see the following resources:

• AWS CloudFormation
• AWS CloudFormation User Guide
• AWS CloudFormation API Reference
• AWS CloudFormation Command Line Interface User Guide


Connecting to an Amazon RDS DB instance


Before you can connect to a DB instance, you must create the DB instance. For information, see Creating
an Amazon RDS DB instance (p. 300). After Amazon RDS provisions your DB instance, use any standard
client application or utility for your DB engine to connect to the DB instance. In the connection string,
specify the DNS address from the DB instance endpoint as the host parameter. Also, specify the port
number from the DB instance endpoint as the port parameter.

Topics
• Finding the connection information for an Amazon RDS DB instance (p. 325)
• Database authentication options (p. 328)
• Encrypted connections (p. 329)
• Scenarios for accessing a DB instance in a VPC (p. 329)
• Connecting to a DB instance that is running a specific DB engine (p. 329)
• Managing connections with RDS Proxy (p. 330)

Finding the connection information for an Amazon RDS DB instance

The connection information for a DB instance includes its endpoint, port, and a valid database user,
such as the master user. For example, for a MySQL DB instance, suppose that the endpoint value is
mydb.123456789012.us-east-1.rds.amazonaws.com. In this case, the port value is 3306, and the
database user is admin. Given this information, you specify the following values in a connection string:

• For host or host name or DNS name, specify mydb.123456789012.us-


east-1.rds.amazonaws.com.
• For port, specify 3306.
• For user, specify admin.

The endpoint is unique for each DB instance, and the values of the port and user can vary. The following
list shows the most common port for each DB engine:

• MariaDB – 3306
• Microsoft SQL Server – 1433
• MySQL – 3306
• Oracle – 1521
• PostgreSQL – 5432

To connect to a DB instance, use any client for a DB engine. For example, you might use the mysql utility
to connect to a MariaDB or MySQL DB instance. You might use Microsoft SQL Server Management Studio
to connect to a SQL Server DB instance. You might use Oracle SQL Developer to connect to an Oracle DB
instance. Similarly, you might use the psql command line utility to connect to a PostgreSQL DB instance.
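
For example, using the sample MySQL endpoint values shown in this section, a connection from the mysql command line client might look like the following sketch:

mysql -h mydb.123456789012.us-east-1.rds.amazonaws.com -P 3306 -u admin -p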

To find the connection information for a DB instance, use the AWS Management Console. You can
also use the AWS Command Line Interface (AWS CLI) describe-db-instances command or the RDS API
DescribeDBInstances operation.


Console

To find the connection information for a DB instance in the AWS Management Console

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases to display a list of your DB instances.
3. Choose the name of the DB instance to display its details.
4. On the Connectivity & security tab, copy the endpoint. Also, note the port number. You need both
the endpoint and the port number to connect to the DB instance.


5. If you need to find the master user name, choose the Configuration tab and view the Master
username value.

AWS CLI
To find the connection information for a DB instance by using the AWS CLI, call the describe-db-
instances command. In the call, query for the DB instance ID, endpoint, port, and master user name.


For Linux, macOS, or Unix:

aws rds describe-db-instances \
--query "*[].[DBInstanceIdentifier,Endpoint.Address,Endpoint.Port,MasterUsername]"

For Windows:

aws rds describe-db-instances ^
--query "*[].[DBInstanceIdentifier,Endpoint.Address,Endpoint.Port,MasterUsername]"

Your output should be similar to the following.

[
[
"mydb",
"mydb.123456789012.us-east-1.rds.amazonaws.com",
3306,
"admin"
],
[
"myoracledb",
"myoracledb.123456789012.us-east-1.rds.amazonaws.com",
1521,
"dbadmin"
],
[
"mypostgresqldb",
"mypostgresqldb.123456789012.us-east-1.rds.amazonaws.com",
5432,
"postgresadmin"
]
]

RDS API
To find the connection information for a DB instance by using the Amazon RDS API, call the
DescribeDBInstances operation. In the output, find the values for the endpoint address, endpoint port,
and master user name.

Database authentication options


Amazon RDS supports the following ways to authenticate database users:

• Password authentication – Your DB instance performs all administration of user accounts. You create
users and specify passwords with SQL statements. The SQL statements you can use depend on your
DB engine.
• AWS Identity and Access Management (IAM) database authentication – You don't need to use a
password when you connect to a DB instance. Instead, you use an authentication token.
• Kerberos authentication – You use external authentication of database users using Kerberos and
Microsoft Active Directory. Kerberos is a network authentication protocol that uses tickets and
symmetric-key cryptography to eliminate the need to transmit passwords over the network. Kerberos
has been built into Active Directory and is designed to authenticate users to network resources, such as
databases.

IAM database authentication and Kerberos authentication are available only for specific DB engines and
versions.


For more information, see Database authentication with Amazon RDS (p. 2566).
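
For example, with IAM database authentication you can generate a short-lived authentication token with the AWS CLI and supply it in place of a password. The following is a sketch; the host name, port, user name, and Region are placeholder values:

aws rds generate-db-auth-token \
    --hostname mydb.123456789012.us-east-1.rds.amazonaws.com \
    --port 3306 \
    --username jane_doe \
    --region us-east-1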

Encrypted connections
You can use Secure Socket Layer (SSL) or Transport Layer Security (TLS) from your application to encrypt
a connection to a DB instance. Each DB engine has its own process for implementing SSL/TLS. For more
information, see Using SSL/TLS to encrypt a connection to a DB instance (p. 2591).
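
As an example for PostgreSQL, a psql connection that verifies the server certificate might look like the following sketch. The endpoint and user name are the sample values used earlier in this section, and global-bundle.pem is assumed to be a locally downloaded copy of the RDS certificate bundle:

psql "host=mypostgresqldb.123456789012.us-east-1.rds.amazonaws.com port=5432 dbname=postgres user=postgresadmin sslmode=verify-full sslrootcert=global-bundle.pem"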

Scenarios for accessing a DB instance in a VPC


Using Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources, such as Amazon
RDS DB instances, into a virtual private cloud (VPC). When you use Amazon VPC, you have control over
your virtual networking environment. You can choose your own IP address range, create subnets, and
configure routing and access control lists.

A VPC security group controls access to DB instances inside a VPC. Each VPC security group rule enables
a specific source to access a DB instance in a VPC that is associated with that VPC security group. The
source can be a range of addresses (for example, 203.0.113.0/24), or another VPC security group. By
specifying a VPC security group as the source, you allow incoming traffic from all instances (typically
application servers) that use the source VPC security group.

Before attempting to connect to your DB instance, configure your VPC for your use case. The following
are common scenarios for accessing a DB instance in a VPC:

• A DB instance in a VPC accessed by an Amazon EC2 instance in the same VPC – A common use of a
DB instance in a VPC is to share data with an application server that is running in an EC2 instance in
the same VPC. The EC2 instance might run a web server with an application that interacts with the DB
instance.
• A DB instance in a VPC accessed by an EC2 instance in a different VPC – In some cases, your DB
instance is in a different VPC from the EC2 instance that you're using to access it. If so, you can use VPC
peering to access the DB instance.
• A DB instance in a VPC accessed by a client application through the internet – To access a DB
instance in a VPC from a client application through the internet, you configure a VPC with a single
public subnet. You also configure an internet gateway to enable communication over the internet.

To connect to a DB instance from outside of its VPC, the DB instance must be publicly accessible.
Also, access must be granted using the inbound rules of the DB instance's security group, and
other requirements must be met. For more information, see Can't connect to Amazon RDS DB
instance (p. 2727).
• A DB instance in a VPC accessed by a private network – If your DB instance isn't publicly accessible,
you can use one of the following options to access it from a private network:
• An AWS Site-to-Site VPN connection
• An AWS Direct Connect connection
• An AWS Client VPN connection

For more information, see Scenarios for accessing a DB instance in a VPC (p. 2701).

Connecting to a DB instance that is running a specific DB engine

For information about connecting to a DB instance that is running a specific DB engine, follow the
instructions for your DB engine:

• Connecting to a DB instance running the MariaDB database engine (p. 1269)


• Connecting to a DB instance running the Microsoft SQL Server database engine (p. 1380)
• Connecting to a DB instance running the MySQL database engine (p. 1630)
• Connecting to your RDS for Oracle DB instance (p. 1806)
• Connecting to a DB instance running the PostgreSQL database engine (p. 2167)

Managing connections with RDS Proxy


You can also use Amazon RDS Proxy to manage connections to RDS for MariaDB, RDS for Microsoft SQL
Server, RDS for MySQL, and RDS for PostgreSQL DB instances. RDS Proxy allows applications to pool
and share database connections to improve scalability. For more information, see Using Amazon RDS
Proxy (p. 1199).


Working with option groups


Some DB engines offer additional features that make it easier to manage data and databases, and to
provide additional security for your database. Amazon RDS uses option groups to enable and configure
these features. An option group can specify features, called options, that are available for a particular
Amazon RDS DB instance. Options can have settings that specify how the option works. When you
associate a DB instance with an option group, the specified options and option settings are enabled for
that DB instance.

Amazon RDS supports options for the following database engines:

Database engine Relevant documentation

MariaDB Options for MariaDB database engine (p. 1334)

Microsoft SQL Server Options for the Microsoft SQL Server database engine (p. 1514)

MySQL Options for MySQL DB instances (p. 1732)

Oracle Adding options to Oracle DB instances (p. 1990)

PostgreSQL PostgreSQL does not use options and option groups. PostgreSQL
uses extensions and modules to provide additional features.
For more information, see Supported PostgreSQL extension
versions (p. 2156).

Option groups overview


Amazon RDS provides an empty default option group for each new DB instance. You can't modify or
delete this default option group, but any new option group that you create derives its settings from the
default option group. To apply an option to a DB instance, you must do the following:

1. Create a new option group, or copy or modify an existing option group.


2. Add one or more options to the option group.
3. Associate the option group with the DB instance.

To associate an option group with a DB instance, modify the DB instance. For more information, see
Modifying an Amazon RDS DB instance (p. 401).
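
For example, the following AWS CLI sketch associates an existing option group with a DB instance and applies the change immediately; the DB instance identifier and option group name are placeholders:

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --option-group-name testoptiongroup \
    --apply-immediately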

Both DB instances and DB snapshots can be associated with an option group. In some cases, you might
restore from a DB snapshot or perform a point-in-time restore for a DB instance. In these cases, the
option group associated with the DB snapshot or DB instance is, by default, associated with the restored
DB instance. You can associate a different option group with a restored DB instance. However, the new
option group must contain any persistent or permanent options that were included in the original option
group. Persistent and permanent options are described following.

Options require additional memory to run on a DB instance. Thus, you might need to launch a larger
instance to use them, depending on your current use of your DB instance. For example, Oracle Enterprise
Manager Database Control uses about 300 MB of RAM. If you enable this option for a small DB instance,
you might encounter performance problems or out-of-memory errors.

Persistent and permanent options


Two types of options, persistent and permanent, require special consideration when you add them to an
option group.


Persistent options can't be removed from an option group while DB instances are associated with the
option group. An example of a persistent option is the TDE option for Microsoft SQL Server transparent
data encryption (TDE). You must disassociate all DB instances from the option group before a persistent
option can be removed from the option group. In some cases, you might restore or perform a point-in-
time restore from a DB snapshot. In these cases, if the option group associated with that DB snapshot
contains a persistent option, you can only associate the restored DB instance with that option group.

Permanent options, such as the TDE option for Oracle Advanced Security TDE, can never be removed
from an option group. You can change the option group of a DB instance that is using the permanent
option. However, the option group associated with the DB instance must include the same permanent
option. In some cases, you might restore or perform a point-in-time restore from a DB snapshot. In these
cases, if the option group associated with that DB snapshot contains a permanent option, you can only
associate the restored DB instance with an option group with that permanent option.

For Oracle DB instances, you can copy shared DB snapshots that have the options Timezone or OLS
(or both). To do so, specify a target option group that includes these options when you copy the DB
snapshot. The OLS option is permanent and persistent only for Oracle DB instances running Oracle
version 12.2 or higher. For more information about these options, see Oracle time zone (p. 2087) and
Oracle Label Security (p. 2049).

VPC considerations
The option group associated with the DB instance is linked to the DB instance's VPC. This means that you
can't use the option group assigned to a DB instance if you try to restore the instance to a different VPC.
If you restore a DB instance to a different VPC, you can do one of the following:

• Assign the default option group to the DB instance.


• Assign an option group that is linked to that VPC.
• Create a new option group and assign it to the DB instance.

With persistent or permanent options, such as Oracle TDE, you must create a new option group. This
option group must include the persistent or permanent option when restoring a DB instance into a
different VPC.

Option settings control the behavior of an option. For example, the Oracle Advanced Security option
NATIVE_NETWORK_ENCRYPTION has a setting that you can use to specify the encryption algorithm for
network traffic to and from the DB instance. Some option settings are optimized for use with Amazon
RDS and cannot be changed.

Mutually exclusive options


Some options are mutually exclusive. You can use one or the other, but not both at the same time. The
following options are mutually exclusive:

• Oracle Enterprise Manager Database Express (p. 2035) and Oracle Management Agent for Enterprise
Manager Cloud Control (p. 2039).
• Oracle native network encryption (p. 2057) and Oracle Secure Sockets Layer (p. 2068).

Creating an option group


You can create a new option group that derives its settings from the default option group. You then add
one or more options to the new option group. Or, if you already have an existing option group, you can
copy that option group with all of its options to a new option group. For more information, see Copying
an option group (p. 334).


After you create a new option group, it has no options. To learn how to add options to the option group,
see Adding an option to an option group (p. 335). After you have added the options you want, you can
then associate the option group with a DB instance. This way, the options become available on the DB
instance. For information about associating an option group with a DB instance, see the documentation
for your engine in Working with option groups (p. 331).

Console
One way of creating an option group is by using the AWS Management Console.

To create a new option group by using the console

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose Create group.
4. In the Create option group window, do the following:

a. For Name, type a name for the option group that is unique within your AWS account. The name
can contain only letters, digits, and hyphens.
b. For Description, type a brief description of the option group. The description is used for display
purposes.
c. For Engine, choose the DB engine that you want.
d. For Major engine version, choose the major version of the DB engine that you want.
5. To continue, choose Create. To cancel the operation instead, choose Cancel.

AWS CLI
To create an option group, use the AWS CLI create-option-group command with the following
required parameters.

• --option-group-name
• --engine-name
• --major-engine-version
• --option-group-description

Example

The following example creates an option group named testoptiongroup, which is associated with the
Oracle Enterprise Edition DB engine. The description is enclosed in quotation marks.

For Linux, macOS, or Unix:

aws rds create-option-group \
--option-group-name testoptiongroup \
--engine-name oracle-ee \
--major-engine-version 12.1 \
--option-group-description "Test option group"

For Windows:


aws rds create-option-group ^
--option-group-name testoptiongroup ^
--engine-name oracle-ee ^
--major-engine-version 12.1 ^
--option-group-description "Test option group"

RDS API
To create an option group, call the Amazon RDS API CreateOptionGroup operation. Include the
following parameters:

• OptionGroupName
• EngineName
• MajorEngineVersion
• OptionGroupDescription

Copying an option group


You can use the AWS CLI or the Amazon RDS API to copy an option group. Copying an option group can
be convenient. An example is when you have an existing option group and want to include most of its
custom parameters and values in a new option group. You can also make a copy of an option group that
you use in production and then modify the copy to test other option settings.
Note
Currently, you can't copy an option group to a different AWS Region.

AWS CLI
To copy an option group, use the AWS CLI copy-option-group command. Include the following required
options:

• --source-option-group-identifier
• --target-option-group-identifier
• --target-option-group-description

Example

The following example creates an option group named new-option-group, which is a local copy of the
option group my-option-group.

For Linux, macOS, or Unix:

aws rds copy-option-group \
--source-option-group-identifier my-option-group \
--target-option-group-identifier new-option-group \
--target-option-group-description "My new option group"

For Windows:

aws rds copy-option-group ^
--source-option-group-identifier my-option-group ^
--target-option-group-identifier new-option-group ^
--target-option-group-description "My new option group"


RDS API
To copy an option group, call the Amazon RDS API CopyOptionGroup operation. Include the following
required parameters.

• SourceOptionGroupIdentifier
• TargetOptionGroupIdentifier
• TargetOptionGroupDescription

Adding an option to an option group


You can add an option to an existing option group. After you have added the options you want, you
can then associate the option group with a DB instance so that the options become available on the DB
instance. For information about associating an option group with a DB instance, see the documentation
for your specific DB engine listed at Working with option groups (p. 331).

Option group changes must be applied immediately in two cases:

• When you add an option that adds or updates a port value, such as the OEM option.
• When you add or remove an option group with an option that includes a port value.

In these cases, choose the Apply Immediately option in the console. Or you can include the --apply-
immediately option when using the AWS CLI or set the ApplyImmediately parameter to true when
using the Amazon RDS API. Options that don't include port values can be applied immediately, or can be
applied during the next maintenance window for the DB instance.
Note
If you specify a security group as a value for an option in an option group, manage the security
group by modifying the option group. You can't change or remove this security group by
modifying a DB instance. Also, the security group doesn't appear in the DB instance details in
the AWS Management Console or in the output for the AWS CLI command describe-db-
instances.

Console
You can use the AWS Management Console to add an option to an option group.

To add an option to an option group by using the console

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group that you want to modify, and then choose Add option.


4. In the Add option window, do the following:

a. Choose the option that you want to add. You might need to provide additional values,
depending on the option that you select. For example, when you choose the OEM option, you
must also type a port value and specify a security group.
b. To enable the option on all associated DB instances as soon as you add it, for Apply
Immediately, choose Yes. If you choose No (the default), the option is enabled for each
associated DB instance during its next maintenance window.


5. When the settings are as you want them, choose Add option.

AWS CLI
To add an option to an option group, run the AWS CLI add-option-to-option-group command with the
option that you want to add. To enable the new option immediately on all associated DB instances,
include the --apply-immediately parameter. By default, the option is enabled for each associated DB
instance during its next maintenance window. Include the following required parameter:

• --option-group-name

Example

The following example adds the Oracle Enterprise Manager Database Control (OEM) option to an option
group named testoptiongroup and immediately enables it. Even if you use the default security group,
you must specify that security group.

For Linux, macOS, or Unix:

aws rds add-option-to-option-group \
--option-group-name testoptiongroup \
--options OptionName=OEM,Port=5500,DBSecurityGroupMemberships=default \
--apply-immediately

For Windows:

aws rds add-option-to-option-group ^
--option-group-name testoptiongroup ^
--options OptionName=OEM,Port=5500,DBSecurityGroupMemberships=default ^
--apply-immediately

Command output is similar to the following:

OPTIONGROUP False oracle-ee 12.1 arn:aws:rds:us-east-1:1234567890:og:testoptiongroup Test Option Group testoptiongroup default
OPTIONS Oracle 12c EM Express OEM False False 5500
DBSECURITYGROUPMEMBERSHIPS default authorized

Example
The following example adds the Oracle OEM option to an option group. It also specifies a custom port
and a pair of Amazon EC2 VPC security groups to use for that port.

For Linux, macOS, or Unix:

aws rds add-option-to-option-group \
--option-group-name testoptiongroup \
--options OptionName=OEM,Port=5500,VpcSecurityGroupMemberships="sg-test1,sg-test2" \
--apply-immediately

For Windows:

aws rds add-option-to-option-group ^
--option-group-name testoptiongroup ^
--options OptionName=OEM,Port=5500,VpcSecurityGroupMemberships="sg-test1,sg-test2" ^
--apply-immediately

Command output is similar to the following:

OPTIONGROUP False oracle-ee 12.1 arn:aws:rds:us-east-1:1234567890:og:testoptiongroup Test Option Group testoptiongroup vpc-test
OPTIONS Oracle 12c EM Express OEM False False 5500
VPCSECURITYGROUPMEMBERSHIPS active sg-test1
VPCSECURITYGROUPMEMBERSHIPS active sg-test2

Example
The following example adds the Oracle option NATIVE_NETWORK_ENCRYPTION to an option group and
specifies the option settings. If no option settings are specified, default values are used.


For Linux, macOS, or Unix:

aws rds add-option-to-option-group \
--option-group-name testoptiongroup \
--options '[{"OptionSettings":[{"Name":"SQLNET.ENCRYPTION_SERVER","Value":"REQUIRED"},
{"Name":"SQLNET.ENCRYPTION_TYPES_SERVER","Value":"AES256,AES192,DES"}],"OptionName":"NATIVE_NETWORK_ENC
\
--apply-immediately

For Windows:

aws rds add-option-to-option-group ^


--option-group-name testoptiongroup ^
--options "OptionSettings"=[{"Name"="SQLNET.ENCRYPTION_SERVER","Value"="REQUIRED"},
{"Name"="SQLNET.ENCRYPTION_TYPES_SERVER","Value"="AES256\,AES192\,DES"}],"OptionName"="NATIVE_NETWORK_ENCRYPTION" ^
--apply-immediately

Command output is similar to the following:

OPTIONGROUP False oracle-ee 12.1 arn:aws:rds:us-east-1:1234567890:og:testoptiongroup


Test Option Group testoptiongroup
OPTIONS Oracle Advanced Security - Native Network Encryption NATIVE_NETWORK_ENCRYPTION
False False
OPTIONSETTINGS
RC4_256,AES256,AES192,3DES168,RC4_128,AES128,3DES112,RC4_56,DES,RC4_40,DES40
STATIC STRING
RC4_256,AES256,AES192,3DES168,RC4_128,AES128,3DES112,RC4_56,DES,RC4_40,DES40 Specifies
list of encryption algorithms in order of intended use
True True SQLNET.ENCRYPTION_TYPES_SERVER AES256,AES192,DES
OPTIONSETTINGS ACCEPTED,REJECTED,REQUESTED,REQUIRED STATIC STRING REQUESTED
Specifies the desired encryption behavior False True SQLNET.ENCRYPTION_SERVER
REQUIRED
OPTIONSETTINGS SHA1,MD5 STATIC STRING SHA1,MD5 Specifies list of checksumming
algorithms in order of intended use True True SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER
SHA1,MD5

RDS API
To add an option to an option group using the Amazon RDS API, call the ModifyOptionGroup operation
with the option that you want to add. To enable the new option immediately on all associated DB
instances, include the ApplyImmediately parameter and set it to true. By default, the option is
enabled for each associated DB instance during its next maintenance window. Include the following
required parameter:

• OptionGroupName
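
If you call the RDS API through an SDK, a minimal Python (boto3) sketch of this call might look like
the following. The option group name, port, and security group reuse the example values from the CLI
section above; adjust them for your environment.

import boto3

# A minimal sketch of ModifyOptionGroup: add the OEM option and apply it immediately.
# The option group name, port, and security group are example values.
rds = boto3.client("rds", region_name="us-east-1")

response = rds.modify_option_group(
    OptionGroupName="testoptiongroup",
    OptionsToInclude=[
        {
            "OptionName": "OEM",
            "Port": 5500,
            "DBSecurityGroupMemberships": ["default"],
        }
    ],
    ApplyImmediately=True,  # omit or set to False to wait for the maintenance window
)
print(response["OptionGroup"]["OptionGroupName"])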

Listing the options and option settings for an option group

You can list all the options and option settings for an option group.


Console
You can use the AWS Management Console to list all of the options and option settings for an option
group.

To list the options and option settings for an option group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the name of the option group to display its details. The options and option settings in the
option group are listed.

AWS CLI
To list the options and option settings for an option group, use the AWS CLI describe-option-
groups command. Specify the name of the option group whose options and settings you want to view.
If you don't specify an option group name, all option groups are described.

Example

The following example lists the options and option settings for all option groups.

aws rds describe-option-groups

Example

The following example lists the options and option settings for an option group named
testoptiongroup.

aws rds describe-option-groups --option-group-name testoptiongroup

RDS API
To list the options and option settings for an option group, use the Amazon RDS API
DescribeOptionGroups operation. Specify the name of the option group whose options and settings
you want to view. If you don't specify an option group name, all option groups are described.
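
When you call DescribeOptionGroups programmatically, each option and its settings are nested inside
the option group in the response. The following Python (boto3) sketch, which assumes the
testoptiongroup example from the previous sections, shows one way to walk that structure.

import boto3

# A minimal sketch: list the options and option settings in one option group.
rds = boto3.client("rds", region_name="us-east-1")

groups = rds.describe_option_groups(OptionGroupName="testoptiongroup")
for group in groups["OptionGroupsList"]:
    print(group["OptionGroupName"], group["EngineName"], group["MajorEngineVersion"])
    for option in group["Options"]:
        print("  option:", option["OptionName"])
        for setting in option.get("OptionSettings", []):
            print("    setting:", setting["Name"], "=", setting.get("Value"))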

Modifying an option setting


After you have added an option that has modifiable option settings, you can modify the settings at any
time. If you change options or option settings in an option group, those changes are applied to all DB
instances that are associated with that option group. For more information on what settings are available
for the various options, see the documentation for your engine in Working with option groups (p. 331).

Option group changes must be applied immediately in two cases:

• When you add an option that adds or updates a port value, such as the OEM option.
• When you add or remove an option group with an option that includes a port value.

In these cases, choose the Apply Immediately option in the console. Or you can include the --apply-
immediately option when using the AWS CLI or set the ApplyImmediately parameter to true when
using the RDS API. Options that don't include port values can be applied immediately, or can be applied
during the next maintenance window for the DB instance.
Note
If you specify a security group as a value for an option in an option group, you manage the
security group by modifying the option group. You can't change or remove this security group
by modifying a DB instance. Also, the security group doesn't appear in the DB instance details
in the AWS Management Console or in the output for the AWS CLI command describe-db-
instances.

Console
You can use the AWS Management Console to modify an option setting.

To modify an option setting by using the console

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Select the option group whose option that you want to modify, and then choose Modify option.
4. In the Modify option window, from Installed Options, choose the option whose setting you want to
modify. Make the changes that you want.
5. To enable the option as soon as you add it, for Apply Immediately, choose Yes. If you choose No
(the default), the option is enabled for each associated DB instance during its next maintenance
window.
6. When the settings are as you want them, choose Modify Option.

AWS CLI
To modify an option setting, use the AWS CLI add-option-to-option-group command with the
option group and option that you want to modify. By default, the option is enabled for each associated
DB instance during its next maintenance window. To apply the change immediately to all associated
DB instances, include the --apply-immediately parameter. To modify an option setting, use the --
settings argument.

Example
The following example modifies the port that the Oracle Enterprise Manager Database Control (OEM)
uses in an option group named testoptiongroup and immediately applies the change.

For Linux, macOS, or Unix:

aws rds add-option-to-option-group \


--option-group-name testoptiongroup \
--options OptionName=OEM,Port=5432,DBSecurityGroupMemberships=default \
--apply-immediately

For Windows:

aws rds add-option-to-option-group ^


--option-group-name testoptiongroup ^
--options OptionName=OEM,Port=5432,DBSecurityGroupMemberships=default ^
--apply-immediately


Command output is similar to the following:

OPTIONGROUP False oracle-ee 12.1 arn:aws:rds:us-east-1:1234567890:og:testoptiongroup


Test Option Group testoptiongroup
OPTIONS Oracle 12c EM Express OEM False False 5432
DBSECURITYGROUPMEMBERSHIPS default authorized

Example

The following example modifies the Oracle option NATIVE_NETWORK_ENCRYPTION and changes the
option settings.

For Linux, macOS, or Unix:

aws rds add-option-to-option-group \


--option-group-name testoptiongroup \
--options '[{"OptionSettings":[{"Name":"SQLNET.ENCRYPTION_SERVER","Value":"REQUIRED"},
{"Name":"SQLNET.ENCRYPTION_TYPES_SERVER","Value":"AES256,AES192,DES,RC4_256"}],"OptionName":"NATIVE_NETWORK_ENCRYPTION"}]' \
--apply-immediately

For Windows:

aws rds add-option-to-option-group ^


--option-group-name testoptiongroup ^
--options "OptionSettings"=[{"Name"="SQLNET.ENCRYPTION_SERVER","Value"="REQUIRED"},
{"Name"="SQLNET.ENCRYPTION_TYPES_SERVER","Value"="AES256\,AES192\,DES
\,RC4_256"}],"OptionName"="NATIVE_NETWORK_ENCRYPTION" ^
--apply-immediately

Command output is similar to the following:

OPTIONGROUP False oracle-ee 12.1 arn:aws:rds:us-east-1:1234567890:og:testoptiongroup


Test Option Group testoptiongroup
OPTIONS Oracle Advanced Security - Native Network Encryption NATIVE_NETWORK_ENCRYPTION
False False
OPTIONSETTINGS
RC4_256,AES256,AES192,3DES168,RC4_128,AES128,3DES112,RC4_56,DES,RC4_40,DES40 STATIC
STRING
RC4_256,AES256,AES192,3DES168,RC4_128,AES128,3DES112,RC4_56,DES,RC4_40,DES40
Specifies list of encryption algorithms in order of intended use
True True SQLNET.ENCRYPTION_TYPES_SERVER AES256,AES192,DES,RC4_256
OPTIONSETTINGS ACCEPTED,REJECTED,REQUESTED,REQUIRED STATIC STRING REQUESTED
Specifies the desired encryption behavior False True SQLNET.ENCRYPTION_SERVER
REQUIRED
OPTIONSETTINGS SHA1,MD5 STATIC STRING SHA1,MD5 Specifies list of
checksumming algorithms in order of intended use True True
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER SHA1,MD5
OPTIONSETTINGS ACCEPTED,REJECTED,REQUESTED,REQUIRED STATIC STRING
REQUESTED Specifies the desired data integrity behavior False True
SQLNET.CRYPTO_CHECKSUM_SERVER REQUESTED


RDS API
To modify an option setting, use the Amazon RDS API ModifyOptionGroup command with the option
group and option that you want to modify. By default, the option is enabled for each associated DB
instance during its next maintenance window. To apply the change immediately to all associated DB
instances, include the ApplyImmediately parameter and set it to true.

Removing an option from an option group


Some options can be removed from an option group, and some cannot. A persistent option cannot be
removed from an option group until all DB instances associated with that option group are disassociated.
A permanent option can never be removed from an option group. For more information about what
options are removable, see the documentation for your specific engine listed at Working with option
groups (p. 331).

If you remove all options from an option group, Amazon RDS doesn't delete the option group. DB
instances that are associated with the empty option group continue to be associated with it; they just
won't have any active options. Alternatively, to remove all options from a DB instance, you can associate
the DB instance with the default (empty) option group.

Console
You can use the AWS Management Console to remove an option from an option group.

To remove an option from an option group by using the console

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Select the option group whose option you want to remove, and then choose Delete option.
4. In the Delete option window, do the following:

• Select the check box for the option that you want to delete.
• For the deletion to take effect as soon as you make it, for Apply immediately, choose Yes. If you
choose No (the default), the option is deleted for each associated DB instance during its next
maintenance window.


5. When the settings are as you want them, choose Yes, Delete.

AWS CLI
To remove an option from an option group, use the AWS CLI remove-option-from-option-group
command with the option that you want to delete. By default, the option is removed from each
associated DB instance during its next maintenance window. To apply the change immediately, include
the --apply-immediately parameter.

Example

The following example removes the Oracle Enterprise Manager Database Control (OEM) option from an
option group named testoptiongroup and immediately applies the change.

For Linux, macOS, or Unix:

aws rds remove-option-from-option-group \


--option-group-name testoptiongroup \
--options OEM \
--apply-immediately

For Windows:

aws rds remove-option-from-option-group ^


--option-group-name testoptiongroup ^
--options OEM ^
--apply-immediately

Command output is similar to the following:

OPTIONGROUP testoptiongroup oracle-ee 12.1 Test option group

RDS API
To remove an option from an option group, use the Amazon RDS API ModifyOptionGroup action. By
default, the option is removed from each associated DB instance during its next maintenance window. To
apply the change immediately, include the ApplyImmediately parameter and set it to true.

Include the following parameters:

• OptionGroupName
• OptionsToRemove.OptionName
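
As an illustration, a minimal Python (boto3) sketch of the same call, reusing the OEM and
testoptiongroup example values from the CLI section above:

import boto3

# A minimal sketch: remove an option through ModifyOptionGroup.
rds = boto3.client("rds", region_name="us-east-1")

rds.modify_option_group(
    OptionGroupName="testoptiongroup",
    OptionsToRemove=["OEM"],   # names of the options to remove
    ApplyImmediately=True,     # apply now instead of the next maintenance window
)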

Deleting an option group


You can delete an option group that is not associated with any Amazon RDS resource. An option group
can be associated with a DB instance, a manual DB snapshot, or an automated DB snapshot.

You can't delete a default option group. If you try to delete an option group that is associated with an
RDS resource, an error like the following is returned.


An error occurred (InvalidOptionGroupStateFault) when calling the DeleteOptionGroup
operation: The option group 'optionGroupName' cannot be deleted because it is in use.

To find the Amazon RDS resources associated with an option group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the name of the option group to show its details.
4. Check the Associated Instances and Snapshots section for the associated Amazon RDS resources.

If a DB instance is associated with the option group, modify the DB instance to use a different option
group. For more information, see Modifying an Amazon RDS DB instance (p. 401).

If a manual DB snapshot is associated with the option group, modify the DB snapshot to use a different
option group. You can do so using the AWS CLI modify-db-snapshot command.
Note
You can't modify the option group of an automated DB snapshot.
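
If you prefer to script this cleanup, the following Python (boto3) sketch shows one way to point a DB
instance and a manual DB snapshot at a different option group and then delete the original group. All
identifiers, including the default option group name, are placeholder values.

import boto3

# A minimal sketch: move RDS resources off an option group so it can be deleted.
rds = boto3.client("rds", region_name="us-east-1")

# Re-associate a DB instance with another option group.
rds.modify_db_instance(
    DBInstanceIdentifier="mydbinstance",
    OptionGroupName="default:oracle-ee-12-1",
    ApplyImmediately=True,
)

# Re-associate a manual DB snapshot with another option group.
rds.modify_db_snapshot(
    DBSnapshotIdentifier="mydbsnapshot",
    OptionGroupName="default:oracle-ee-12-1",
)

# After the modifications finish and nothing references the option group, delete it.
rds.delete_option_group(OptionGroupName="testoptiongroup")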

Console
One way of deleting an option group is by using the AWS Management Console.

To delete an option group by using the console

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group.
4. Choose Delete group.
5. On the confirmation page, choose Delete to finish deleting the option group, or choose Cancel to
cancel the deletion.

AWS CLI
To delete an option group, use the AWS CLI delete-option-group command with the following
required parameter.

• --option-group-name

Example

The following example deletes an option group named testoptiongroup.

For Linux, macOS, or Unix:

aws rds delete-option-group \


--option-group-name testoptiongroup


For Windows:

aws rds delete-option-group ^


--option-group-name testoptiongroup

RDS API
To delete an option group, call the Amazon RDS API DeleteOptionGroup operation. Include the
following parameter:

• OptionGroupName


Working with parameter groups


Database parameters specify how the database is configured. For example, database parameters can
specify the amount of resources, such as memory, to allocate to a database.

You manage your database configuration by associating your DB instances and Multi-AZ DB clusters with
parameter groups. Amazon RDS defines parameter groups with default settings. You can also define your
own parameter groups with customized settings.
Note
Some DB engines offer additional features that you can add to your database as options in an
option group. For information about option groups, see Working with option groups (p. 331).

Topics
• Overview of parameter groups (p. 347)
• Working with DB parameter groups (p. 349)
• Working with DB cluster parameter groups for Multi-AZ DB clusters (p. 360)
• Comparing parameter groups (p. 368)
• Specifying DB parameters (p. 369)

Overview of parameter groups


A DB parameter group acts as a container for engine configuration values that are applied to one or more
DB instances. DB cluster parameter groups apply to Multi-AZ DB clusters only. In a Multi-AZ DB cluster,
the settings in the DB cluster parameter group apply to all of the DB instances in the cluster. The default
DB parameter group for the DB engine and DB engine version is used for each DB instance in the DB
cluster.

Topics
• Default and custom parameter groups (p. 347)
• Static and dynamic DB instance parameters (p. 348)
• Static and dynamic DB cluster parameters (p. 348)
• Character set parameters (p. 349)
• Supported parameters and parameter values (p. 349)

Default and custom parameter groups


If you create a DB instance without specifying a DB parameter group, the DB instance uses a default DB
parameter group. Likewise, if you create a Multi-AZ DB cluster without specifying a DB cluster parameter
group, the DB cluster uses a default DB cluster parameter group. Each default parameter group contains
database engine defaults and Amazon RDS system defaults based on the engine, compute class, and
allocated storage of the instance.

You can't modify the parameter settings of a default parameter group. Instead, you can do the following:

1. Create a new parameter group.


2. Change the settings of your desired parameters. Not all DB engine parameters in a parameter group
are eligible to be modified.
3. Modify your DB instance or DB cluster to use the custom parameter group. For information about
modifying a DB instance, see Modifying an Amazon RDS DB instance (p. 401). For information about
modifying a Multi-AZ DB cluster, see Modifying a Multi-AZ DB cluster (p. 539).


Note
If you have modified your DB instance to use a custom parameter group, and you start the DB
instance, RDS automatically reboots the DB instance as part of the startup process.

If you update parameters within a DB parameter group, the changes apply to all DB instances that are
associated with that parameter group. Likewise, if you update parameters within a Multi-AZ DB cluster
parameter group, the changes apply to all Multi-AZ DB clusters that are associated with that DB cluster
parameter group.

If you don't want to create a parameter group from scratch, you can copy an existing parameter group
with the AWS CLI copy-db-parameter-group command or copy-db-cluster-parameter-group command.
You might find that copying a parameter group is useful in some cases. For example, you might want to
include most of an existing DB parameter group's custom parameters and values in a new DB parameter
group.

Static and dynamic DB instance parameters


DB instance parameters are either static or dynamic. They differ as follows:

• When you change a static parameter and save the DB parameter group, the parameter change takes
effect after you manually reboot the associated DB instances. For static parameters, the console always
uses pending-reboot for the ApplyMethod.
• When you change a dynamic parameter, by default the parameter change takes effect immediately,
without requiring a reboot. When you use the AWS Management Console to change DB instance
parameter values, it always uses immediate for the ApplyMethod for dynamic parameters. To defer
the parameter change until after you reboot an associated DB instance, use the AWS CLI or RDS API.
Set the ApplyMethod to pending-reboot for the parameter change.
Note
Using pending-reboot with dynamic parameters in the AWS CLI or RDS API on RDS for SQL
Server DB instances generates an error. Use the immediate apply method for dynamic
parameters on RDS for SQL Server.

For more information about using the AWS CLI to change a parameter value, see modify-db-
parameter-group. For more information about using the RDS API to change a parameter value, see
ModifyDBParameterGroup.

When you associate a new DB parameter group with a DB instance, RDS applies the modified static
and dynamic parameters only after the DB instance is rebooted. However, if you modify dynamic
parameters in the DB parameter group after you associate it with the DB instance, these changes are
applied immediately without a reboot. For more information about changing the DB parameter group,
see Modifying an Amazon RDS DB instance (p. 401).

If a DB instance isn't using the latest changes to its associated DB parameter group, the console shows
a status of pending-reboot for the DB parameter group. This status doesn't result in an automatic
reboot during the next maintenance window. To apply the latest parameter changes to that DB instance,
manually reboot the DB instance.
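
To make the ApplyMethod choices concrete, the following Python (boto3) sketch changes one dynamic
parameter immediately and defers a second change until the next reboot. The parameter group and
parameter names are illustrative.

import boto3

# A minimal sketch of per-parameter ApplyMethod values.
rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_parameter_group(
    DBParameterGroupName="mydbparametergroup",
    Parameters=[
        # Dynamic parameter, applied immediately to associated DB instances.
        {"ParameterName": "max_connections", "ParameterValue": "250", "ApplyMethod": "immediate"},
        # This change is deferred until you manually reboot the DB instance.
        {"ParameterName": "slow_query_log", "ParameterValue": "1", "ApplyMethod": "pending-reboot"},
    ],
)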

Static and dynamic DB cluster parameters


DB cluster parameters are either static or dynamic. They differ as follows:

• When you change a static parameter and save the DB cluster parameter group, the parameter change
takes effect after you manually reboot the associated DB clusters. For static parameters, the console
always uses pending-reboot for the ApplyMethod.
• When you change a dynamic parameter, by default the parameter change takes effect immediately,
without requiring a reboot. When you use the AWS Management Console to change DB cluster
parameter values, it always uses immediate for the ApplyMethod for dynamic parameters. To defer
the parameter change until after an associated DB cluster is rebooted, use the AWS CLI or RDS API. Set
the ApplyMethod to pending-reboot for the parameter change.

For more information about using the AWS CLI to change a parameter value, see modify-db-cluster-
parameter-group. For more information about using the RDS API to change a parameter value, see
ModifyDBClusterParameterGroup.

Character set parameters


Before you create a DB instance or Multi-AZ DB cluster, set any parameters that relate to the character
set or collation of your database in your parameter group. Also do so before you create a database in it.
In this way, you ensure that the default database and new databases use the character set and collation
values that you specify. If you change character set or collation parameters, the parameter changes
aren't applied to existing databases.

For some DB engines, you can change character set or collation values for an existing database using the
ALTER DATABASE command, for example:

ALTER DATABASE database_name CHARACTER SET character_set_name COLLATE collation;

For more information about changing the character set or collation values for a database, check the
documentation for your DB engine.

Supported parameters and parameter values


To determine the supported parameters for your DB engine, view the parameters in the DB parameter
group and DB cluster parameter group used by the DB instance or DB cluster. For more information, see
Viewing parameter values for a DB parameter group (p. 359) and Viewing parameter values for a DB
cluster parameter group (p. 367).

In many cases, you can specify integer and Boolean parameter values using expressions, formulas, and
functions. Functions can include a mathematical log expression. However, not all parameters support
expressions, formulas, and functions for parameter values. For more information, see Specifying DB
parameters (p. 369).
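
For example, some MySQL memory parameters accept a formula based on the DB instance class memory
instead of a literal number. The following Python (boto3) sketch is illustrative only; confirm the
exact formula syntax in Specifying DB parameters (p. 369).

import boto3

# A minimal sketch: set a parameter to a formula instead of a literal value.
# The formula below is an assumption for illustration; verify it for your engine.
rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_parameter_group(
    DBParameterGroupName="mydbparametergroup",
    Parameters=[
        {
            "ParameterName": "innodb_buffer_pool_size",
            "ParameterValue": "{DBInstanceClassMemory*3/4}",
            "ApplyMethod": "pending-reboot",  # defer until the next reboot
        }
    ],
)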

Improperly setting parameters in a parameter group can have unintended adverse effects, including
degraded performance and system instability. Always be cautious when modifying database parameters,
and back up your data before modifying a parameter group. Try parameter group setting changes on
a test DB instance or DB cluster before applying those parameter group changes to a production DB
instance or DB cluster.

Working with DB parameter groups


DB instances use DB parameter groups. The following sections describe configuring and managing DB
instance parameter groups.

Topics
• Creating a DB parameter group (p. 350)
• Associating a DB parameter group with a DB instance (p. 351)
• Modifying parameters in a DB parameter group (p. 352)
• Resetting parameters in a DB parameter group to their default values (p. 354)
• Copying a DB parameter group (p. 356)
• Listing DB parameter groups (p. 358)


• Viewing parameter values for a DB parameter group (p. 359)

Creating a DB parameter group


You can create a new DB parameter group using the AWS Management Console, the AWS CLI, or the RDS
API.

The following limitations apply to the DB parameter group name:

• The name must be 1 to 255 letters, numbers, or hyphens.

Default parameter group names can include a period, such as default.mysql8.0. However, custom
parameter group names can't include a period.
• The first character must be a letter.
• The name can't end with a hyphen or contain two consecutive hyphens.

Console

To create a DB parameter group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. Choose Create parameter group.

The Create parameter group window appears.


4. In the Parameter group family list, select a DB parameter group family.
5. In the Type list, select DB Parameter Group.
6. In the Group name box, enter the name of the new DB parameter group.
7. In the Description box, enter a description for the new DB parameter group.
8. Choose Create.

AWS CLI

To create a DB parameter group, use the AWS CLI create-db-parameter-group command. The
following example creates a DB parameter group named mydbparametergroup for MySQL version 8.0
with a description of "My new parameter group."

Include the following required parameters:

• --db-parameter-group-name
• --db-parameter-group-family
• --description

To list all of the available parameter group families, use the following command:

aws rds describe-db-engine-versions --query "DBEngineVersions[].DBParameterGroupFamily"

Note
The output contains duplicates.


Example
For Linux, macOS, or Unix:

aws rds create-db-parameter-group \


--db-parameter-group-name mydbparametergroup \
--db-parameter-group-family MySQL8.0 \
--description "My new parameter group"

For Windows:

aws rds create-db-parameter-group ^


--db-parameter-group-name mydbparametergroup ^
--db-parameter-group-family MySQL8.0 ^
--description "My new parameter group"

This command produces output similar to the following:

DBPARAMETERGROUP mydbparametergroup mysql8.0 My new parameter group

RDS API

To create a DB parameter group, use the RDS API CreateDBParameterGroup operation.

Include the following required parameters:

• DBParameterGroupName
• DBParameterGroupFamily
• Description
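
A minimal Python (boto3) sketch of the same call, reusing the example name and description from the
CLI section above:

import boto3

# A minimal sketch of CreateDBParameterGroup.
rds = boto3.client("rds", region_name="us-east-1")

response = rds.create_db_parameter_group(
    DBParameterGroupName="mydbparametergroup",
    DBParameterGroupFamily="mysql8.0",
    Description="My new parameter group",
)
print(response["DBParameterGroup"]["DBParameterGroupArn"])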

Associating a DB parameter group with a DB instance


You can create your own DB parameter groups with customized settings. You can associate a DB
parameter group with a DB instance using the AWS Management Console, the AWS CLI, or the RDS API.
You can do so when you create or modify a DB instance.

For information about creating a DB parameter group, see Creating a DB parameter group (p. 350).
For information about creating a DB instance, see Creating an Amazon RDS DB instance (p. 300). For
information about modifying a DB instance, see Modifying an Amazon RDS DB instance (p. 401).
Note
When you associate a new DB parameter group with a DB instance, the modified static and
dynamic parameters are applied only after the DB instance is rebooted. However, if you modify
dynamic parameters in the DB parameter group after you associate it with the DB instance,
these changes are applied immediately without a reboot.

Console

To associate a DB parameter group with a DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance that you want to
modify.
3. Choose Modify. The Modify DB Instance page appears.
4. Change the DB parameter group setting.


5. Choose Continue and check the summary of modifications.


6. (Optional) Choose Apply immediately to apply the changes immediately. Choosing this option
can cause an outage in some cases. For more information, see Using the Apply Immediately
setting (p. 402).
7. On the confirmation page, review your changes. If they are correct, choose Modify DB instance to
save your changes.

Or choose Back to edit your changes or Cancel to cancel your changes.

AWS CLI
To associate a DB parameter group with a DB instance, use the AWS CLI modify-db-instance
command with the following options:

• --db-instance-identifier
• --db-parameter-group-name

The following example associates the mydbpg DB parameter group with the database-1 DB
instance. The changes are applied immediately by using --apply-immediately. Use --no-apply-
immediately to apply the changes during the next maintenance window. For more information, see
Using the Apply Immediately setting (p. 402).

Example
For Linux, macOS, or Unix:

aws rds modify-db-instance \


--db-instance-identifier database-1 \
--db-parameter-group-name mydbpg \
--apply-immediately

For Windows:

aws rds modify-db-instance ^


--db-instance-identifier database-1 ^
--db-parameter-group-name mydbpg ^
--apply-immediately

RDS API
To associate a DB parameter group with a DB instance, use the RDS API ModifyDBInstance operation
with the following parameters:

• DBInstanceIdentifier
• DBParameterGroupName
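
A minimal Python (boto3) sketch of the same association, reusing the identifiers from the CLI example
above:

import boto3

# A minimal sketch: associate a DB parameter group with a DB instance.
rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_instance(
    DBInstanceIdentifier="database-1",
    DBParameterGroupName="mydbpg",
    ApplyImmediately=True,  # use False to wait for the next maintenance window
)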

Modifying parameters in a DB parameter group


You can modify parameter values in a customer-created DB parameter group; you can't change the
parameter values in a default DB parameter group. Changes to parameters in a customer-created DB
parameter group are applied to all DB instances that are associated with the DB parameter group.

Changes to some parameters are applied to the DB instance immediately without a reboot. Changes
to other parameters are applied only after the DB instance is rebooted. The RDS console shows the
status of the DB parameter group associated with a DB instance on the Configuration tab. For example,
suppose that the DB instance isn't using the latest changes to its associated DB parameter group. If so,
the RDS console shows the DB parameter group with a status of pending-reboot. To apply the latest
parameter changes to that DB instance, manually reboot the DB instance.

Console

To modify a DB parameter group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. In the list, choose the parameter group that you want to modify.
4. For Parameter group actions, choose Edit.
5. Change the values of the parameters that you want to modify. You can scroll through the
parameters using the arrow keys at the top right of the dialog box.

You can't change values in a default parameter group.


6. Choose Save changes.

AWS CLI

To modify a DB parameter group, use the AWS CLI modify-db-parameter-group command with the
following required options:


• --db-parameter-group-name
• --parameters

The following example modifies the max_connections and max_allowed_packet values in the DB
parameter group named mydbparametergroup.

Example

For Linux, macOS, or Unix:

aws rds modify-db-parameter-group \


--db-parameter-group-name mydbparametergroup \
--parameters "ParameterName=max_connections,ParameterValue=250,ApplyMethod=immediate" \

"ParameterName=max_allowed_packet,ParameterValue=1024,ApplyMethod=immediate"

For Windows:

aws rds modify-db-parameter-group ^


--db-parameter-group-name mydbparametergroup ^
--parameters "ParameterName=max_connections,ParameterValue=250,ApplyMethod=immediate" ^

"ParameterName=max_allowed_packet,ParameterValue=1024,ApplyMethod=immediate"

The command produces output like the following:

DBPARAMETERGROUP mydbparametergroup

RDS API

To modify a DB parameter group, use the RDS API ModifyDBParameterGroup operation with the
following required parameters:

• DBParameterGroupName
• Parameters

Resetting parameters in a DB parameter group to their default values

You can reset parameter values in a customer-created DB parameter group to their default values.
Changes to parameters in a customer-created DB parameter group are applied to all DB instances that
are associated with the DB parameter group.

When you use the console, you can reset specific parameters to their default values. However, you can't
easily reset all of the parameters in the DB parameter group at once. When you use the AWS CLI or RDS
API, you can reset specific parameters to their default values. You can also reset all of the parameters in
the DB parameter group at once.

Changes to some parameters are applied to the DB instance immediately without a reboot. Changes
to other parameters are applied only after the DB instance is rebooted. The RDS console shows the
status of the DB parameter group associated with a DB instance on the Configuration tab. For example,
suppose that the DB instance isn't using the latest changes to its associated DB parameter group. If so,
the RDS console shows the DB parameter group with a status of pending-reboot. To apply the latest
parameter changes to that DB instance, manually reboot the DB instance.


Note
In a default DB parameter group, parameters are always set to their default values.

Console

To reset parameters in a DB parameter group to their default values

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. In the list, choose the parameter group.
4. For Parameter group actions, choose Edit.
5. Choose the parameters that you want to reset to their default values. You can scroll through the
parameters using the arrow keys at the top right of the dialog box.

You can't reset values in a default parameter group.


6. Choose Reset and then confirm by choosing Reset parameters.

AWS CLI

To reset some or all of the parameters in a DB parameter group, use the AWS CLI reset-db-
parameter-group command with the following required option: --db-parameter-group-name.


To reset all of the parameters in the DB parameter group, specify the --reset-all-parameters
option. To reset specific parameters, specify the --parameters option.

The following example resets all of the parameters in the DB parameter group named
mydbparametergroup to their default values.

Example
For Linux, macOS, or Unix:

aws rds reset-db-parameter-group \


--db-parameter-group-name mydbparametergroup \
--reset-all-parameters

For Windows:

aws rds reset-db-parameter-group ^


--db-parameter-group-name mydbparametergroup ^
--reset-all-parameters

The following example resets the max_connections and max_allowed_packet parameters to their
default values in the DB parameter group named mydbparametergroup.

Example
For Linux, macOS, or Unix:

aws rds reset-db-parameter-group \


--db-parameter-group-name mydbparametergroup \
--parameters "ParameterName=max_connections,ApplyMethod=immediate" \
"ParameterName=max_allowed_packet,ApplyMethod=immediate"

For Windows:

aws rds reset-db-parameter-group ^


--db-parameter-group-name mydbparametergroup ^
--parameters "ParameterName=max_connections,ApplyMethod=immediate" ^
"ParameterName=max_allowed_packet,ApplyMethod=immediate"

The command produces output like the following:

DBParameterGroupName mydbparametergroup

RDS API
To reset parameters in a DB parameter group to their default values, use the RDS
API ResetDBParameterGroup command with the following required parameter:
DBParameterGroupName.

To reset all of the parameters in the DB parameter group, set the ResetAllParameters parameter to
true. To reset specific parameters, specify the Parameters parameter.
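
A minimal Python (boto3) sketch of both variants, reusing the group and parameter names from the CLI
examples above:

import boto3

# Minimal sketches of ResetDBParameterGroup.
rds = boto3.client("rds", region_name="us-east-1")

# Reset every parameter in the group to its default value.
rds.reset_db_parameter_group(
    DBParameterGroupName="mydbparametergroup",
    ResetAllParameters=True,
)

# Or reset only specific parameters.
rds.reset_db_parameter_group(
    DBParameterGroupName="mydbparametergroup",
    Parameters=[
        {"ParameterName": "max_connections", "ApplyMethod": "immediate"},
        {"ParameterName": "max_allowed_packet", "ApplyMethod": "immediate"},
    ],
)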

Copying a DB parameter group


You can copy custom DB parameter groups that you create. Copying a parameter group can be a
convenient solution. An example is when you have created a DB parameter group and want to include
most of its custom parameters and values in a new DB parameter group. You can copy a DB parameter
group by using the AWS Management Console. You can also use the AWS CLI copy-db-parameter-group
command or the RDS API CopyDBParameterGroup operation.

After you copy a DB parameter group, wait at least 5 minutes before creating your first DB instance that
uses that DB parameter group as the default parameter group. Doing this allows Amazon RDS to fully
complete the copy action before the parameter group is used. This is especially important for parameters
that are critical when creating the default database for a DB instance. An example is the character set
for the default database defined by the character_set_database parameter. Use the Parameter
Groups option of the Amazon RDS console or the describe-db-parameters command to verify that your
DB parameter group is created.
Note
You can't copy a default parameter group. However, you can create a new parameter group that
is based on a default parameter group.
You can't copy a DB parameter group to a different AWS account or AWS Region.
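
If you script the copy, you can also script the waiting and verification steps described above. The
following Python (boto3) sketch uses illustrative group names; the five-minute pause mirrors the
guidance in this section.

import boto3
import time

# A minimal sketch: copy a DB parameter group, wait, and confirm it exists.
rds = boto3.client("rds", region_name="us-east-1")

rds.copy_db_parameter_group(
    SourceDBParameterGroupIdentifier="mygroup1",
    TargetDBParameterGroupIdentifier="mygroup2",
    TargetDBParameterGroupDescription="DB parameter group 2",
)

# Give Amazon RDS time to finish the copy before using the new group.
time.sleep(300)
copied = rds.describe_db_parameter_groups(DBParameterGroupName="mygroup2")
print(copied["DBParameterGroups"][0]["DBParameterGroupName"])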

Console

To copy a DB parameter group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. In the list, choose the custom parameter group that you want to copy.
4. For Parameter group actions, choose Copy.
5. In New DB parameter group identifier, enter a name for the new parameter group.
6. In Description, enter a description for the new parameter group.
7. Choose Copy.

AWS CLI
To copy a DB parameter group, use the AWS CLI copy-db-parameter-group command with the
following required options:

• --source-db-parameter-group-identifier
• --target-db-parameter-group-identifier
• --target-db-parameter-group-description

The following example creates a new DB parameter group named mygroup2 that is a copy of the DB
parameter group mygroup1.

Example
For Linux, macOS, or Unix:

aws rds copy-db-parameter-group \


--source-db-parameter-group-identifier mygroup1 \
--target-db-parameter-group-identifier mygroup2 \
--target-db-parameter-group-description "DB parameter group 2"

For Windows:

aws rds copy-db-parameter-group ^


--source-db-parameter-group-identifier mygroup1 ^
--target-db-parameter-group-identifier mygroup2 ^
--target-db-parameter-group-description "DB parameter group 2"


RDS API
To copy a DB parameter group, use the RDS API CopyDBParameterGroup operation with the following
required parameters:

• SourceDBParameterGroupIdentifier
• TargetDBParameterGroupIdentifier
• TargetDBParameterGroupDescription

Listing DB parameter groups


You can list the DB parameter groups you've created for your AWS account.
Note
Default parameter groups are automatically created from a default parameter template when
you create a DB instance for a particular DB engine and version. These default parameter
groups contain preferred parameter settings and can't be modified. When you create a custom
parameter group, you can modify parameter settings.

Console

To list all DB parameter groups for an AWS account

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.

The DB parameter groups appear in a list.

AWS CLI
To list all DB parameter groups for an AWS account, use the AWS CLI describe-db-parameter-
groups command.

Example
The following example lists all available DB parameter groups for an AWS account.

aws rds describe-db-parameter-groups

The command returns a response like the following:

DBPARAMETERGROUP default.mysql8.0 mysql8.0 Default parameter group for MySQL8.0


DBPARAMETERGROUP mydbparametergroup mysql8.0 My new parameter group

The following example describes the mydbparamgroup1 parameter group.

For Linux, macOS, or Unix:

aws rds describe-db-parameter-groups \


--db-parameter-group-name mydbparamgroup1

For Windows:

aws rds describe-db-parameter-groups ^


--db-parameter-group-name mydbparamgroup1


The command returns a response like the following:

DBPARAMETERGROUP mydbparamgroup1 mysql8.0 My new parameter group

RDS API
To list all DB parameter groups for an AWS account, use the RDS API DescribeDBParameterGroups
operation.

Viewing parameter values for a DB parameter group


You can get a list of all parameters in a DB parameter group and their values.

Console

To view the parameter values for a DB parameter group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.

The DB parameter groups appear in a list.


3. Choose the name of the parameter group to see its list of parameters.

AWS CLI
To view the parameter values for a DB parameter group, use the AWS CLI describe-db-parameters
command with the following required parameter.

• --db-parameter-group-name

Example
The following example lists the parameters and parameter values for a DB parameter group named
mydbparametergroup.

aws rds describe-db-parameters --db-parameter-group-name mydbparametergroup

The command returns a response like the following:

DBPARAMETER  Parameter Name            Parameter Value  Source          Data Type  Apply Type  Is Modifiable
DBPARAMETER  allow-suspicious-udfs                      engine-default  boolean    static      false
DBPARAMETER  auto_increment_increment                   engine-default  integer    dynamic     true
DBPARAMETER  auto_increment_offset                      engine-default  integer    dynamic     true
DBPARAMETER  binlog_cache_size         32768            system          integer    dynamic     true
DBPARAMETER  socket                    /tmp/mysql.sock  system          string     static      false

RDS API
To view the parameter values for a DB parameter group, use the RDS API DescribeDBParameters
command with the following required parameter.


• DBParameterGroupName
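
Because a parameter group can contain hundreds of parameters, the API response is paginated. The
following Python (boto3) sketch pages through DescribeDBParameters and prints only the modifiable
parameters; the group name is the example value used earlier.

import boto3

# A minimal sketch: page through DescribeDBParameters and list modifiable parameters.
rds = boto3.client("rds", region_name="us-east-1")

paginator = rds.get_paginator("describe_db_parameters")
for page in paginator.paginate(DBParameterGroupName="mydbparametergroup"):
    for parameter in page["Parameters"]:
        if parameter.get("IsModifiable"):
            print(parameter["ParameterName"], parameter.get("ParameterValue"), parameter.get("ApplyType"))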

Working with DB cluster parameter groups for Multi-AZ DB clusters

Multi-AZ DB clusters use DB cluster parameter groups. The following sections describe configuring and
managing DB cluster parameter groups.

Topics
• Creating a DB cluster parameter group (p. 360)
• Modifying parameters in a DB cluster parameter group (p. 362)
• Resetting parameters in a DB cluster parameter group (p. 363)
• Copying a DB cluster parameter group (p. 364)
• Listing DB cluster parameter groups (p. 366)
• Viewing parameter values for a DB cluster parameter group (p. 367)

Creating a DB cluster parameter group


You can create a new DB cluster parameter group using the AWS Management Console, the AWS CLI, or
the RDS API.

After you create a DB cluster parameter group, wait at least 5 minutes before creating a DB cluster that
uses that DB cluster parameter group. Doing this allows Amazon RDS to fully create the parameter group
before it is used by the new DB cluster. You can use the Parameter groups page in the Amazon RDS
console or the describe-db-cluster-parameters command to verify that your DB cluster parameter group
is created.

The following limitations apply to the DB cluster parameter group name:

• The name must be 1 to 255 letters, numbers, or hyphens.

Default parameter group names can include a period, such as default.aurora-mysql5.7. However,
custom parameter group names can't include a period.
• The first character must be a letter.
• The name can't end with a hyphen or contain two consecutive hyphens.

Console

To create a DB cluster parameter group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. Choose Create parameter group.

The Create parameter group window appears.


4. In the Parameter group family list, select a DB parameter group family.
5. In the Type list, select DB Cluster Parameter Group.
6. In the Group name box, enter the name of the new DB cluster parameter group.
7. In the Description box, enter a description for the new DB cluster parameter group.


8. Choose Create.

AWS CLI

To create a DB cluster parameter group, use the AWS CLI create-db-cluster-parameter-group command.

The following example creates a DB cluster parameter group named mydbclusterparametergroup for RDS
for MySQL version 8.0 with a description of "My new cluster parameter group."

Include the following required parameters:

• --db-cluster-parameter-group-name
• --db-parameter-group-family
• --description

To list all of the available parameter group families, use the following command:

aws rds describe-db-engine-versions --query "DBEngineVersions[].DBParameterGroupFamily"

Note
The output contains duplicates.

Example

For Linux, macOS, or Unix:

aws rds create-db-cluster-parameter-group \


--db-cluster-parameter-group-name mydbclusterparametergroup \
--db-parameter-group-family mysql8.0 \
--description "My new cluster parameter group"

For Windows:

aws rds create-db-cluster-parameter-group ^


--db-cluster-parameter-group-name mydbclusterparametergroup ^
--db-parameter-group-family mysql8.0 ^
--description "My new cluster parameter group"

This command produces output similar to the following:

{
    "DBClusterParameterGroup": {
        "DBClusterParameterGroupName": "mydbclusterparametergroup",
        "DBParameterGroupFamily": "mysql8.0",
        "Description": "My new cluster parameter group",
        "DBClusterParameterGroupArn": "arn:aws:rds:us-east-1:123456789012:cluster-pg:mydbclusterparametergroup"
    }
}

RDS API

To create a DB cluster parameter group, use the RDS API CreateDBClusterParameterGroup action.

Include the following required parameters:


• DBClusterParameterGroupName
• DBParameterGroupFamily
• Description

Modifying parameters in a DB cluster parameter group


You can modify parameter values in a customer-created DB cluster parameter group. You can't change
the parameter values in a default DB cluster parameter group. Changes to parameters in a customer-
created DB cluster parameter group are applied to all DB clusters that are associated with the DB cluster
parameter group.

Console

To modify a DB cluster parameter group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. In the list, choose the parameter group that you want to modify.
4. For Parameter group actions, choose Edit.
5. Change the values of the parameters you want to modify. You can scroll through the parameters
using the arrow keys at the top right of the dialog box.

You can't change values in a default parameter group.


6. Choose Save changes.
7. Reboot the primary DB instance in the cluster to apply the changes to all of the DB instances in the
cluster.

AWS CLI

To modify a DB cluster parameter group, use the AWS CLI modify-db-cluster-parameter-group command
with the following required parameters:

• --db-cluster-parameter-group-name
• --parameters

The following example modifies the server_audit_logging and server_audit_logs_upload values in the
DB cluster parameter group named mydbclusterparametergroup.

Example

For Linux, macOS, or Unix:

aws rds modify-db-cluster-parameter-group \


--db-cluster-parameter-group-name mydbclusterparametergroup \
--parameters "ParameterName=server_audit_logging,ParameterValue=1,ApplyMethod=immediate" \

"ParameterName=server_audit_logs_upload,ParameterValue=1,ApplyMethod=immediate"

For Windows:

aws rds modify-db-cluster-parameter-group ^


--db-cluster-parameter-group-name mydbclusterparametergroup ^
--parameters "ParameterName=server_audit_logging,ParameterValue=1,ApplyMethod=immediate" ^

"ParameterName=server_audit_logs_upload,ParameterValue=1,ApplyMethod=immediate"

The command produces output like the following:

DBCLUSTERPARAMETERGROUP mydbclusterparametergroup

RDS API

To modify a DB cluster parameter group, use the RDS API ModifyDBClusterParameterGroup command
with the following required parameters:

• DBClusterParameterGroupName
• Parameters

Resetting parameters in a DB cluster parameter group


You can reset parameters to their default values in a customer-created DB cluster parameter group.
Changes to parameters in a customer-created DB cluster parameter group are applied to all DB clusters
that are associated with the DB cluster parameter group.
Note
In a default DB cluster parameter group, parameters are always set to their default values.

Console

To reset parameters in a DB cluster parameter group to their default values

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. In the list, choose the parameter group.
4. For Parameter group actions, choose Edit.
5. Choose the parameters that you want to reset to their default values. You can scroll through the
parameters using the arrow keys at the top right of the dialog box.

You can't reset values in a default parameter group.


6. Choose Reset and then confirm by choosing Reset parameters.
7. Reboot the primary DB instance in the DB cluster to apply the changes to all of the DB instances in
the DB cluster.

AWS CLI

To reset parameters in a DB cluster parameter group to their default values, use the AWS CLI reset-
db-cluster-parameter-group command with the following required option: --db-cluster-
parameter-group-name.

To reset all of the parameters in the DB cluster parameter group, specify the --reset-all-
parameters option. To reset specific parameters, specify the --parameters option.

The following example resets all of the parameters in the DB parameter group named
mydbparametergroup to their default values.


Example
For Linux, macOS, or Unix:

aws rds reset-db-cluster-parameter-group \


--db-cluster-parameter-group-name mydbparametergroup \
--reset-all-parameters

For Windows:

aws rds reset-db-cluster-parameter-group ^


--db-cluster-parameter-group-name mydbparametergroup ^
--reset-all-parameters

The following example resets the server_audit_logging and server_audit_logs_upload parameters to
their default values in the DB cluster parameter group named mydbclusterparametergroup.

Example
For Linux, macOS, or Unix:

aws rds reset-db-cluster-parameter-group \


--db-cluster-parameter-group-name mydbclusterparametergroup \
--parameters "ParameterName=server_audit_logging,ApplyMethod=immediate" \
"ParameterName=server_audit_logs_upload,ApplyMethod=immediate"

For Windows:

aws rds reset-db-cluster-parameter-group ^


--db-cluster-parameter-group-name mydbclusterparametergroup ^
--parameters "ParameterName=server_audit_logging,ApplyMethod=immediate" ^
"ParameterName=server_audit_logs_upload,ApplyMethod=immediate"

The command produces output like the following:

DBClusterParameterGroupName mydbclusterparametergroup

RDS API
To reset parameters in a DB cluster parameter group to their default values, use the RDS
API ResetDBClusterParameterGroup command with the following required parameter:
DBClusterParameterGroupName.

To reset all of the parameters in the DB cluster parameter group, set the ResetAllParameters
parameter to true. To reset specific parameters, specify the Parameters parameter.

Copying a DB cluster parameter group


You can copy custom DB cluster parameter groups that you create. Copying a parameter group is a
convenient solution when you have already created a DB cluster parameter group and you want to
include most of the custom parameters and values from that group in a new DB cluster parameter group.
You can copy a DB cluster parameter group by using the AWS CLI copy-db-cluster-parameter-group
command or the RDS API CopyDBClusterParameterGroup operation.

After you copy a DB cluster parameter group, wait at least 5 minutes before creating a DB cluster that
uses that DB cluster parameter group. Doing this allows Amazon RDS to fully copy the parameter group
before it is used by the new DB cluster. You can use the Parameter groups page in the Amazon RDS
console or the describe-db-cluster-parameters command to verify that your DB cluster parameter group
is created.
Note
You can't copy a default parameter group. However, you can create a new parameter group that
is based on a default parameter group.
You can't copy a DB cluster parameter group to a different AWS account or AWS Region.

Console

To copy a DB cluster parameter group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. In the list, choose the custom parameter group that you want to copy.
4. For Parameter group actions, choose Copy.
5. In New DB parameter group identifier, enter a name for the new parameter group.
6. In Description, enter a description for the new parameter group.
7. Choose Copy.

AWS CLI

To copy a DB cluster parameter group, use the AWS CLI copy-db-cluster-parameter-group command
with the following required parameters:

• --source-db-cluster-parameter-group-identifier
• --target-db-cluster-parameter-group-identifier
• --target-db-cluster-parameter-group-description

The following example creates a new DB cluster parameter group named mygroup2 that is a copy of the
DB cluster parameter group mygroup1.

Example

For Linux, macOS, or Unix:

aws rds copy-db-cluster-parameter-group \


--source-db-cluster-parameter-group-identifier mygroup1 \
--target-db-cluster-parameter-group-identifier mygroup2 \
--target-db-cluster-parameter-group-description "DB parameter group 2"

For Windows:

aws rds copy-db-cluster-parameter-group ^


--source-db-cluster-parameter-group-identifier mygroup1 ^
--target-db-cluster-parameter-group-identifier mygroup2 ^
--target-db-cluster-parameter-group-description "DB parameter group 2"

RDS API

To copy a DB cluster parameter group, use the RDS API CopyDBClusterParameterGroup operation
with the following required parameters:


• SourceDBClusterParameterGroupIdentifier
• TargetDBClusterParameterGroupIdentifier
• TargetDBClusterParameterGroupDescription

Listing DB cluster parameter groups


You can list the DB cluster parameter groups you've created for your AWS account.
Note
Default parameter groups are automatically created from a default parameter template
when you create a DB cluster for a particular DB engine and version. These default parameter
groups contain preferred parameter settings and can't be modified. When you create a custom
parameter group, you can modify parameter settings.

Console

To list all DB cluster parameter groups for an AWS account

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.

The DB cluster parameter groups appear in the list with DB cluster parameter group for Type.

AWS CLI

To list all DB cluster parameter groups for an AWS account, use the AWS CLI describe-db-cluster-
parameter-groups command.

Example

The following example lists all available DB cluster parameter groups for an AWS account.

aws rds describe-db-cluster-parameter-groups

The following example describes the mydbclusterparametergroup parameter group.

For Linux, macOS, or Unix:

aws rds describe-db-cluster-parameter-groups \


--db-cluster-parameter-group-name mydbclusterparametergroup

For Windows:

aws rds describe-db-cluster-parameter-groups ^


--db-cluster-parameter-group-name mydbclusterparametergroup

The command returns a response like the following:

{
    "DBClusterParameterGroups": [
        {
            "DBClusterParameterGroupName": "mydbclusterparametergroup",
            "DBParameterGroupFamily": "mysql8.0",
            "Description": "My new cluster parameter group",
            "DBClusterParameterGroupArn": "arn:aws:rds:us-east-1:123456789012:cluster-pg:mydbclusterparametergroup"
        }
    ]
}
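
If your account has many parameter groups, you can narrow this output with the AWS CLI's general-purpose --query and --output options (standard CLI features, not specific to this command). The following is a sketch that lists only each group's name and family:

# List only the name and family of each DB cluster parameter group
aws rds describe-db-cluster-parameter-groups \
    --query 'DBClusterParameterGroups[*].[DBClusterParameterGroupName,DBParameterGroupFamily]' \
    --output table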

RDS API

To list all DB cluster parameter groups for an AWS account, use the RDS API
DescribeDBClusterParameterGroups action.

Viewing parameter values for a DB cluster parameter group


You can get a list of all parameters in a DB cluster parameter group and their values.

Console

To view the parameter values for a DB cluster parameter group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.

The DB cluster parameter groups appear in the list with DB cluster parameter group for Type.
3. Choose the name of the DB cluster parameter group to see its list of parameters.

AWS CLI

To view the parameter values for a DB cluster parameter group, use the AWS CLI describe-db-cluster-parameters command with the following required parameter.

• --db-cluster-parameter-group-name

Example

The following example lists the parameters and parameter values for a DB cluster parameter group
named mydbclusterparametergroup, in JSON format.

aws rds describe-db-cluster-parameters --db-cluster-parameter-group-name mydbclusterparametergroup

The command returns a response like the following:

{
"Parameters": [
{
"ParameterName": "activate_all_roles_on_login",
"ParameterValue": "0",
"Description": "Automatically set all granted roles as active after the user
has authenticated successfully.",
"Source": "engine-default",
"ApplyType": "dynamic",
"DataType": "boolean",
"AllowedValues": "0,1",
"IsModifiable": true,
"ApplyMethod": "pending-reboot",

367
Amazon Relational Database Service User Guide
Comparing parameter groups

"SupportedEngineModes": [
"provisioned"
]
},
{
"ParameterName": "allow-suspicious-udfs",
"Description": "Controls whether user-defined functions that have only an xxx
symbol for the main function can be loaded",
"Source": "engine-default",
"ApplyType": "static",
"DataType": "boolean",
"AllowedValues": "0,1",
"IsModifiable": false,
"ApplyMethod": "pending-reboot",
"SupportedEngineModes": [
"provisioned"
]
},
...

RDS API

To view the parameter values for a DB cluster parameter group, use the RDS API
DescribeDBClusterParameters command with the following required parameter.

• DBClusterParameterGroupName

In some cases, the allowed values for a parameter aren't shown. These are always parameters where the
source is the database engine default.

To view the values of these parameters, you can run the following SQL statements:

• MySQL:

-- Show the value of a particular parameter
mysql$ SHOW VARIABLES LIKE '%parameter_name%';

-- Show the values of all parameters
mysql$ SHOW VARIABLES;

• PostgreSQL:

-- Show the value of a particular parameter
postgresql=> SHOW parameter_name;

-- Show the values of all parameters
postgresql=> SHOW ALL;
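
You can also narrow the AWS CLI output to a single parameter instead of paging through the full list. The following sketch uses the standard --query option with the describe-db-cluster-parameters command shown earlier; the group name and max_connections are placeholders for your own group and parameter:

# Show only the max_connections entry for a DB cluster parameter group
aws rds describe-db-cluster-parameters \
    --db-cluster-parameter-group-name mydbclusterparametergroup \
    --query "Parameters[?ParameterName=='max_connections']"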

Comparing parameter groups


You can use the AWS Management Console to view the differences between two parameter groups.

To compare two parameter groups

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. In the list, choose the two parameter groups that you want to compare.

4. For Parameter group actions, choose Compare.


Note
The specified parameter groups must both be DB parameter groups, or they both must
be DB cluster parameter groups. This is true even when the DB engine and version are the
same. For example, you can't compare an Aurora MySQL 8.0 DB parameter group and an
Aurora MySQL 8.0 DB cluster parameter group.
You can compare Aurora MySQL and RDS for MySQL DB parameter groups, even for
different versions, but you can't compare Aurora PostgreSQL and RDS for PostgreSQL DB
parameter groups.
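
If you prefer a command-line approximation of this comparison, one approach is to dump the parameters of both groups and compare the files yourself. The following sketch assumes two DB cluster parameter groups named mygroup1 and mygroup2 (placeholders) and uses only standard AWS CLI and operating system commands:

# Dump each group's parameters to a file, then compare the files
aws rds describe-db-cluster-parameters \
    --db-cluster-parameter-group-name mygroup1 --output json > mygroup1.json
aws rds describe-db-cluster-parameters \
    --db-cluster-parameter-group-name mygroup2 --output json > mygroup2.json
diff mygroup1.json mygroup2.json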

Specifying DB parameters
DB parameter types include the following:

• Integer
• Boolean
• String
• Long
• Double
• Timestamp
• Object of other defined data types
• Array of values of type integer, Boolean, string, long, double, timestamp, or object

You can also specify integer and Boolean parameters using expressions, formulas, and functions.

For the Oracle engine, you can use the DBInstanceClassHugePagesDefault formula variable to
specify a Boolean DB parameter. See DB parameter formula variables (p. 370).

For the PostgreSQL engine, you can use an expression to specify a Boolean DB parameter. See Boolean
DB parameter expressions (p. 371).

Contents
• DB parameter formulas (p. 369)
• DB parameter formula variables (p. 370)
• DB parameter formula operators (p. 370)
• DB parameter functions (p. 371)
• Boolean DB parameter expressions (p. 371)
• DB parameter log expressions (p. 372)
• DB parameter value examples (p. 373)

DB parameter formulas
A DB parameter formula is an expression that resolves to an integer value or a Boolean value. You
enclose the expression in braces: {}. You can use a formula for either a DB parameter value or as an
argument to a DB parameter function.

Syntax

{FormulaVariable}
{FormulaVariable*Integer}
{FormulaVariable*Integer/Integer}
{FormulaVariable/Integer}
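
For example, the following AWS CLI sketch applies a formula of this form to the MySQL innodb_buffer_pool_size parameter in a custom DB parameter group (mycustomgroup is a placeholder; the formula shown mirrors the value commonly used as the RDS for MySQL default):

# Set innodb_buffer_pool_size to three quarters of the instance class memory
aws rds modify-db-parameter-group \
    --db-parameter-group-name mycustomgroup \
    --parameters "ParameterName=innodb_buffer_pool_size,ParameterValue={DBInstanceClassMemory*3/4},ApplyMethod=pending-reboot"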

DB parameter formula variables


Each formula variable returns an integer or a Boolean value. The names of the variables are case-
sensitive.

AllocatedStorage

Returns an integer representing the size, in bytes, of the data volume.


DBInstanceClassHugePagesDefault

Returns a Boolean value. Currently, it's only supported for Oracle engines.

For more information, see Turning on HugePages for an RDS for Oracle instance (p. 1942).
DBInstanceClassMemory

Returns an integer for the number of bytes of memory available to the database process. This
number is internally calculated by starting with the total amount of memory for the DB instance
class. From this, the calculation subtracts memory reserved for the operating system and the RDS
processes that manage the instance. Therefore, the number is always somewhat lower than the
memory figures shown in the instance class tables in DB instance classes (p. 11). The exact value
depends on a combination of factors. These include instance class, DB engine, and whether it applies
to an RDS instance or an instance that's part of an Aurora cluster.
DBInstanceVCPU

Returns an integer representing the number of virtual central processing units (vCPUs) used by
Amazon RDS to manage the instance. Currently, it's only supported for the PostgreSQL engine.
EndPointPort

Returns an integer representing the port used when connecting to the DB instance.
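
Several engine defaults are expressed with these variables. As an illustration, the following AWS CLI sketch shows the default max_connections value for the default MySQL 8.0 DB parameter group, which is typically a formula based on DBInstanceClassMemory (for example, {DBInstanceClassMemory/12582880}):

# Show the default max_connections formula for MySQL 8.0
aws rds describe-db-parameters \
    --db-parameter-group-name default.mysql8.0 \
    --query "Parameters[?ParameterName=='max_connections'].ParameterValue"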

DB parameter formula operators


DB parameter formulas support two operators: division and multiplication.

Division operator: /

Divides the dividend by the divisor, returning an integer quotient. Decimals in the quotient are
truncated, not rounded.

Syntax

dividend / divisor

The dividend and divisor arguments must be integer expressions.


Multiplication operator: *

Multiplies the expressions, returning the product of the expressions. Decimals in the expressions are
truncated, not rounded.

Syntax

expression * expression

Both expressions must be integers.

DB parameter functions
You specify the arguments of DB parameter functions as either integers or formulas. Each function must
have at least one argument. Specify multiple arguments as a comma-separated list. The list can't have
any empty members, such as argument1,,argument3. Function names are case-insensitive.

IF

Returns an argument.

Currently, it's only supported for Oracle engines, and the only supported first argument is
{DBInstanceClassHugePagesDefault}. For more information, see Turning on HugePages for an
RDS for Oracle instance (p. 1942).

Syntax

IF(argument1, argument2, argument3)

Returns the second argument if the first argument evaluates to true. Returns the third argument
otherwise.
GREATEST

Returns the largest value from a list of integers or parameter formulas.

Syntax

GREATEST(argument1, argument2,...argumentn)

Returns an integer.
LEAST

Returns the smallest value from a list of integers or parameter formulas.

Syntax

LEAST(argument1, argument2,...argumentn)

Returns an integer.
SUM

Adds the values of the specified integers or parameter formulas.

Syntax

SUM(argument1, argument2,...argumentn)

Returns an integer.

Boolean DB parameter expressions


A Boolean DB parameter expression resolves to a Boolean value of 1 or 0. The expression is enclosed in
quotation marks.
Note
Boolean DB parameter expressions are only supported for the PostgreSQL engine.

Syntax

"expression operator expression"

Both expressions must resolve to integers. An expression can be the following:


• integer constant
• DB parameter formula
• DB parameter function
• DB parameter variable

Boolean DB parameter expressions support the following inequality operators:

The greater than operator: >

Syntax

"expression > expression"

The less than operator: <

Syntax

"expression < expression"

The greater than or equal to operators: >=, =>

Syntax

"expression >= expression"


"expression => expression"

The less than or equal to operators: <=, =<

Syntax

"expression <= expression"


"expression =< expression"

Example using a Boolean DB parameter expression

The following Boolean DB parameter expression example compares the result of a parameter formula
with an integer. It does so to modify the Boolean DB parameter wal_compression for a PostgreSQL DB
instance. The parameter expression compares the number of vCPUs with the value 2. If the number of
vCPUs is greater than 2, then the wal_compression DB parameter is set to true.

aws rds modify-db-parameter-group --db-parameter-group-name group-name \
    --parameters "ParameterName=wal_compression,ParameterValue=\"{DBInstanceVCPU} > 2\" "

DB parameter log expressions


You can set an integer DB parameter value to a log expression. You enclose the expression in braces: {}.
For example:

{log(DBInstanceClassMemory/8187281418)*1000}

The log function represents log base 2. This example also uses the DBInstanceClassMemory formula
variable. See DB parameter formula variables (p. 370).
Note
Currently, you can't specify the MySQL innodb_log_file_size parameter with any value
other than an integer.

DB parameter value examples


These examples show using formulas, functions, and expressions for the values of DB parameters.
Note
DB Parameter functions are currently supported only in the console and aren't supported in the
AWS CLI.
Warning
Improperly setting parameters in a DB parameter group can have unintended adverse effects.
These might include degraded performance and system instability. Use caution when modifying
database parameters and back up your data before modifying your DB parameter group. Try out
parameter group changes on a test DB instance, created using point-in-time-restores, before
applying those parameter group changes to your production DB instances.

Example using the DB parameter function GREATEST

You can specify the GREATEST function in an Oracle processes parameter. Use it to set the number of
user processes to the larger of either 80 or DBInstanceClassMemory divided by 9,868,951.

GREATEST({DBInstanceClassMemory/9868951},80)

Example using the DB parameter function LEAST

You can specify the LEAST function in a MySQL max_binlog_cache_size parameter value. Use it to set the maximum cache size a transaction can use in a MySQL instance to the lesser of 10 MB (10485760 bytes) or DBInstanceClassMemory/256.

LEAST({DBInstanceClassMemory/256},10485760)

Creating an Amazon ElastiCache cluster using Amazon RDS DB instance settings
ElastiCache is a fully managed, in-memory caching service that provides microsecond read and write
latencies that support flexible, real-time use cases. ElastiCache can help you accelerate application and
database performance. You can use ElastiCache as a primary data store for use cases that don't require
data durability, such as gaming leaderboards, streaming, and data analytics. ElastiCache helps remove
the complexity associated with deploying and managing a distributed computing environment. For more
information, see Common ElastiCache Use Cases and How ElastiCache Can Help for Memcached and
Common ElastiCache Use Cases and How ElastiCache Can Help for Redis. You can use the Amazon RDS
console for creating ElastiCache clusters.

Amazon ElastiCache works with both the Redis and Memcached engines. If you're unsure which engine
you want to use, see Comparing Memcached and Redis. For more information about Amazon ElastiCache,
see the Amazon ElastiCache User Guide.

Topics
• Overview of ElastiCache cluster creation with RDS DB instance settings (p. 374)
• Creating an ElastiCache cluster with settings from a new RDS DB instance (p. 375)
• Creating an ElastiCache cluster with settings from an existing RDS DB instance (p. 377)

Overview of ElastiCache cluster creation with RDS DB instance settings
You can create an ElastiCache cluster from Amazon RDS using the same configuration settings as a newly
created or existing RDS DB instance.

Here are some use cases for associating an ElastiCache cluster with your DB instance:

• You can save costs and improve your performance by using ElastiCache with RDS versus running on
RDS alone.

For example, you can save up to 55% in cost and gain up to 80x faster read performance by using
ElastiCache with RDS for MySQL versus RDS for MySQL alone.
• You can use the ElastiCache cluster as a primary data store for applications that don't require data
durability. Your applications that use Redis or Memcached can use ElastiCache with almost no
modification.

When you create an ElastiCache cluster from RDS, the ElastiCache cluster inherits the following settings
from the associated RDS DB instance:

• ElastiCache connectivity settings


• ElastiCache security settings

You can also set the cluster configuration settings according to your requirements.

Setting up ElastiCache in your applications


Your applications must be set up to utilize ElastiCache clusters. You can also optimize and improve
cluster performance by setting up your applications to use caching strategies depending on your
requirements.

• To access your ElastiCache cluster and get started, see Getting started with Amazon ElastiCache for
Redis and Getting started with Amazon ElastiCache for Memcached.
• For more information about caching strategies, see Caching strategies and best practices for
Memcached and Caching strategies and best practices for Redis.
• For more information about high availability in ElastiCache for Redis clusters, see High availability
using replication groups.
• You might incur costs associated with backup storage, data transfer within or across regions, or use of
AWS Outposts. For pricing details, see Amazon ElastiCache pricing.

Creating an ElastiCache cluster with settings from a new RDS DB instance
After creating a new RDS DB instance in the RDS console, you can create add-ons for your RDS DB
instance from the Suggested add-ons window.

In the Suggested add-ons window, you can create an ElastiCache cluster from RDS with the same
settings as your newly created RDS DB instance.

To create an ElastiCache cluster with settings from a new DB instance

1. To create a DB instance, follow the instructions in Creating an Amazon RDS DB instance (p. 300).
2. After creating a new RDS DB instance, the console displays the Suggested add-ons window. Select
Create an ElastiCache cluster from RDS using your DB settings.

In the ElastiCache configuration section, the Source DB identifier displays which DB instance the
ElastiCache cluster inherits settings from.
3. Choose whether you want to create a Redis or Memcached cluster. For more information, see
Comparing Memcached and Redis.

If you choose Redis cluster, then choose whether you want to keep the cluster mode Enabled or
Disabled. For more information, see Replication: Redis (Cluster Mode Disabled) vs. Redis (Cluster
Mode Enabled).

4. Enter values for Name, Description, and Engine version.

For Engine version, the recommended default value is the latest engine version. You can also choose
an Engine version for the ElastiCache cluster that best meets your requirements.
5. Choose the node type in the Node type option. For more information, see Managing nodes.

If you choose to create a Redis cluster with the Cluster mode set to Enabled, then enter the number
of shards (partitions/node groups) in the Number of shards option.

Enter the number of replicas of each shard in Number of replicas.


Note
The selected node type, the number of shards, and the number of replicas all affect your
cluster performance and resource costs. Be sure these settings match your database needs.
For pricing information, see Amazon ElastiCache pricing.
6. Confirm the ElastiCache connectivity settings.

RDS automatically fills the Port and the Network type. ElastiCache creates an equivalent Subnet
group from the source database. To customize these settings, select Customize your connectivity
settings.

7. Confirm the ElastiCache security settings.

ElastiCache provides the default values for Encryption at rest, Encryption key, Encryption in
transit, Access control, and Security groups. To customize these settings, select Customize your
security settings.

8. Verify the default and inherited settings of your ElastiCache cluster. Some settings can't be changed
after creation.
Note
RDS might adjust the backup window of your ElastiCache cluster to meet the minimum
window requirement of 60 minutes. The backup window of your source database remains
the same.
9. When you're ready, choose Create ElastiCache cluster.

The console displays a confirmation banner for the ElastiCache cluster creation. Follow the link in the
banner to the ElastiCache console to view the cluster details. The ElastiCache console displays the newly
created ElastiCache cluster.
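
If you prefer to confirm the new cluster from the command line instead of the ElastiCache console, you can use the ElastiCache CLI. The following is a sketch; my-elasticache-cluster is a placeholder for the name that you entered.

# Show the new cluster and its nodes
aws elasticache describe-cache-clusters \
    --cache-cluster-id my-elasticache-cluster \
    --show-cache-node-info

For a Redis cluster with cluster mode enabled, you might use the describe-replication-groups command instead.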

Creating an ElastiCache cluster with settings from an existing RDS DB instance
You can create an ElastiCache cluster for your existing RDS DB instances from the Actions dropdown
menu in the console.

To create an ElastiCache cluster with settings from an existing DB instance

1. In the Databases page, select the required DB instance.


2. In the Actions dropdown menu, choose Create ElastiCache cluster to create an ElastiCache cluster
in RDS that has the same settings as your existing RDS DB instance.

In the ElastiCache configuration section, the Source DB identifier shows which DB instance the
ElastiCache cluster inherits settings from.
3. Choose whether you want to create a Redis or Memcached cluster. For more information, see
Comparing Memcached and Redis.

If you choose Redis cluster, then choose whether you want to keep the cluster mode Enabled or
Disabled. For more information, see Replication: Redis (Cluster Mode Disabled) vs. Redis (Cluster
Mode Enabled).

4. Enter values for Name, Description, and Engine version.

For Engine version, the recommended default value is the latest engine version. You can also choose
an Engine version for the ElastiCache cluster that best meets your requirements.
5. Choose the node type in the Node type option. For more information, see Managing nodes.

If you choose to create a Redis cluster with the Cluster mode set to Enabled, then enter the number
of shards (partitions/node groups) in the Number of shards option.

Enter the number of replicas of each shard in Number of replicas.


Note
The selected node type, the number of shards, and the number of replicas all affect your
cluster performance and resource costs. Be sure these settings match your database needs.
For pricing information, see Amazon ElastiCache pricing.
6. Confirm the ElastiCache connectivity settings.

RDS automatically fills the Port and the Network type. ElastiCache creates an equivalent Subnet
group from the source database. To customize these settings, select Customize your connectivity
settings.

7. Confirm the ElastiCache security settings.

ElastiCache provides the default values for Encryption at rest, Encryption key, Encryption in
transit, Access control, and Security groups. To customize these settings, select Customize your
security settings.

8. Verify the default and inherited settings of your ElastiCache cluster. Some settings can't be changed
after creation.
Note
RDS might adjust the backup window of your ElastiCache cluster to meet the minimum
window requirement of 60 minutes. The backup window of your source database remains
the same.
9. When you're ready, choose Create ElastiCache cluster.

The console displays a confirmation banner for the ElastiCache cluster creation. Follow the link in the
banner to the ElastiCache console to view the cluster details. The ElastiCache console displays the newly
created ElastiCache cluster.

Managing an Amazon RDS DB instance
Following, you can find instructions for managing and maintaining your Amazon RDS DB instance.

Topics
• Stopping an Amazon RDS DB instance temporarily (p. 381)
• Starting an Amazon RDS DB instance that was previously stopped (p. 384)
• Automatically connecting an AWS compute resource and a DB instance (p. 385)
• Modifying an Amazon RDS DB instance (p. 401)
• Maintaining a DB instance (p. 418)
• Upgrading a DB instance engine version (p. 429)
• Renaming a DB instance (p. 434)
• Rebooting a DB instance (p. 436)
• Working with DB instance read replicas (p. 438)
• Tagging Amazon RDS resources (p. 461)
• Working with Amazon Resource Names (ARNs) in Amazon RDS (p. 471)
• Working with storage for Amazon RDS DB instances (p. 478)
• Deleting a DB instance (p. 489)

Stopping an Amazon RDS DB instance temporarily


Suppose that you use a DB instance intermittently, for temporary testing, or for a daily development
activity. If so, you can stop your Amazon RDS DB instance temporarily to save money. While your DB
instance is stopped, you are charged for provisioned storage (including Provisioned IOPS). You're also
charged for backup storage, including manual snapshots and automated backups within your specified
retention window. However, you're not charged for DB instance hours. For more information, see Billing
FAQs.
Note
In some cases, a large amount of time is required to stop a DB instance. If you want to stop your
DB instance and restart it immediately, you can reboot the DB instance. For information about
rebooting a DB instance, see Rebooting a DB instance (p. 436).

Supported DB engines, instance classes, and Regions


You can stop and start Amazon RDS DB instances that are running the following DB engines:

• MariaDB
• Microsoft SQL Server, including RDS Custom for SQL Server.
• MySQL
• Oracle
• PostgreSQL

Stopping and starting a DB instance is supported for all DB instance classes, and in all AWS Regions.

Stopping a DB instance in a Multi-AZ deployment


You can stop and start a DB instance whether it is configured for a single Availability Zone or Multi-AZ.
For Multi-AZ, your database engine must support Multi-AZ deployments. For more information, see Multi-AZ DB clusters (p. 147).

For a Multi-AZ deployment, a long time might be required to stop a DB instance. If you have at least one
backup after a previous failover, then you can speed up the stop DB instance operation. To do so, before
stopping the DB instance, perform a reboot with failover operation.
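
A reboot with failover can be performed from the console or with the AWS CLI. The following is a sketch; mydbinstance is a placeholder:

# Fail over to the standby before stopping a Multi-AZ DB instance
aws rds reboot-db-instance \
    --db-instance-identifier mydbinstance \
    --force-failover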

How stopping a DB instance works


The stopping operation occurs in the following stages:

1. The DB instance initiates the normal shutdown process.

The status of the DB instance changes to stopping.


2. The instance stops running, up to a maximum of 7 consecutive days.

The status of the DB instance changes to stopped. Consider the following characteristics of the
stopped state:
• Any storage volumes remain attached to the DB instance, and their data is kept. RDS deletes any
data stored in the RAM of the DB instance.
• RDS removes pending actions, except for pending actions for the option group or DB parameter
group of the DB instance.

• If you don't manually start your DB instance after it is stopped for seven consecutive days,
RDS automatically starts your DB instance for you. This way, it doesn't fall behind any required
maintenance updates. To learn how to stop and start your instance on a schedule, see How can I use
Step Functions to stop an Amazon RDS instance for longer than 7 days?.

Occasionally, an RDS for PostgreSQL DB instance doesn't shut down cleanly. If this happens, you see that
the instance goes through a recovery process when you restart it later. This is expected behavior of the
database engine, intended to protect database integrity. Some memory-based statistics and counters
don't retain history and are re-initialized after restart, to capture the operational workload moving
forward.

Benefits of stopping your DB instance


Stopping and starting a DB instance is faster than creating a DB snapshot, and then restoring the
snapshot.

When you stop a DB instance, it retains the following:

• Instance ID
• Domain Name Server (DNS) endpoint
• Parameter group
• Security group
• Option group
• Amazon S3 transaction logs (necessary for a point-in-time restore)

When you restart a DB instance, it has the same configuration as when you stopped it.

Limitations of stopping your DB instance


The following are some limitations to stopping and starting a DB instance:

• You can't stop a DB instance that has a read replica, or that is a read replica.
• You can't modify a stopped DB instance.
• You can't delete an option group that is associated with a stopped DB instance.
• You can't delete a DB parameter group that is associated with a stopped DB instance.
• In a Multi-AZ deployment, the primary and secondary Availability Zones might be switched after you
start the DB instance.

Additional limitations apply to RDS Custom for SQL Server. For more information, see Starting and
stopping an RDS Custom for SQL Server DB instance (p. 1146).

Option and parameter group considerations


You can't remove persistent options (including permanent options) from an option group if there are DB instances associated with that option group. This restriction also applies to any DB instance with a state of stopping, stopped, or starting.

You can change the option group or DB parameter group that is associated with a stopped DB instance.
However, the change doesn't occur until the next time you start the DB instance. If you chose to apply
changes immediately, the change occurs when you start the DB instance. Otherwise the change occurs
during the next maintenance window after you start the DB instance.

Public IP address
When you stop a DB instance, it retains its DNS endpoint. If you stop a DB instance that has a public IP
address, Amazon RDS releases its public IP address. When the DB instance is restarted, it has a different
public IP address.
Note
You should always connect to a DB instance using the DNS endpoint, not the IP address.
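
You can look up the DNS endpoint on the Connectivity & security tab in the console, or with the AWS CLI as in the following sketch (mydbinstance is a placeholder):

# Retrieve the DNS endpoint of a DB instance
aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query 'DBInstances[0].Endpoint.Address' \
    --output text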

Stopping a DB instance temporarily


You can stop a DB instance using the AWS Management Console, the AWS CLI, or the RDS API.

Console
To stop a DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance that you want to stop.
3. For Actions, choose Stop temporarily.
4. In the Stop DB instance temporarily window, select the acknowledgement that the DB instance will
restart automatically after 7 days.
5. (Optional) Select Save the DB instance in a snapshot and enter the snapshot name for Snapshot
name. Choose this option if you want to create a snapshot of the DB instance before stopping it.
6. Choose Stop temporarily to stop the DB instance, or choose Cancel to cancel the operation.

AWS CLI
To stop a DB instance by using the AWS CLI, call the stop-db-instance command with the following
option:

• --db-instance-identifier – the name of the DB instance.

Example

aws rds stop-db-instance --db-instance-identifier mydbinstance
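
If you also want a snapshot taken as part of stopping, similar to the console's Save the DB instance in a snapshot option, stop-db-instance accepts a snapshot identifier. The following is a sketch with placeholder names:

# Stop the DB instance and create a snapshot first
aws rds stop-db-instance \
    --db-instance-identifier mydbinstance \
    --db-snapshot-identifier mydbinstance-stop-snapshot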

RDS API
To stop a DB instance by using the Amazon RDS API, call the StopDBInstance operation with the
following parameter:

• DBInstanceIdentifier – the name of the DB instance.

Starting an Amazon RDS DB instance that was previously stopped
You can stop your Amazon RDS DB instance temporarily to save money. After you stop your DB instance,
you can restart it to begin using it again. For more details about stopping and starting DB instances, see
Stopping an Amazon RDS DB instance temporarily (p. 381).

When you start a DB instance that you previously stopped, the DB instance retains certain information.
This information is the ID, Domain Name Server (DNS) endpoint, parameter group, security group, and
option group. When you start a stopped instance, you are charged a full instance hour.

Console
To start a DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance that you want to start.
3. For Actions, choose Start.

AWS CLI
To start a DB instance by using the AWS CLI, call the start-db-instance command with the following
option:

• --db-instance-identifier – The name of the DB instance.

Example

aws rds start-db-instance --db-instance-identifier mydbinstance
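
Starting is asynchronous. If a script needs to wait until the DB instance is available again before connecting, you can use the AWS CLI waiter, as in the following sketch (mydbinstance is a placeholder):

# Start the DB instance, then block until it reports the available status
aws rds start-db-instance --db-instance-identifier mydbinstance
aws rds wait db-instance-available --db-instance-identifier mydbinstance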

RDS API
To start a DB instance by using the Amazon RDS API, call the StartDBInstance operation with the
following parameter:

• DBInstanceIdentifier – The name of the DB instance.

Automatically connecting an AWS compute resource and a DB instance
You can automatically connect a DB instance and AWS compute resources such as Amazon Elastic
Compute Cloud (Amazon EC2) instances and AWS Lambda functions.

Topics
• Automatically connecting an EC2 instance and a DB instance (p. 385)
• Automatically connecting a Lambda function and a DB instance (p. 392)

Automatically connecting an EC2 instance and a DB instance
You can use the Amazon RDS console to simplify setting up a connection between an Amazon Elastic
Compute Cloud (Amazon EC2) instance and a DB instance. Often, your DB instance is in a private
subnet and your EC2 instance is in a public subnet within a VPC. You can use a SQL client on your EC2
instance to connect to your DB instance. The EC2 instance can also run web servers or applications
that access your private DB instance. For instructions on setting up a connection between an EC2
instance and a Multi-AZ DB cluster, see the section called “Connecting an EC2 instance and a Multi-AZ DB
cluster” (p. 525).

If you want to connect to an EC2 instance that isn't in the same VPC as the DB instance, see the scenarios
in Scenarios for accessing a DB instance in a VPC (p. 2701).

Topics
• Overview of automatic connectivity with an EC2 instance (p. 386)

• Automatically connecting an EC2 instance and an RDS database (p. 388)


• Viewing connected compute resources (p. 390)
• Connecting to a DB instance that is running a specific DB engine (p. 391)

Overview of automatic connectivity with an EC2 instance


When you set up a connection between an EC2 instance and an RDS database, Amazon RDS automatically configures the VPC security group for your EC2 instance and for your RDS database.

The following are requirements for connecting an EC2 instance with an RDS database:

• The EC2 instance must exist in the same VPC as the RDS database.

If no EC2 instances exist in the same VPC, then the console provides a link to create one.
• The user who sets up connectivity must have permissions to perform the following Amazon EC2
operations:
• ec2:AuthorizeSecurityGroupEgress
• ec2:AuthorizeSecurityGroupIngress
• ec2:CreateSecurityGroup
• ec2:DescribeInstances
• ec2:DescribeNetworkInterfaces
• ec2:DescribeSecurityGroups
• ec2:ModifyNetworkInterfaceAttribute
• ec2:RevokeSecurityGroupEgress

If the DB instance and EC2 instance are in different Availability Zones, your account may incur cross-
Availability Zone costs.

When you set up a connection to an EC2 instance, Amazon RDS acts according to the current configuration of the security groups associated with the RDS database and EC2 instance, as described in the following scenarios.

Scenario 1: Both security groups are already configured

Current RDS security group configuration: There are one or more security groups associated with the RDS database with a name that matches the pattern rds-ec2-n (where n is a number). A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the EC2 instance as the source.

Current EC2 security group configuration: There are one or more security groups associated with the EC2 instance with a name that matches the pattern ec2-rds-n (where n is a number). A security group that matches the pattern hasn't been modified. This security group has only one outbound rule with the VPC security group of the RDS database as the source.

RDS action: RDS takes no action. A connection was already configured automatically between the EC2 instance and the RDS database. Because a connection already exists between the EC2 instance and the RDS database, the security groups aren't modified.

Scenario 2: Neither side has a usable security group

Current RDS security group configuration: Either of the following conditions apply:

• There is no security group associated with the RDS database with a name that matches the pattern rds-ec2-n.
• There are one or more security groups associated with the RDS database with a name that matches the pattern rds-ec2-n. However, Amazon RDS can't use any of these security groups for the connection with the EC2 instance. Amazon RDS can't use a security group that doesn't have one inbound rule with the VPC security group of the EC2 instance as the source. Amazon RDS also can't use a security group that has been modified. Examples of modifications include adding a rule or changing the port of an existing rule.

Current EC2 security group configuration: Either of the following conditions apply:

• There is no security group associated with the EC2 instance with a name that matches the pattern ec2-rds-n.
• There are one or more security groups associated with the EC2 instance with a name that matches the pattern ec2-rds-n. However, Amazon RDS can't use any of these security groups for the connection with the RDS database. Amazon RDS can't use a security group that doesn't have one outbound rule with the VPC security group of the RDS database as the source. Amazon RDS also can't use a security group that has been modified.

RDS action: create new security groups

Scenario 3: The RDS security group is valid, but the EC2 security groups can't be used

Current RDS security group configuration: There are one or more security groups associated with the RDS database with a name that matches the pattern rds-ec2-n. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the EC2 instance as the source.

Current EC2 security group configuration: There are one or more security groups associated with the EC2 instance with a name that matches the pattern ec2-rds-n. However, Amazon RDS can't use any of these security groups for the connection with the RDS database. Amazon RDS can't use a security group that doesn't have one outbound rule with the VPC security group of the RDS database as the source. Amazon RDS also can't use a security group that has been modified.

RDS action: create new security groups

Scenario 4: The RDS security group is valid, and a valid EC2 security group exists but isn't associated

Current RDS security group configuration: There are one or more security groups associated with the RDS database with a name that matches the pattern rds-ec2-n. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the EC2 instance as the source.

Current EC2 security group configuration: A valid EC2 security group for the connection exists, but it is not associated with the EC2 instance. This security group has a name that matches the pattern ec2-rds-n. It hasn't been modified. It has only one outbound rule with the VPC security group of the RDS database as the source.

RDS action: associate EC2 security group

Scenario 5: The RDS security groups can't be used, but the EC2 security group is valid

Current RDS security group configuration: Either of the following conditions apply:

• There is no security group associated with the RDS database with a name that matches the pattern rds-ec2-n.
• There are one or more security groups associated with the RDS database with a name that matches the pattern rds-ec2-n. However, Amazon RDS can't use any of these security groups for the connection with the EC2 instance. Amazon RDS can't use a security group that doesn't have one inbound rule with the VPC security group of the EC2 instance as the source. Amazon RDS also can't use a security group that has been modified.

Current EC2 security group configuration: There are one or more security groups associated with the EC2 instance with a name that matches the pattern ec2-rds-n. A security group that matches the pattern hasn't been modified. This security group has only one outbound rule with the VPC security group of the RDS database as the source.

RDS action: create new security groups

RDS action: create new security groups

Amazon RDS takes the following actions:

• Creates a new security group that matches the pattern rds-ec2-n. This security group has an
inbound rule with the VPC security group of the EC2 instance as the source. This security group is
associated with the RDS database and allows the EC2 instance to access the RDS database.
• Creates a new security group that matches the pattern ec2-rds-n. This security group has an
outbound rule with the VPC security group of the RDS database as the source. This security group is
associated with the EC2 instance and allows the EC2 instance to send traffic to the RDS database.

RDS action: associate EC2 security group

Amazon RDS associates the valid, existing EC2 security group with the EC2 instance. This security group
allows the EC2 instance to send traffic to the RDS database.
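
As an optional check after setup, you can inspect the security groups that were created or associated by using the Amazon EC2 CLI. The following sketch filters on the naming patterns described earlier:

# List the security groups that RDS created for the EC2 connection
aws ec2 describe-security-groups \
    --filters "Name=group-name,Values=rds-ec2-*,ec2-rds-*" \
    --query 'SecurityGroups[*].[GroupId,GroupName,Description]' \
    --output table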

Automatically connecting an EC2 instance and an RDS database


Before setting up a connection between an EC2 instance and an RDS database, make sure you meet the
requirements described in Overview of automatic connectivity with an EC2 instance (p. 386).

If you make changes to security groups after you configure connectivity, the changes might affect the
connection between the EC2 instance and the RDS database.
Note
You can only set up a connection between an EC2 instance and an RDS database automatically
by using the AWS Management Console. You can't set up a connection automatically with the
AWS CLI or RDS API.

To connect an EC2 instance and an RDS database automatically

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS database.
3. From Actions, choose Set up EC2 connection.

The Set up EC2 connection page appears.


4. On the Set up EC2 connection page, choose the EC2 instance.

If no EC2 instances exist in the same VPC, choose Create EC2 instance to create one. In this case,
make sure the new EC2 instance is in the same VPC as the RDS database.
5. Choose Continue.

The Review and confirm page appears.

6. On the Review and confirm page, review the changes that RDS will make to set up connectivity with
the EC2 instance.

If the changes are correct, choose Confirm and set up.

If the changes aren't correct, choose Previous or Cancel.

Viewing connected compute resources


You can use the AWS Management Console to view the compute resources that are connected to an RDS
database. The resources shown include compute resource connections that were set up automatically.
You can set up connectivity with compute resources automatically in the following ways:

• You can select the compute resource when you create the database.

For more information, see Creating an Amazon RDS DB instance (p. 300) and Creating a Multi-AZ DB
cluster (p. 508).
• You can set up connectivity between an existing database and a compute resource.

For more information, see Automatically connecting an EC2 instance and an RDS database (p. 388).

The listed compute resources don't include ones that were connected to the database manually. For
example, you can allow a compute resource to access a database manually by adding a rule to the VPC
security group associated with the database.

For a compute resource to be listed, the following conditions must apply:

• The name of the security group associated with the compute resource matches the pattern ec2-
rds-n (where n is a number).
• The security group associated with the compute resource has an outbound rule with the port range set
to the port that the RDS database uses.
• The security group associated with the compute resource has an outbound rule with the source set to a
security group associated with the RDS database.
• The name of the security group associated with the RDS database matches the pattern rds-ec2-n
(where n is a number).
• The security group associated with the RDS database has an inbound rule with the port range set to
the port that the RDS database uses.
• The security group associated with the RDS database has an inbound rule with the source set to a
security group associated with the compute resource.

To view compute resources connected to an RDS database

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the name of the RDS database.
3. On the Connectivity & security tab, view the compute resources in the Connected compute
resources.

Connecting to a DB instance that is running a specific DB engine


For information about connecting to a DB instance that is running a specific DB engine, follow the
instructions for your DB engine:

• Connecting to a DB instance running the MariaDB database engine (p. 1269)


• Connecting to a DB instance running the Microsoft SQL Server database engine (p. 1380)
• Connecting to a DB instance running the MySQL database engine (p. 1630)
• Connecting to your RDS for Oracle DB instance (p. 1806)
• Connecting to a DB instance running the PostgreSQL database engine (p. 2167)

Automatically connecting a Lambda function and a DB instance
You can use the Amazon RDS console to simplify setting up a connection between a Lambda function
and a DB instance. Often, your DB instance is in a private subnet within a VPC. The Lambda function can
be used by applications to access your private DB instance.

For instructions on setting up a connection between a Lambda function and a Multi-AZ DB cluster, see
the section called “Connecting a Lambda function and a Multi-AZ DB cluster” (p. 530).

The following image shows a direct connection between your DB instance and your Lambda function.

You can set up the connection between your Lambda function and your DB instance through RDS
Proxy to improve your database performance and resiliency. Often, Lambda functions make frequent,
short database connections that benefit from connection pooling that RDS Proxy offers. You can take
advantage of any AWS Identity and Access Management (IAM) authentication that you already have for
Lambda functions, instead of managing database credentials in your Lambda application code. For more
information, see Using Amazon RDS Proxy (p. 1199).

When you use the console to connect with an existing proxy, Amazon RDS updates the proxy security
group to allow connections from your DB instance and Lambda function.

You can also create a new proxy from the same console page. When you create a proxy in the console,
to access the DB instance, you must input your database credentials or select an AWS Secrets Manager
secret.

Topics
• Overview of automatic connectivity with a Lambda function (p. 393)
• Automatically connecting a Lambda function and an RDS database (p. 399)
• Viewing connected compute resources (p. 400)

Overview of automatic connectivity with a Lambda function


The following are requirements for connecting a Lambda function with an RDS DB instance:

• The Lambda function must exist in the same VPC as the DB instance.
• The user who sets up connectivity must have permissions to perform the following Amazon RDS,
Amazon EC2, Lambda, Secrets Manager, and IAM operations:
• Amazon RDS
• rds:CreateDBProxies
• rds:DescribeDBInstances
• rds:DescribeDBProxies
• rds:ModifyDBInstance
• rds:ModifyDBProxy
• rds:RegisterProxyTargets
• Amazon EC2
• ec2:AuthorizeSecurityGroupEgress
• ec2:AuthorizeSecurityGroupIngress
• ec2:CreateSecurityGroup
• ec2:DeleteSecurityGroup
• ec2:DescribeSecurityGroups
• ec2:RevokeSecurityGroupEgress
• ec2:RevokeSecurityGroupIngress
• Lambda
• lambda:CreateFunctions
• lambda:ListFunctions
• lambda:UpdateFunctionConfiguration
• Secrets Manager
• secretsmanager:CreateSecret
• secretsmanager:DescribeSecret
• IAM
• iam:AttachPolicy
• iam:CreateRole
• iam:CreatePolicy
• AWS KMS
• kms:describeKey

Note
If the DB instance and Lambda function are in different Availability Zones, your account might
incur cross-Availability Zone costs.

When you set up a connection between a Lambda function and an RDS database, Amazon RDS
configures the VPC security group for your function and for your DB instance. If you use RDS Proxy, then
Amazon RDS also configures the VPC security group for the proxy. Amazon RDS acts according to the
current configuration of the security groups associated with the DB instance, Lambda function, and
proxy, as described in the following table.

Current RDS security Current Lambda security Current proxy security RDS action
group configuration group configuration group configuration

There are one or more There are one or more There are one or more Amazon RDS takes no
security groups associated security groups associated security groups associated action.
with the DB instance with with the Lambda function with the proxy with a
a name that matches the with a name that matches name that matches the A connection was already
pattern rds-lambda-n the pattern lambda- pattern rdsproxy- configured automatically
or if a proxy is already rds-n or lambda- lambda-n (where n is a between the Lambda
connected to your DB rdsproxy-n (where n is a number). function, the proxy
instance, RDS checks if number). (optional), and DB
the TargetHealth of A security group that instance. Because a
an associated proxy is A security group that matches the pattern connection already exists
AVAILABLE. matches the pattern hasn't been modified. between the function,
hasn't been modified. This security group has proxy, and the database,
A security group that This security group has inbound and outbound the security groups aren't
matches the pattern only one outbound rule rules with the VPC security modified.
hasn't been modified. This with either the VPC groups of the Lambda
security group has only security group of the DB function and the DB
one inbound rule with the instance or the proxy as instance.
VPC security group of the the destination.
Lambda function or proxy
as the source.

Either of the following Either of the following Either of the following RDS action: create new
conditions apply: conditions apply: conditions apply: security groups
Current RDS security Current Lambda security Current proxy security RDS action
group configuration group configuration group configuration
• There is no security • There is no security • There is no security
group associated with group associated with group associated with
the DB instance with a the Lambda function the proxy with a name
name that matches the with a name that that matches the
pattern rds-lambda-n matches the pattern pattern rdsproxy-
or if the TargetHealth lambda-rds-n or lambda-n.
of an associated proxy is lambda-rdsproxy-n. • There are one or
AVAILABLE. • There are one or more security groups
• There are one or more security groups associated with the
more security groups associated with the proxy with a name that
associated with the Lambda function matches rdsproxy-
DB instance with a with a name that lambda-n. However,
name that matches the matches the pattern Amazon RDS can't
pattern rds-lambda-n lambda-rds-n or use any of these
or if the TargetHealth lambda-rdsproxy-n. security groups for the
of an associated proxy However, Amazon RDS connection with the
is AVAILABLE. However, can't use any of these DB instance or Lambda
none of these security security groups for the function.
groups can be used for connection with the DB
the connection with the instance.
Lambda function. Amazon RDS can't use
a security group that
Amazon RDS can't doesn't have inbound
Amazon RDS can't use a use a security group and outbound rules with
security group that doesn't that doesn't have one the VPC security group
have one inbound rule outbound rule with the of the DB instance and
with the VPC security VPC security group of the the Lambda function.
group of the Lambda DB instance or proxy as Amazon RDS also can't use
function or proxy as the the destination. Amazon a security group that has
source. Amazon RDS also RDS also can't use a been modified.
can't use a security group security group that has
that has been modified. been modified.
Examples of modifications
include adding a rule or
changing the port of an
existing rule.

Current RDS security Current Lambda security Current proxy security RDS action
group configuration group configuration group configuration

There are one or more There are one or more There are one or more RDS action: create new
security groups associated security groups associated security groups associated security groups
with the DB instance with with the Lambda function with the proxy with a
a name that matches the with a name that matches name that matches the
pattern rds-lambda-n the pattern lambda- pattern rdsproxy-
or if the TargetHealth rds-n or lambda- lambda-n.
of an associated proxy is rdsproxy-n.
AVAILABLE. However, Amazon RDS
However, Amazon RDS can't use any of these
A security group that can't use any of these security groups for the
matches the pattern security groups for the connection with the
hasn't been modified. This connection with the DB DB instance or Lambda
security group has only instance. Amazon RDS function. Amazon RDS
one inbound rule with the can't use a security group can't use a security group
VPC security group of the that doesn't have one that doesn't have inbound
Lambda function or proxy outbound rule with the and outbound rules with
as the source. VPC security group of the the VPC security group
DB instance or proxy as of the DB instance and
the destination. Amazon the Lambda function.
RDS also can't use a Amazon RDS also can't use
security group that has a security group that has
been modified. been modified.

There are one or more A valid Lambda security A valid proxy security RDS action: associate
security groups associated group for the connection group for the connection Lambda security group
with the DB instance with exists, but it isn't exists, but it isn't
a name that matches the associated with the associated with the proxy.
pattern rds-lambda-n Lambda function. This This security group has
or if the TargetHealth security group has a a name that matches
of an associated proxy is name that matches the pattern rdsproxy-
AVAILABLE. the pattern lambda- lambda-n. It hasn't been
rds-n or lambda- modified. It has inbound
A security group that rdsproxy-n. It hasn't and outbound rules with
matches the pattern been modified. It has only the VPC security group of
hasn't been modified. This one outbound rule with the DB instance and the
security group has only the VPC security group of Lambda function.
one inbound rule with the the DB instance or proxy
VPC security group of the as the destination.
Lambda function or proxy
as the source.

Current RDS security Current Lambda security Current proxy security RDS action
group configuration group configuration group configuration

Either of the following There are one or more There are one or more RDS action: create new
conditions apply: security groups associated security groups associated security groups
with the Lambda function with the proxy with a
• There is no security with a name that matches name that matches the
group associated with the pattern lambda- pattern rdsproxy-
the DB instance with a rds-n or lambda- lambda-n.
name that matches the rdsproxy-n.
pattern rds-lambda-n A security group that
or if the TargetHealth A security group that matches the pattern
of an associated proxy is matches the pattern hasn't been modified.
AVAILABLE. hasn't been modified. This This security group has
• There are one or security group has only inbound and outbound
more security groups one outbound rule with rules with the VPC security
associated with the the VPC security group of group of the DB instance
DB instance with a the DB instance or proxy and the Lambda function.
name that matches the as the destination.
pattern rds-lambda-n
or if the TargetHealth
of an associated
proxy is AVAILABLE.
However, Amazon RDS
can't use any of these
security groups for the
connection with the
Lambda function or
proxy.

Amazon RDS can't use a


security group that doesn't
have one inbound rule
with the VPC security
group of the Lambda
function or proxy as the
source. Amazon RDS also
can't use a security group
that has been modified.

Current RDS security group configuration: Either of the following conditions apply:
• There is no security group associated with the DB instance with a name that matches the pattern rds-lambda-n or if the TargetHealth of an associated proxy is AVAILABLE.
• There are one or more security groups associated with the DB instance with a name that matches the pattern rds-lambda-n or if the TargetHealth of an associated proxy is AVAILABLE. However, Amazon RDS can't use any of these security groups for the connection with the Lambda function or proxy.
Amazon RDS can't use a security group that doesn't have one inbound rule with the VPC security group of the Lambda function or proxy as the source. Amazon RDS also can't use a security group that has been modified.

Current Lambda security group configuration: Either of the following conditions apply:
• There is no security group associated with the Lambda function with a name that matches the pattern lambda-rds-n or lambda-rdsproxy-n.
• There are one or more security groups associated with the Lambda function with a name that matches the pattern lambda-rds-n or lambda-rdsproxy-n. However, Amazon RDS can't use any of these security groups for the connection with the DB instance.
Amazon RDS can't use a security group that doesn't have one outbound rule with the VPC security group of the DB instance or proxy as the destination. Amazon RDS also can't use a security group that has been modified.

Current proxy security group configuration: Either of the following conditions apply:
• There is no security group associated with the proxy with a name that matches the pattern rdsproxy-lambda-n.
• There are one or more security groups associated with the proxy with a name that matches rdsproxy-lambda-n. However, Amazon RDS can't use any of these security groups for the connection with the DB instance or Lambda function.
Amazon RDS can't use a security group that doesn't have inbound and outbound rules with the VPC security group of the DB instance and the Lambda function. Amazon RDS also can't use a security group that has been modified.

RDS action: Create new security groups.

RDS action: create new security groups

Amazon RDS takes the following actions:

• Creates a new security group that matches the pattern rds-lambda-n or rds-rdsproxy-n (if you
choose to use RDS Proxy). This security group has an inbound rule with the VPC security group of the
Lambda function or proxy as the source. This security group is associated with the DB instance and
allows the function or proxy to access the DB instance.
• Creates a new security group that matches the pattern lambda-rds-n or lambda-rdsproxy-n. This
security group has an outbound rule with the VPC security group of the DB instance or proxy as the
destination. This security group is associated with the Lambda function and allows the function to
send traffic to the DB instance or send traffic through a proxy.
• Creates a new security group that matches the pattern rdsproxy-lambda-n. This security group has
inbound and outbound rules with the VPC security group of the DB instance and the Lambda function.
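For example, after the setup completes, you can list the security groups that Amazon RDS created by filtering on these name patterns. The following AWS CLI sketch is illustrative and assumes default credentials and Region; the wildcards stand in for the number (n) at the end of each generated name.

aws ec2 describe-security-groups \
    --filters Name=group-name,Values='rds-lambda-*','rds-rdsproxy-*','lambda-rds-*','lambda-rdsproxy-*','rdsproxy-lambda-*' \
    --query 'SecurityGroups[*].[GroupId,GroupName]' \
    --output table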


RDS action: associate Lambda security group

Amazon RDS associates the valid, existing Lambda security group with the Lambda function. This
security group allows the function to send traffic to the DB instance or send traffic through a proxy.

Automatically connecting a Lambda function and an RDS database
You can use the Amazon RDS console to automatically connect a Lambda function to your DB instance.
This simplifies the process of setting up a connection between these resources.

You can also use RDS Proxy to include a proxy in your connection. Lambda functions make frequent
short database connections that benefit from the connection pooling that RDS Proxy offers. You can also
use any IAM authentication that you've already set up for your Lambda function, instead of managing
database credentials in your Lambda application code.

You can connect an existing DB instance to new and existing Lambda functions using the Set up Lambda
connection page. The setup process automatically sets up the required security groups for you.

Before setting up a connection between a Lambda function and a DB instance, make sure that:

• Your Lambda function and DB instance are in the same VPC.


• You have the right permissions for your user account. For more information about the requirements,
see Overview of automatic connectivity with a Lambda function (p. 393).

If you change security groups after you configure connectivity, the changes might affect the connection
between the Lambda function and the DB instance.
Note
You can automatically set up a connection between a DB instance and a Lambda function only
in the AWS Management Console. To connect a Lambda function, the DB instance must be in the
Available state.
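For example, you can confirm the DB instance status from the AWS CLI before you start. This sketch assumes a DB instance named mydbinstance:

aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query 'DBInstances[0].DBInstanceStatus' \
    --output text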

To automatically connect a Lambda function and a DB instance



1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance that you want to
connect to a Lambda function.
3. For Actions, choose Set up Lambda connection.
4. On the Set up Lambda connection page, under Select Lambda function, do either of the following:
• If you have an existing Lambda function in the same VPC as your DB instance, choose Choose
existing function, and then choose the function.
• If you don't have a Lambda function in the same VPC, choose Create new function, and then
enter a Function name. The default runtime is set to Node.js 18. You can modify the settings for
your new Lambda function in the Lambda console after you complete the connection setup.
5. (Optional) Under RDS Proxy, select Connect using RDS Proxy, and then do any of the following:
• If you have an existing proxy that you want to use, choose Choose existing proxy, and then
choose the proxy.


• If you don't have a proxy, and you want Amazon RDS to automatically create one for you,
choose Create new proxy. Then, for Database credentials, do either of the following:

a. Choose Database username and password, and then enter the Username and Password
for your DB instance.
b. Choose Secrets Manager secret. Then, for Select secret, choose an AWS Secrets Manager
secret. If you don't have a Secrets Manager secret, choose Create new Secrets Manager
secret to create a new secret. After you create the secret, for Select secret, choose the new
secret.

After you create the new proxy, choose Choose existing proxy, and then choose the proxy. Note
that it might take some time for your proxy to be available for connection.
6. (Optional) Expand Connection summary and verify the highlighted updates for your resources.
7. Choose Set up.

After you confirm the setup, Amazon RDS begins the process of connecting your Lambda function, RDS
Proxy (if you used a proxy), and DB instance. The console shows the Connection details dialog box,
which lists the security group changes that allow connections between your resources.

Viewing connected compute resources


You can use the AWS Management Console to view the Lambda functions that are connected to your
DB instance. The resources shown include compute resource connections that Amazon RDS set up
automatically.

The listed compute resources don't include those that are manually connected to the DB instance. For
example, you can allow a compute resource to access your DB instance manually by adding a rule to your
VPC security group associated with the database.

For the console to list a Lambda function, the following conditions must apply:

• The name of the security group associated with the compute resource matches the pattern lambda-
rds-n or lambda-rdsproxy-n (where n is a number).
• The security group associated with the compute resource has an outbound rule with the port range set
to the port of the DB instance or an associated proxy. The destination for the outbound rule must be
set to a security group associated with the DB instance or an associated proxy.
• If the configuration includes a proxy, the name of the security group attached to the proxy associated
with your database matches the pattern rdsproxy-lambda-n (where n is a number).
• The security group associated with the function has an outbound rule with the port set to the port
that the DB instance or associated proxy uses. The destination must be set to a security group
associated with the DB instance or associated proxy.
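If a function that you expect to see isn't listed, you can check the associated security group against these conditions from the AWS CLI. The following sketch is one way to review the matching group names and their outbound rules; the name patterns are the ones described in the preceding list.

aws ec2 describe-security-groups \
    --filters Name=group-name,Values='lambda-rds-*','lambda-rdsproxy-*' \
    --query 'SecurityGroups[*].{Name:GroupName,OutboundRules:IpPermissionsEgress}'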

To view compute resources automatically connected to a DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance.
3. On the Connectivity & security tab, view the compute resources under Connected compute
resources.


Modifying an Amazon RDS DB instance


You can change the settings of a DB instance to accomplish tasks such as adding additional storage or
changing the DB instance class. In this topic, you can find out how to modify an Amazon RDS DB instance
and learn about the settings for DB instances.

We recommend that you test any changes on a test instance before modifying a production instance.
Doing this helps you to fully understand the impact of each change. Testing is especially important when
upgrading database versions.

You can apply most modifications to a DB instance immediately or defer them until the next
maintenance window. Some modifications, such as parameter group changes, require that you manually
reboot your DB instance for the change to take effect.
Important
Some modifications result in downtime because Amazon RDS must reboot your DB instance
for the change to take effect. Review the impact to your database and applications before
modifying your DB instance settings.

Console
To modify a DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance that you want to
modify.
3. Choose Modify. The Modify DB instance page appears.
4. Change any of the settings that you want. For information about each setting, see Settings for DB
instances (p. 402).
5. When all the changes are as you want them, choose Continue and check the summary of
modifications.
6. (Optional) Choose Apply immediately to apply the changes immediately. Choosing this option
can cause downtime in some cases. For more information, see Using the Apply Immediately
setting (p. 402).
7. On the confirmation page, review your changes. If they are correct, choose Modify DB instance to
save your changes.

Or choose Back to edit your changes or Cancel to cancel your changes.

AWS CLI
To modify a DB instance by using the AWS CLI, call the modify-db-instance command. Specify the DB
instance identifier and the values for the options that you want to modify. For information about each
option, see Settings for DB instances (p. 402).

Example

The following code modifies mydbinstance by setting the backup retention period to 1 week (7
days). The code enables deletion protection by using --deletion-protection. To disable deletion
protection, use --no-deletion-protection. The changes are applied during the next maintenance
window by using --no-apply-immediately. Use --apply-immediately to apply the changes
immediately. For more information, see Using the Apply Immediately setting (p. 402).


For Linux, macOS, or Unix:

aws rds modify-db-instance \


--db-instance-identifier mydbinstance \
--backup-retention-period 7 \
--deletion-protection \
--no-apply-immediately

For Windows:

aws rds modify-db-instance ^


--db-instance-identifier mydbinstance ^
--backup-retention-period 7 ^
--deletion-protection ^
--no-apply-immediately
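If you apply a change immediately, you can optionally wait for the DB instance to return to the available state before you continue, as in the following sketch. The DB instance identifier is the same example identifier used above.

aws rds wait db-instance-available \
    --db-instance-identifier mydbinstance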

RDS API
To modify a DB instance by using the Amazon RDS API, call the ModifyDBInstance operation. Specify
the DB instance identifier, and the parameters for the settings that you want to modify. For information
about each parameter, see Settings for DB instances (p. 402).

Using the Apply Immediately setting


When you modify a DB instance, you can apply the changes immediately. To apply changes immediately,
you choose the Apply Immediately option in the AWS Management Console. Or you use the --apply-
immediately parameter when calling the AWS CLI or set the ApplyImmediately parameter to true
when using the Amazon RDS API.

If you don't choose to apply changes immediately, the changes are put into the pending modifications
queue. During the next maintenance window, any pending changes in the queue are applied. If you
choose to apply changes immediately, your new changes and any changes in the pending modifications
queue are applied.
Important
If any of the pending modifications require the DB instance to be temporarily unavailable
(downtime), choosing the apply immediately option can cause unexpected downtime.
When you choose to apply a change immediately, any pending modifications are also applied
immediately, instead of during the next maintenance window.
If you don't want a pending change to be applied in the next maintenance window, you
can modify the DB instance to revert the change. You can do this by using the AWS CLI and
specifying the --apply-immediately option.

Changes to some database settings are applied immediately, even if you choose to defer your changes.
To see how the different database settings interact with the apply immediately setting, see Settings for
DB instances (p. 402).
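For example, you can check which deferred changes are queued for a DB instance by looking at its pending modified values. This AWS CLI sketch assumes a DB instance named mydbinstance:

aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query 'DBInstances[0].PendingModifiedValues'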

Settings for DB instances


In the following settings reference, you can find details about which settings you can and can't modify. You can also
find when changes can be applied and whether the changes cause downtime for your DB instance. By
using Amazon RDS features such as Multi-AZ, you can minimize downtime if you later modify the DB
instance. For more information, see Configuring and managing a Multi-AZ deployment (p. 492).

You can modify a DB instance using the console, the modify-db-instance CLI command, or the
ModifyDBInstance RDS API operation.


For each setting, the following reference lists the console setting and its description, the corresponding CLI option and RDS API parameter, when the change occurs, downtime notes, and the supported DB engines.

Allocated storage

The storage, in gibibytes, that you want to allocate for your DB instance. You can only increase the allocated storage. You can't reduce the allocated storage.

You can't modify the storage of some older DB instances, or DB instances restored from older DB snapshots. The Allocated storage setting is disabled in the console if your DB instance isn't eligible. You can check whether you can allocate more storage by using the CLI command describe-valid-db-instance-modifications. This command returns the valid storage options for your DB instance.

You can't modify allocated storage if the DB instance status is storage-optimization. You also can't modify allocated storage for a DB instance if it's been modified in the last six hours.

The maximum storage allowed depends on your DB engine and the storage type. For more information, see Amazon RDS DB instance storage (p. 101).

CLI option: --allocated-storage
RDS API parameter: AllocatedStorage
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: Downtime doesn't occur during the change. Performance might be degraded during the change.
Supported DB engines: All DB engines
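For example, the following AWS CLI sketch shows one way to check the valid storage options for a DB instance named mydbinstance. The query shown narrows the output to the storage-related portion of the response.

aws rds describe-valid-db-instance-modifications \
    --db-instance-identifier mydbinstance \
    --query 'ValidDBInstanceModificationsMessage.Storage'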

Architecture settings

The architecture of the database: CDB (single-tenant) or non-CDB. Oracle Database 21c uses CDB architecture only. Oracle Database 19c can use either CDB or non-CDB architecture. Releases lower than Oracle Database 19c use non-CDB only.

If you choose Multitenant architecture, RDS for Oracle converts your non-CDB into a CDB. This setting is supported only if your database is a non-CDB running Oracle Database 19c with the April 2021 or higher RU. After conversion, your CDB contains one pluggable database (PDB).

For more information, see Overview of RDS for Oracle CDBs (p. 1840).

CLI option: --engine oracle-ee-cdb (multitenant) or --engine oracle-se2-cdb (multitenant)
RDS API parameter: Engine
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: Downtime occurs during this change.
Supported DB engines: Oracle

Auto minor version upgrade

Yes to enable your DB instance to receive preferred minor DB engine version upgrades automatically when they become available. Amazon RDS performs automatic minor version upgrades in the maintenance window. Otherwise, No.

For more information, see Automatically upgrading the minor engine version (p. 431).

CLI option: --auto-minor-version-upgrade | --no-auto-minor-version-upgrade
RDS API parameter: AutoMinorVersionUpgrade
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines

Backup replication

Choose Enable replication to another AWS Region to create backups in an additional Region for disaster recovery. Then choose the Destination Region for the additional backups.

CLI option and RDS API parameter: Not available when modifying a DB instance. For information on enabling cross-Region backups using the AWS CLI or RDS API, see Enabling cross-Region automated backups (p. 604).
When the change occurs: The change is applied asynchronously, as soon as possible.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: Oracle, PostgreSQL, SQL Server

Backup retention period

The number of days that automatic backups are retained. To disable automatic backups, set the backup retention period to 0.

For more information, see Working with backups (p. 591).

Note
If you use AWS Backup to manage your backups, this option doesn't apply. For information about AWS Backup, see the AWS Backup Developer Guide.

CLI option: --backup-retention-period
RDS API parameter: BackupRetentionPeriod
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, and you change the setting from a nonzero value to another nonzero value, the change is applied asynchronously, as soon as possible. Otherwise, the change occurs during the next maintenance window.
Downtime notes: Downtime occurs if you change from 0 to a nonzero value, or from a nonzero value to 0. This applies to both Single-AZ and Multi-AZ DB instances.
Supported DB engines: All DB engines

Backup window

The time range during which automated backups of your databases occur. The backup window is a start time in Universal Coordinated Time (UTC), and a duration in hours.

For more information, see Working with backups (p. 591).

Note
If you use AWS Backup to manage your backups, this option doesn't appear. For information about AWS Backup, see the AWS Backup Developer Guide.

CLI option: --preferred-backup-window
RDS API parameter: PreferredBackupWindow
When the change occurs: The change is applied asynchronously, as soon as possible.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines

Certificate authority

The certificate authority (CA) for the server certificate used by the DB instance.

For more information, see Using SSL/TLS to encrypt a connection to a DB instance (p. 2591).

CLI option: --ca-certificate-identifier
RDS API parameter: CACertificateIdentifier
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: Downtime only occurs if the DB engine doesn't support rotation without restart. You can use the describe-db-engine-versions AWS CLI command to determine whether the DB engine supports rotation without restart.
Supported DB engines: All DB engines

Copy tags to snapshots

If you have any DB instance tags, enable this option to copy them when you create a DB snapshot.

For more information, see Tagging Amazon RDS resources (p. 461).

CLI option: --copy-tags-to-snapshot or --no-copy-tags-to-snapshot
RDS API parameter: CopyTagsToSnapshot
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines

Database port

The port that you want to use to access the DB instance. The port value must not match any of the port values specified for options in the option group that is associated with the DB instance.

For more information, see Connecting to an Amazon RDS DB instance (p. 325).

CLI option: --db-port-number
RDS API parameter: DBPortNumber
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: The DB instance is rebooted immediately.
Supported DB engines: All DB engines

DB engine version

The version of the DB engine that you want to use. Before you upgrade your production DB instance, we recommend that you test the upgrade process on a test DB instance. Doing this helps verify its duration and validate your applications.

For more information, see Upgrading a DB instance engine version (p. 429).

CLI option: --engine-version
RDS API parameter: EngineVersion
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: Downtime occurs during this change.
Supported DB engines: All DB engines

DB instance class

The DB instance class that you want to use.

For more information, see DB instance classes (p. 11).

CLI option: --db-instance-class
RDS API parameter: DBInstanceClass
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: Downtime occurs during this change.
Supported DB engines: All DB engines

DB instance identifier

The new DB instance identifier. This value is stored as a lowercase string.

For more information about the effects of renaming a DB instance, see Renaming a DB instance (p. 434).

CLI option: --new-db-instance-identifier
RDS API parameter: NewDBInstanceIdentifier
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: Downtime occurs during this change unless your DB engine version supports dynamic SSL upload. To determine whether your version requires a restart, run the following AWS CLI command:

aws rds describe-db-engine-versions \
    --default-only \
    --engine your-db-engine \
    --query 'DBEngineVersions[*].SupportsCertificateRotationWithoutRestart'

Supported DB engines: All DB engines

DB parameter group

The DB parameter group that you want associated with the DB instance.

For more information, see Working with parameter groups (p. 347).

CLI option: --db-parameter-group-name
RDS API parameter: DBParameterGroupName
When the change occurs: The parameter group change occurs immediately.
Downtime notes: Downtime doesn't occur during this change. When you associate a new DB parameter group with a DB instance, the modified static and dynamic parameters are applied only after the DB instance is rebooted. However, if you modify dynamic parameters in the DB parameter group after you associate it with the DB instance, these changes are applied immediately without a reboot. For more information, see Working with parameter groups (p. 347) and Rebooting a DB instance (p. 436).
Supported DB engines: All DB engines
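For example, you can check whether a DB instance still has parameter changes waiting for a reboot by inspecting its parameter apply status. This AWS CLI sketch assumes a DB instance named mydbinstance; a ParameterApplyStatus value of pending-reboot indicates that a reboot is still required.

aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query 'DBInstances[0].DBParameterGroups'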

Deletion protection

Enable deletion protection to prevent your DB instance from being deleted.

For more information, see Deleting a DB instance (p. 489).

CLI option: --deletion-protection | --no-deletion-protection
RDS API parameter: DeletionProtection
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines

Enhanced Monitoring

Enable Enhanced Monitoring to enable gathering metrics in real time for the operating system that your DB instance runs on.

For more information, see Monitoring OS metrics with Enhanced Monitoring (p. 797).

CLI options: --monitoring-interval and --monitoring-role-arn
RDS API parameters: MonitoringInterval and MonitoringRoleArn
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines

IAM DB authentication

Enable IAM DB authentication to authenticate database users through users and roles.

For more information, see IAM database authentication for MariaDB, MySQL, and PostgreSQL (p. 2642).

CLI option: --enable-iam-database-authentication | --no-enable-iam-database-authentication
RDS API parameter: EnableIAMDatabaseAuthentication
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: Only MariaDB, MySQL, and PostgreSQL

Kerberos authentication

Choose the Active Directory to move the DB instance to. The directory must exist prior to this operation. If a directory is already selected, you can specify None to remove the DB instance from its current directory.

For more information, see Kerberos authentication (p. 2567).

CLI options: --domain and --domain-iam-role-name
RDS API parameters: Domain and DomainIAMRoleName
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: A brief downtime occurs during this change.
Supported DB engines: Only Microsoft SQL Server, MySQL, Oracle, and PostgreSQL

License model

Choose bring-your-own-license to use your license for Oracle. Choose license-included to use the general license agreement for Microsoft SQL Server or Oracle.

For more information, see Licensing Microsoft SQL Server on Amazon RDS (p. 1379) and RDS for Oracle licensing options (p. 1793).

CLI option: --license-model
RDS API parameter: LicenseModel
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: Downtime occurs during this change.
Supported DB engines: Only Microsoft SQL Server and Oracle

Log exports

The types of database log files to publish to Amazon CloudWatch Logs.

For more information, see Publishing database logs to Amazon CloudWatch Logs (p. 898).

CLI option: --cloudwatch-logs-export-configuration
RDS API parameter: CloudwatchLogsExportConfiguration
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines

Maintenance window

The time range during which system maintenance occurs. System maintenance includes upgrades, if applicable. The maintenance window is a start time in Universal Coordinated Time (UTC), and a duration in hours.

If you set the window to the current time, there must be at least 30 minutes between the current time and the end of the window. This timing helps ensure that any pending changes are applied.

For more information, see The Amazon RDS maintenance window (p. 423).

CLI option: --preferred-maintenance-window
RDS API parameter: PreferredMaintenanceWindow
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: If there are one or more pending actions that cause downtime, and the maintenance window is changed to include the current time, those pending actions are applied immediately and downtime occurs.
Supported DB engines: All DB engines

Manage master credentials in AWS Secrets Manager

Select Manage master credentials in AWS Secrets Manager to manage the master user password in a secret in Secrets Manager.

Optionally, choose a KMS key to use to protect the secret. Choose from the KMS keys in your account, or enter the key from a different account.

If RDS is already managing the master user password for the DB instance, you can rotate the master user password by choosing Rotate secret immediately.

For more information, see Password management with Amazon RDS and AWS Secrets Manager (p. 2568).

CLI options: --manage-master-user-password | --no-manage-master-user-password, --master-user-secret-kms-key-id, and --rotate-master-user-password | --no-rotate-master-user-password
RDS API parameters: ManageMasterUserPassword, MasterUserSecretKmsKeyId, and RotateMasterUserPassword
When the change occurs: If you are turning on or turning off automatic master user password management, the change occurs immediately. This change ignores the apply immediately setting. If you are rotating the master user password, you must specify that the change is applied immediately.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines
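For example, the following AWS CLI sketch turns on management of the master user password in Secrets Manager for a DB instance named mydbinstance. You can optionally add --master-user-secret-kms-key-id to protect the secret with a specific KMS key.

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --manage-master-user-password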

Multi-AZ deployment

Yes to deploy your DB instance in multiple Availability Zones. Otherwise, No.

For more information, see Configuring and managing a Multi-AZ deployment (p. 492).

CLI option: --multi-az | --no-multi-az
RDS API parameter: MultiAZ
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: Downtime doesn't occur during this change. However, there is a possible performance impact. For more information, see Modifying a DB instance to be a Multi-AZ DB instance deployment (p. 494).
Supported DB engines: All DB engines

Network type

The IP addressing protocols supported by the DB instance.

IPv4 to specify that resources can communicate with the DB instance only over the Internet Protocol version 4 (IPv4) addressing protocol.

Dual-stack mode to specify that resources can communicate with the DB instance over IPv4, Internet Protocol version 6 (IPv6), or both. Use dual-stack mode if you have any resources that must communicate with your DB instance over the IPv6 addressing protocol. Also, make sure that you associate an IPv6 CIDR block with all subnets in the DB subnet group that you specify.

For more information, see Amazon RDS IP addressing (p. 2690).

CLI option: --network-type
RDS API parameter: NetworkType
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: Downtime is possible during this change.
Supported DB engines: All DB engines

New master password

The password for your master user. The password must contain 8–41 alphanumeric characters.

CLI option: --master-user-password
RDS API parameter: MasterUserPassword
When the change occurs: The change is applied asynchronously, as soon as possible. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines

Option group

The option group that you want associated with the DB instance.

For more information, see Working with option groups (p. 331).

CLI option: --option-group-name
RDS API parameter: OptionGroupName
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: Downtime doesn't occur during this change. One exception is adding the MariaDB Audit Plugin to an RDS for MariaDB or RDS for MySQL DB instance, which might cause an outage.
Supported DB engines: All DB engines

Performance Insights

Enable Performance Insights to monitor your DB instance load so that you can analyze and troubleshoot your database performance.

Performance Insights isn't available for some DB engine versions and DB instance classes. The Performance Insights section doesn't appear in the console if it isn't available for your DB instance.

For more information, see Monitoring DB load with Performance Insights on Amazon RDS (p. 720) and Amazon RDS DB engine, Region, and instance class support for Performance Insights (p. 724).

CLI option: --enable-performance-insights | --no-enable-performance-insights
RDS API parameter: EnablePerformanceInsights
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines

Performance Insights AWS KMS key

The AWS KMS key identifier for the AWS KMS key for encryption of Performance Insights data. The key identifier is the Amazon Resource Name (ARN), AWS KMS key identifier, or the key alias for the KMS key.

For more information, see Turning Performance Insights on and off (p. 727).

CLI option: --performance-insights-kms-key-id
RDS API parameter: PerformanceInsightsKMSKeyId
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines

Performance Insights Retention period

The amount of time, in days, to retain Performance Insights data. The retention setting in the free tier is Default (7 days). To retain your performance data for longer, specify 1–24 months. For more information about retention periods, see Pricing and data retention for Performance Insights (p. 726).

For more information, see Turning Performance Insights on and off (p. 727).

CLI option: --performance-insights-retention-period
RDS API parameter: PerformanceInsightsRetentionPeriod
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines
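For example, the following AWS CLI sketch turns on Performance Insights with the free-tier retention period for a DB instance named mydbinstance:

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --enable-performance-insights \
    --performance-insights-retention-period 7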

Processor features

The number of CPU cores and the number of threads per core for the DB instance class of the DB instance.

For more information, see Configuring the processor for a DB instance class in RDS for Oracle (p. 71).

CLI options: --processor-features and --use-default-processor-features | --no-use-default-processor-features
RDS API parameters: ProcessorFeatures and UseDefaultProcessorFeatures
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: Downtime occurs during this change.
Supported DB engines: Only Oracle

Provisioned IOPS

The Provisioned IOPS (I/O operations per second) value for the DB instance. This setting is available only if you choose one of the following for Storage type:

• General purpose SSD (gp3)
• Provisioned IOPS SSD (io1)

For more information, see Provisioned IOPS SSD storage (p. 104) and Amazon RDS DB instance storage (p. 101).

CLI option: --iops
RDS API parameter: Iops
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines

Public access

Publicly accessible to give the DB instance a public IP address, meaning that it's accessible outside the VPC. To be publicly accessible, the DB instance also has to be in a public subnet in the VPC.

Not publicly accessible to make the DB instance accessible only from inside the VPC.

For more information, see Hiding a DB instance in a VPC from the internet (p. 2695).

To connect to a DB instance from outside its VPC, the DB instance must be publicly accessible. Also, access must be granted using the inbound rules of the DB instance's security group. In addition, other requirements must be met. For more information, see Can't connect to Amazon RDS DB instance (p. 2727).

If your DB instance isn't publicly accessible, you can also use an AWS Site-to-Site VPN connection or an AWS Direct Connect connection to access it from a private network. For more information, see Internetwork traffic privacy (p. 2605).

CLI option: --publicly-accessible | --no-publicly-accessible
RDS API parameter: PubliclyAccessible
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines

Security group

The VPC security group that you want associated with the DB instance.

For more information, see Controlling access with security groups (p. 2680).

CLI option: --vpc-security-group-ids
RDS API parameter: VpcSecurityGroupIds
When the change occurs: The change is applied asynchronously, as soon as possible. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines

Storage autoscaling

Enable storage autoscaling to enable Amazon RDS to automatically increase storage when needed to avoid having your DB instance run out of storage space.

Use Maximum storage threshold to set the upper limit for Amazon RDS to automatically increase storage for your DB instance. The default is 1,000 GiB.

For more information, see Managing capacity automatically with Amazon RDS storage autoscaling (p. 480).

CLI option: --max-allocated-storage
RDS API parameter: MaxAllocatedStorage
When the change occurs: The change occurs immediately. This setting ignores the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines
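For example, the following AWS CLI sketch turns on storage autoscaling for a DB instance named mydbinstance by setting a maximum storage threshold of 1,000 GiB:

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --max-allocated-storage 1000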

Storage throughput

The new storage throughput value for the DB instance. This setting is available only if you choose General purpose SSD (gp3) for Storage type.

For more information, see Amazon RDS DB instance storage (p. 101).

CLI option: --storage-throughput
RDS API parameter: StorageThroughput
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: Downtime doesn't occur during this change.
Supported DB engines: All DB engines

Storage type

The storage type that you want to use.

If you choose General Purpose SSD (gp3), you can provision additional Provisioned IOPS and Storage throughput under Advanced settings.

If you choose Provisioned IOPS SSD (io1), enter the Provisioned IOPS value.

After Amazon RDS begins to modify your DB instance to change the storage size or type, you can't submit another request to change the storage size, performance, or type for six hours.

For more information, see Amazon RDS storage types (p. 101).

CLI option: --storage-type
RDS API parameter: StorageType
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: The following changes all result in a brief downtime while the process starts. After that, you can use your database normally while the change takes place.
• From General Purpose (SSD) or Provisioned IOPS (SSD) to Magnetic.
• From Magnetic to General Purpose (SSD) or Provisioned IOPS (SSD).
Supported DB engines: All DB engines
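For example, the following AWS CLI sketch converts a DB instance named mydbinstance to gp3 storage and provisions additional IOPS and throughput immediately. The values are illustrative; the IOPS and throughput that you can provision depend on the DB engine and the allocated storage.

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --storage-type gp3 \
    --iops 12000 \
    --storage-throughput 500 \
    --apply-immediately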

DB subnet group

The DB subnet group for the DB instance. You can use this setting to move your DB instance to a different VPC.

For more information, see Amazon VPC VPCs and Amazon RDS (p. 2688).

CLI option: --db-subnet-group-name
RDS API parameter: DBSubnetGroupName
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: Downtime occurs during this change.
Supported DB engines: All DB engines


Maintaining a DB instance
Periodically, Amazon RDS performs maintenance on Amazon RDS resources. Maintenance most often
involves updates to the following resources in your DB instance:

• Underlying hardware
• Underlying operating system (OS)
• Database engine version

Updates to the operating system most often occur for security issues. You should apply them as soon as
possible.

Some maintenance items require that Amazon RDS take your DB instance offline for a short time.
Maintenance items that require a resource to be offline include required operating system or database
patching. Required patching is automatically scheduled only for patches that are related to security
and instance reliability. Such patching occurs infrequently, typically once every few months. It seldom
requires more than a fraction of your maintenance window.

Deferred DB instance modifications that you have chosen not to apply immediately are also applied
during the maintenance window. For example, you might choose to change the DB instance class
or parameter group during the maintenance window. Such modifications that you specify using
the pending reboot setting don't show up in the Pending maintenance list. For information about
modifying a DB instance, see Modifying an Amazon RDS DB instance (p. 401).

Topics
• Viewing pending maintenance (p. 418)
• Applying updates for a DB instance (p. 421)
• Maintenance for Multi-AZ deployments (p. 422)
• The Amazon RDS maintenance window (p. 423)
• Adjusting the preferred DB instance maintenance window (p. 424)
• Working with operating system updates (p. 426)

Viewing pending maintenance


View whether a maintenance update is available for your DB instance by using the RDS console, the
AWS CLI, or the RDS API. If an update is available, it is indicated in the Maintenance column for the DB
instance on the Amazon RDS console, as shown following.


If no maintenance update is available for a DB instance, the column value is none for it.

If a maintenance update is available for a DB instance, the following column values are possible:

• required – The maintenance action will be applied to the resource and can't be deferred indefinitely.
• available – The maintenance action is available, but it will not be applied to the resource
automatically. You can apply it manually.
• next window – The maintenance action will be applied to the resource during the next maintenance
window.
• In progress – The maintenance action is in the process of being applied to the resource.

If an update is available, you can take one of the following actions:

• If the maintenance value is next window, defer the maintenance items by choosing Defer upgrade
from Actions. You can't defer a maintenance action if it has already started.
• Apply the maintenance items immediately.
• Schedule the maintenance items to start during your next maintenance window.
• Take no action.

To take an action, choose the DB instance to show its details, then choose Maintenance & backups. The
pending maintenance items appear.


The maintenance window determines when pending operations start, but doesn't limit the total run
time of these operations. Maintenance operations aren't guaranteed to finish before the maintenance
window ends, and can continue beyond the specified end time. For more information, see The Amazon
RDS maintenance window (p. 423).

You can also view whether a maintenance update is available for your DB instance by running the
describe-pending-maintenance-actions AWS CLI command.


Applying updates for a DB instance


With Amazon RDS, you can choose when to apply maintenance operations. You can decide when Amazon
RDS applies updates by using the RDS console, AWS Command Line Interface (AWS CLI), or RDS API.

Console
To manage an update for a DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that has a required update.
4. For Actions, choose one of the following:

• Upgrade now
• Upgrade at next window
Note
If you choose Upgrade at next window and later want to delay the update, you can
choose Defer upgrade. You can't defer a maintenance action if it has already started.
To cancel a maintenance action, modify the DB instance and disable Auto minor version
upgrade.

AWS CLI
To apply a pending update to a DB instance, use the apply-pending-maintenance-action AWS CLI
command.

Example
For Linux, macOS, or Unix:

aws rds apply-pending-maintenance-action \


--resource-identifier arn:aws:rds:us-west-2:001234567890:db:mysql-db \
--apply-action system-update \
--opt-in-type immediate

For Windows:

aws rds apply-pending-maintenance-action ^


--resource-identifier arn:aws:rds:us-west-2:001234567890:db:mysql-db ^
--apply-action system-update ^
--opt-in-type immediate

Note
To defer a maintenance action, specify undo-opt-in for --opt-in-type. You can't specify
undo-opt-in for --opt-in-type if the maintenance action has already started.
To cancel a maintenance action, run the modify-db-instance AWS CLI command and specify --
no-auto-minor-version-upgrade.

To return a list of resources that have at least one pending update, use the describe-pending-
maintenance-actions AWS CLI command.

Example
For Linux, macOS, or Unix:


aws rds describe-pending-maintenance-actions \


--resource-identifier arn:aws:rds:us-west-2:001234567890:db:mysql-db

For Windows:

aws rds describe-pending-maintenance-actions ^


--resource-identifier arn:aws:rds:us-west-2:001234567890:db:mysql-db

You can also return a list of resources for a DB instance by specifying the --filters parameter of the
describe-pending-maintenance-actions AWS CLI command. The format for the --filters
command is Name=filter-name,Value=resource-id,....

The following are the accepted values for the Name parameter of a filter:

• db-instance-id – Accepts a list of DB instance identifiers or Amazon Resource Names (ARNs). The
returned list only includes pending maintenance actions for the DB instances identified by these
identifiers or ARNs.
• db-cluster-id – Accepts a list of DB cluster identifiers or ARNs for Amazon Aurora. The returned list
only includes pending maintenance actions for the DB clusters identified by these identifiers or ARNs.

For example, the following command returns the pending maintenance actions for the sample-
instance1 and sample-instance2 DB instances.

Example
For Linux, macOS, or Unix:

aws rds describe-pending-maintenance-actions \


--filters Name=db-instance-id,Values=sample-instance1,sample-instance2

For Windows:

aws rds describe-pending-maintenance-actions ^


--filters Name=db-instance-id,Values=sample-instance1,sample-instance2

RDS API
To apply an update to a DB instance, call the Amazon RDS API ApplyPendingMaintenanceAction
operation.

To return a list of resources that have at least one pending update, call the Amazon RDS API
DescribePendingMaintenanceActions operation.

Maintenance for Multi-AZ deployments


Running a DB instance as a Multi-AZ deployment can further reduce the impact of a maintenance event.
This is because Amazon RDS applies operating system updates by following these steps:

1. Perform maintenance on the standby.
2. Promote the standby to primary.
3. Perform maintenance on the old primary, which becomes the new standby.

If you upgrade the database engine for your DB instance in a Multi-AZ deployment, Amazon RDS
modifies both primary and secondary DB instances at the same time. In this case, both the primary and


secondary DB instances in the Multi-AZ deployment are unavailable during the upgrade. This operation
causes downtime until the upgrade is complete. The duration of the downtime varies based on the size
of your DB instance.

If your DB instance runs RDS for MySQL or RDS for MariaDB, you can minimize the downtime required
for an upgrade by using a blue/green deployment. For more information, see Using Amazon RDS
Blue/Green Deployments for database updates (p. 566). If you upgrade an RDS for SQL Server DB
instance in a Multi-AZ deployment, then Amazon RDS performs rolling upgrades, so you have an outage
only for the duration of a failover. For more information, see Multi-AZ and in-memory optimization
considerations (p. 1417).

For more information about Multi-AZ deployments, see Configuring and managing a Multi-AZ
deployment (p. 492).

The Amazon RDS maintenance window


Every DB instance has a weekly maintenance window during which any system changes are applied.
Think of the maintenance window as an opportunity to control when modifications and software
patching occur. If a maintenance event is scheduled for a given week, it's initiated during the 30-minute
maintenance window you identify. Most maintenance events also complete during the 30-minute
maintenance window, although larger maintenance events may take more than 30 minutes to complete.

The 30-minute maintenance window is selected at random from an 8-hour block of time per region.
If you don't specify a maintenance window when you create the DB instance, RDS assigns a 30-minute
maintenance window on a randomly selected day of the week.
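For example, you can check the maintenance window currently assigned to a DB instance with the following AWS CLI sketch, which assumes a DB instance named mydbinstance:

aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query 'DBInstances[0].PreferredMaintenanceWindow' \
    --output text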

RDS consumes some of the resources on your DB instance while maintenance is being applied. You might
observe a minimal effect on performance. For a DB instance, on rare occasions, a Multi-AZ failover might
be required for a maintenance update to complete.

Following, you can find the time blocks for each region from which default maintenance windows are
assigned.

Region Name Region Time Block

US East (Ohio) us-east-2 03:00–11:00 UTC

US East (N. Virginia) us-east-1 03:00–11:00 UTC

US West (N. California) us-west-1 06:00–14:00 UTC

US West (Oregon) us-west-2 06:00–14:00 UTC

Africa (Cape Town) af-south-1 03:00–11:00 UTC

Asia Pacific (Hong Kong) ap-east-1 06:00–14:00 UTC

Asia Pacific (Hyderabad) ap-south-2 06:30–14:30 UTC

Asia Pacific (Jakarta) ap-southeast-3 08:00–16:00 UTC

Asia Pacific (Melbourne) ap-southeast-4 11:00–19:00 UTC

Asia Pacific (Mumbai) ap-south-1 06:00–14:00 UTC

Asia Pacific (Osaka) ap-northeast-3 22:00–23:59 UTC


Asia Pacific (Seoul) ap-northeast-2 13:00–21:00 UTC

Asia Pacific (Singapore) ap-southeast-1 14:00–22:00 UTC

Asia Pacific (Sydney) ap-southeast-2 12:00–20:00 UTC

Asia Pacific (Tokyo) ap-northeast-1 13:00–21:00 UTC

Canada (Central) ca-central-1 03:00–11:00 UTC

China (Beijing) cn-north-1 06:00–14:00 UTC

China (Ningxia) cn-northwest-1 06:00–14:00 UTC

Europe (Frankfurt) eu-central-1 21:00–05:00 UTC

Europe (Ireland) eu-west-1 22:00–06:00 UTC

Europe (London) eu-west-2 22:00–06:00 UTC

Europe (Milan) eu-south-1 02:00–10:00 UTC

Europe (Paris) eu-west-3 23:59–07:29 UTC

Europe (Spain) eu-south-2 02:00–10:00 UTC

Europe (Stockholm) eu-north-1 23:00–07:00 UTC

Europe (Zurich) eu-central-2 02:00–10:00 UTC

Israel (Tel Aviv) il-central-1 03:00–11:00 UTC

Middle East (Bahrain) me-south-1 06:00–14:00 UTC

Middle East (UAE) me-central-1 05:00–13:00 UTC

South America (São Paulo) sa-east-1 00:00–08:00 UTC

AWS GovCloud (US-East) us-gov-east-1 17:00–01:00 UTC

AWS GovCloud (US-West) us-gov-west-1 06:00–14:00 UTC

Adjusting the preferred DB instance maintenance window
The maintenance window should fall at the time of lowest usage and thus might need modification
from time to time. Your DB instance is unavailable during this time only if the system changes, such as
a change in DB instance class, are being applied and require an outage. Your DB instance is unavailable
only for the minimum amount of time required to make the necessary changes.

In the following example, you adjust the preferred maintenance window for a DB instance.

For this example, we assume that a DB instance named mydbinstance exists and has a preferred
maintenance window of "Sun:05:00-Sun:06:00" UTC.


Console
To adjust the preferred maintenance window

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then select the DB instance that you want to modify.
3. Choose Modify. The Modify DB instance page appears.
4. In the Maintenance section, update the maintenance window.
Note
The maintenance window and the backup window for the DB instance cannot overlap. If
you enter a value for the maintenance window that overlaps the backup window, an error
message appears.
5. Choose Continue.

On the confirmation page, review your changes.


6. To apply the changes to the maintenance window immediately, select Apply immediately.
7. Choose Modify DB instance to save your changes.

Alternatively, choose Back to edit your changes, or choose Cancel to cancel your changes.

AWS CLI
To adjust the preferred maintenance window, use the AWS CLI modify-db-instance command with
the following parameters:

• --db-instance-identifier
• --preferred-maintenance-window

Example

The following code example sets the maintenance window to Tuesdays from 4:00-4:30AM UTC.

For Linux, macOS, or Unix:

aws rds modify-db-instance \


--db-instance-identifier mydbinstance \
--preferred-maintenance-window Tue:04:00-Tue:04:30

For Windows:

aws rds modify-db-instance ^


--db-instance-identifier mydbinstance ^
--preferred-maintenance-window Tue:04:00-Tue:04:30

RDS API
To adjust the preferred maintenance window, use the Amazon RDS API ModifyDBInstance operation
with the following parameters:

• DBInstanceIdentifier
• PreferredMaintenanceWindow


Working with operating system updates


RDS for MariaDB, RDS for MySQL, RDS for PostgreSQL, and RDS for Oracle DB instances occasionally
require operating system updates. Amazon RDS upgrades the operating system to a newer version to
improve database performance and customers’ overall security posture. Typically, the updates take about
10 minutes. Operating system updates don't change the DB engine version or DB instance class of a DB
instance.

Operating system updates can be either optional or mandatory:

• An optional update can be applied at any time. While these updates are optional, we recommend
that you apply them periodically to keep your RDS fleet up to date. RDS does not apply these updates
automatically.

To be notified when a new, optional operating system patch becomes available, you can subscribe to
RDS-EVENT-0230 (p. 889) in the security patching event category. For information about subscribing
to RDS events, see Subscribing to Amazon RDS event notification (p. 860).
Note
RDS-EVENT-0230 doesn't apply to operating system distribution upgrades.
• A mandatory update is required and has an apply date. Plan to schedule your update before this apply
date. After the specified apply date, Amazon RDS automatically upgrades the operating system for
your DB instance to the latest version during one of your assigned maintenance windows.

Note
Staying current on all optional and mandatory updates might be required to meet various
compliance obligations. We recommend that you apply all updates made available by RDS
routinely during your maintenance windows.

You can use the AWS Management Console or the AWS CLI to get information about the type of
operating system upgrade.

Console

To get update information using the AWS Management Console

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then select the DB instance.
3. Choose Maintenance & backups.
4. In the Pending maintenance section, find the operating system update, and check the Status value.

In the AWS Management Console, an optional update has its maintenance Status set to available and
doesn't have an Apply date, as shown in the following image.


A mandatory update has its maintenance Status set to required and has an Apply date.

AWS CLI
To get update information from the AWS CLI, use the describe-pending-maintenance-actions command.

aws rds describe-pending-maintenance-actions

A mandatory operating system update includes an AutoAppliedAfterDate value and a
CurrentApplyDate value. An optional operating system update doesn't include these values.

The following output shows a mandatory operating system update.

{
"ResourceIdentifier": "arn:aws:rds:us-east-1:123456789012:db:mydb1",
"PendingMaintenanceActionDetails": [
{
"Action": "system-update",
"AutoAppliedAfterDate": "2022-08-31T00:00:00+00:00",
"CurrentApplyDate": "2022-08-31T00:00:00+00:00",
"Description": "New Operating System update is available"
}
]
}

The following output shows an optional operating system update.


{
"ResourceIdentifier": "arn:aws:rds:us-east-1:123456789012:db:mydb2",
"PendingMaintenanceActionDetails": [
{
"Action": "system-update",
"Description": "New Operating System update is available"
}
]
}

Availability of operating system updates


Operating system updates are specific to the DB engine version and DB instance class. Therefore, DB
instances receive or require updates at different times. When an operating system update is available
for your DB instance based on its engine version and instance class, the update appears in the console.
You can also view it by running the AWS CLI describe-pending-maintenance-actions command or by
calling the RDS DescribePendingMaintenanceActions API operation. If an update is available for your
instance, you can update your operating system by following the instructions in Applying updates for a
DB instance (p. 421).
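You can also opt in to an available update from the AWS CLI with the apply-pending-maintenance-action command. The following sketch uses a placeholder resource ARN and schedules the system-update action for the next maintenance window; specifying --opt-in-type immediate applies it right away instead.

For Linux, macOS, or Unix:

aws rds apply-pending-maintenance-action \
    --resource-identifier arn:aws:rds:us-east-1:123456789012:db:mydb1 \
    --apply-action system-update \
    --opt-in-type next-maintenance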

Mandatory operating system updates schedule


We plan to use the following schedule for mandatory operating system updates. The Apply date refers
to when Amazon RDS starts to apply mandatory updates. For each date in the table, the start time is
00:00 Universal Coordinated Time (UTC).

DB engine Apply date

RDS for MySQL January 30, 2023

RDS for MariaDB January 30, 2023

RDS for PostgreSQL March 31, 2023

Note
The dates in the table apply to customers who didn't experience mandatory operating system
updates in 2022. To confirm whether the mandatory operating system updates in 2023
impact you, check the Pending maintenance section in the console for operating system
updates. For more information, see the Console section under Working with operating system
updates (p. 426).

After the apply date, Amazon RDS automatically upgrades the operating system for your DB instances to
the latest version in a subsequent maintenance window. To avoid an automatic upgrade, we recommend
that you schedule your update before the apply date.


Upgrading a DB instance engine version


Amazon RDS provides newer versions of each supported database engine so you can keep your
DB instance up-to-date. Newer versions can include bug fixes, security enhancements, and other
improvements for the database engine. When Amazon RDS supports a new version of a database engine,
you can choose how and when to upgrade your DB instances.

There are two kinds of upgrades: major version upgrades and minor version upgrades. In general, a
major engine version upgrade can introduce changes that are not compatible with existing applications.
In contrast, a minor version upgrade includes only changes that are backward-compatible with existing
applications.

For Multi-AZ DB clusters, major version upgrades are only supported for RDS for PostgreSQL. Minor
version upgrades are supported for all engines that support Multi-AZ DB clusters. For more information,
see the section called “Upgrading the engine version of a Multi-AZ DB cluster” (p. 503).

The version numbering sequence is specific to each database engine. For example, RDS for MySQL 5.7
and 8.0 are major engine versions and upgrading from any 5.7 version to any 8.0 version is a major
version upgrade. RDS for MySQL version 5.7.22 and 5.7.23 are minor versions and upgrading from 5.7.22
to 5.7.23 is a minor version upgrade.
Important
You can't modify a DB instance when it is being upgraded. During an upgrade, the DB instance
status is upgrading.

For more information about major and minor version upgrades for a specific DB engine, see the
following documentation for your DB engine:

• Upgrading the MariaDB DB engine (p. 1289)


• Upgrading the Microsoft SQL Server DB engine (p. 1414)
• Upgrading the MySQL DB engine (p. 1664)
• Upgrading the RDS for Oracle DB engine (p. 2103)
• Upgrading the PostgreSQL DB engine for Amazon RDS (p. 2197)

For major version upgrades, you must manually modify the DB engine version through the AWS
Management Console, AWS CLI, or RDS API. For minor version upgrades, you can manually modify the
engine version, or you can choose to enable the Auto minor version upgrade option.
Note
Database engine upgrades require downtime. You can minimize the downtime required for DB
instance upgrade by using a blue/green deployment. For more information, see Using Amazon
RDS Blue/Green Deployments for database updates (p. 566).

Topics
• Manually upgrading the engine version (p. 429)
• Automatically upgrading the minor engine version (p. 431)

Manually upgrading the engine version


To manually upgrade the engine version of a DB instance, you can use the AWS Management Console,
the AWS CLI, or the RDS API.


Console
To upgrade the engine version of a DB instance by using the console

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance that you want to
upgrade.
3. Choose Modify. The Modify DB instance page appears.
4. For DB engine version, choose the new version.
5. Choose Continue and check the summary of modifications.
6. To apply the changes immediately, choose Apply immediately. Choosing this option can cause an
outage in some cases. For more information, see Using the Apply Immediately setting (p. 402).
7. On the confirmation page, review your changes. If they are correct, choose Modify DB instance to
save your changes.

Alternatively, choose Back to edit your changes, or choose Cancel to cancel your changes.

AWS CLI
To upgrade the engine version of a DB instance, use the CLI modify-db-instance command. Specify the
following parameters:

• --db-instance-identifier – the name of the DB instance.


• --engine-version – the version number of the database engine to upgrade to.

For information about valid engine versions, use the AWS CLI describe-db-engine-versions command.
• --allow-major-version-upgrade – to upgrade the major version.
• --no-apply-immediately – to apply changes during the next maintenance window. To apply
changes immediately, use --apply-immediately.

Example

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --engine-version new_version \
    --allow-major-version-upgrade \
    --no-apply-immediately

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --engine-version new_version ^
    --allow-major-version-upgrade ^
    --no-apply-immediately

RDS API
To upgrade the engine version of a DB instance, use the ModifyDBInstance action. Specify the following
parameters:


• DBInstanceIdentifier – the name of the DB instance, for example mydbinstance.


• EngineVersion – the version number of the database engine to upgrade to. For information about
valid engine versions, use the DescribeDBEngineVersions operation.
• AllowMajorVersionUpgrade – whether to allow a major version upgrade. To do so, set the value to
true.
• ApplyImmediately – whether to apply changes immediately or during the next maintenance
window. To apply changes immediately, set the value to true. To apply changes during the next
maintenance window, set the value to false.

Automatically upgrading the minor engine version


A minor engine version is an update to a DB engine version within a major engine version. For example, a
major engine version might be 9.6 with the minor engine versions 9.6.11 and 9.6.12 within it.

If you want Amazon RDS to upgrade the DB engine version of a database automatically, you can enable
auto minor version upgrades for the database.

Topics
• How automatic minor version upgrades work (p. 431)
• Turning on automatic minor version upgrades (p. 431)
• Determining the availability of maintenance updates (p. 432)
• Finding automatic minor version upgrade targets (p. 432)

How automatic minor version upgrades work


Amazon RDS schedules an automatic upgrade to the preferred minor engine version for a database when
the following conditions are met:

• The database is running a minor version of the DB engine that is lower than the preferred minor
engine version.

You can find the current engine version for your DB instance by looking on the Configuration tab of
the database details page or by running the AWS CLI describe-db-instances command (see the example
following this list).
• The database has auto minor version upgrade enabled.
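For example, the following AWS CLI query returns only the engine version; the DB instance identifier is a placeholder.

For Linux, macOS, or Unix:

aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query "DBInstances[*].EngineVersion" \
    --output text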

RDS schedules the upgrades to run automatically in the maintenance window. During the upgrade, RDS
performs the following basic steps:

1. Runs a precheck to make sure the database is healthy and ready to be upgraded
2. Upgrades the DB engine
3. Runs post-upgrade checks
4. Marks the database upgrade as complete

Automatic upgrades incur downtime. The length of the downtime depends on various factors, including
the DB engine type and the size of the database.

Turning on automatic minor version upgrades


You can control whether auto minor version upgrade is enabled for a DB instance when you perform the
following tasks:


• Creating a DB instance (p. 300)


• Modifying a DB instance (p. 401)
• Creating a read replica (p. 445)
• Restoring a DB instance from a snapshot (p. 615)
• Restoring a DB instance to a specific time (p. 660)
• Importing a DB instance from Amazon S3 (p. 1680) (for a MySQL backup on Amazon S3)

When you perform these tasks, you can control whether auto minor version upgrade is enabled for the
DB instance in the following ways:

• Using the console, set the Auto minor version upgrade option.
• Using the AWS CLI, set the --auto-minor-version-upgrade|--no-auto-minor-version-
upgrade option.
• Using the RDS API, set the AutoMinorVersionUpgrade parameter.
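For example, the following AWS CLI command turns on auto minor version upgrade for an existing DB instance; the instance identifier is a placeholder, and you can use --no-auto-minor-version-upgrade instead to turn the setting off.

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --auto-minor-version-upgrade \
    --apply-immediately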

Determining the availability of maintenance updates


To determine whether a maintenance update, such as a DB engine version upgrade, is available for
your DB instance, you can use the console, AWS CLI, or RDS API. You can also upgrade the DB engine
version manually and adjust the maintenance window. For more information, see Maintaining a DB
instance (p. 418).

Finding automatic minor version upgrade targets


You can use the following AWS CLI command to determine the current automatic minor upgrade target
version for a specified minor DB engine version in a specific AWS Region. You can find the possible --
engine values for this command in the description for the Engine parameter in CreateDBInstance.

For Linux, macOS, or Unix:

aws rds describe-db-engine-versions \
    --engine engine \
    --engine-version minor-version \
    --region region \
    --query "DBEngineVersions[*].ValidUpgradeTarget[*].{AutoUpgrade:AutoUpgrade,EngineVersion:EngineVersion}" \
    --output text

For Windows:

aws rds describe-db-engine-versions ^
    --engine engine ^
    --engine-version minor-version ^
    --region region ^
    --query "DBEngineVersions[*].ValidUpgradeTarget[*].{AutoUpgrade:AutoUpgrade,EngineVersion:EngineVersion}" ^
    --output text

For example, the following AWS CLI command determines the automatic minor upgrade target for
MySQL minor version 8.0.11 in the US East (Ohio) AWS Region (us-east-2).

For Linux, macOS, or Unix:

aws rds describe-db-engine-versions \
    --engine mysql \
    --engine-version 8.0.11 \
    --region us-east-2 \
    --query "DBEngineVersions[*].ValidUpgradeTarget[*].{AutoUpgrade:AutoUpgrade,EngineVersion:EngineVersion}" \
    --output table

For Windows:

aws rds describe-db-engine-versions ^
    --engine mysql ^
    --engine-version 8.0.11 ^
    --region us-east-2 ^
    --query "DBEngineVersions[*].ValidUpgradeTarget[*].{AutoUpgrade:AutoUpgrade,EngineVersion:EngineVersion}" ^
    --output table

Your output is similar to the following.

----------------------------------
| DescribeDBEngineVersions |
+--------------+-----------------+
| AutoUpgrade | EngineVersion |
+--------------+-----------------+
| False | 8.0.15 |
| False | 8.0.16 |
| False | 8.0.17 |
| False | 8.0.19 |
| False | 8.0.20 |
| False | 8.0.21 |
| True | 8.0.23 |
| False | 8.0.25 |
+--------------+-----------------+

In this example, the AutoUpgrade value is True for MySQL version 8.0.23. So, the automatic minor
upgrade target is MySQL version 8.0.23, as shown in the output.
Important
If you plan to migrate an RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster
soon, we strongly recommend that you turn off auto minor version upgrades for the DB
instance early during planning. Migration to Aurora PostgreSQL might be delayed if the RDS for
PostgreSQL version isn't yet supported by Aurora PostgreSQL. For information about Aurora
PostgreSQL versions, see Engine versions for Amazon Aurora PostgreSQL.


Renaming a DB instance
You can rename a DB instance by using the AWS Management Console, the AWS CLI modify-db-
instance command, or the Amazon RDS API ModifyDBInstance action. Renaming a DB instance can
have far-reaching effects. The following is a list of considerations before you rename a DB instance.

• When you rename a DB instance, the endpoint for the DB instance changes, because the URL includes
the name you assigned to the DB instance. You should always redirect traffic from the old URL to the
new one.
• When you rename a DB instance, the old DNS name that was used by the DB instance is immediately
deleted, although it could remain cached for a few minutes. The new DNS name for the renamed DB
instance becomes effective in about 10 minutes. The renamed DB instance is not available until the
new name becomes effective.
• You cannot use an existing DB instance name when renaming an instance.
• All read replicas associated with a DB instance remain associated with that instance after it is
renamed. For example, suppose you have a DB instance that serves your production database and the
instance has several associated read replicas. If you rename the DB instance and then replace it in the
production environment with a DB snapshot, the DB instance that you renamed will still have the read
replicas associated with it.
• Metrics and events associated with the name of a DB instance are maintained if you reuse a DB
instance name. For example, if you promote a read replica and rename it to be the name of the
previous primary DB instance, the events and metrics associated with the primary DB instance are
associated with the renamed instance.
• DB instance tags remain with the DB instance, regardless of renaming.
• DB snapshots are retained for a renamed DB instance.

Note
A DB instance is an isolated database environment running in the cloud. A DB instance can host
multiple databases, or a single Oracle database with multiple schemas. For information about
changing a database name, see the documentation for your DB engine.

Renaming to replace an existing DB instance


The most common reasons for renaming a DB instance are that you are promoting a read replica or
you are restoring data from a DB snapshot or point-in-time recovery (PITR). By renaming the database,
you can replace the DB instance without having to change any application code that references the DB
instance. In these cases, you would do the following:

1. Stop all traffic going to the primary DB instance. This can involve redirecting traffic from accessing
the databases on the DB instance or some other way you want to use to prevent traffic from accessing
your databases on the DB instance.
2. Rename the primary DB instance to a name that indicates it is no longer the primary DB instance as
described later in this topic.
3. Create a new primary DB instance by restoring from a DB snapshot or by promoting a read replica, and
then give the new instance the name of the previous primary DB instance.
4. Associate any read replicas with the new primary DB instance.

If you delete the old primary DB instance, you are responsible for deleting any unwanted DB snapshots
of the old primary DB instance.

For information about promoting a read replica, see Promoting a read replica to be a standalone DB
instance (p. 447).
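The following AWS CLI sketch illustrates steps 2 and 3 for the read replica case, using placeholder identifiers (mydbinstance, mydbinstance-old, and myreadreplica). Wait for each operation to finish, for example with the wait db-instance-available command, before starting the next one.

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --new-db-instance-identifier mydbinstance-old \
    --apply-immediately

aws rds promote-read-replica \
    --db-instance-identifier myreadreplica

aws rds modify-db-instance \
    --db-instance-identifier myreadreplica \
    --new-db-instance-identifier mydbinstance \
    --apply-immediately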


Important
The DB instance is rebooted when it is renamed.

Console
To rename a DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to rename.
4. Choose Modify.
5. In Settings, enter a new name for DB instance identifier.
6. Choose Continue.
7. To apply the changes immediately, choose Apply immediately. Choosing this option can cause an
outage in some cases. For more information, see Modifying an Amazon RDS DB instance (p. 401).
8. On the confirmation page, review your changes. If they are correct, choose Modify DB Instance to
save your changes.

Alternatively, choose Back to edit your changes, or choose Cancel to cancel your changes.

AWS CLI
To rename a DB instance, use the AWS CLI command modify-db-instance. Provide the current --db-
instance-identifier value and --new-db-instance-identifier parameter with the new name
of the DB instance.

Example

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier DBInstanceIdentifier \
    --new-db-instance-identifier NewDBInstanceIdentifier

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier DBInstanceIdentifier ^
    --new-db-instance-identifier NewDBInstanceIdentifier

RDS API
To rename a DB instance, call Amazon RDS API operation ModifyDBInstance with the following
parameters:

• DBInstanceIdentifier — existing name for the instance


• NewDBInstanceIdentifier — new name for the instance


Rebooting a DB instance
You might need to reboot your DB instance, usually for maintenance reasons. For example, if you make
certain modifications, or if you change the DB parameter group associated with the DB instance, you
must reboot the instance for the changes to take effect.
Note
If a DB instance isn't using the latest changes to its associated DB parameter group, the AWS
Management Console shows the DB parameter group with a status of pending-reboot. The
pending-reboot parameter group status doesn't result in an automatic reboot during the next
maintenance window. To apply the latest parameter changes to that DB instance, manually
reboot the DB instance. For more information about parameter groups, see Working with
parameter groups (p. 347).
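To check from the AWS CLI whether parameter changes are pending a reboot, you can inspect the parameter group apply status for the instance; the identifier below is a placeholder.

For Linux, macOS, or Unix:

aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query "DBInstances[*].DBParameterGroups" \
    --output table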

If the Amazon RDS DB instance is configured for Multi-AZ, you can perform the reboot with a failover.
An Amazon RDS event is created when the reboot is completed. If your DB instance is a Multi-AZ
deployment, you can force a failover from one Availability Zone (AZ) to another when you reboot. When
you force a failover of your DB instance, Amazon RDS automatically switches to a standby replica in
another Availability Zone, and updates the DNS record for the DB instance to point to the standby DB
instance. As a result, you need to clean up and re-establish any existing connections to your DB instance.
Rebooting with failover is beneficial when you want to simulate a failure of a DB instance for testing, or
restore operations to the original AZ after a failover occurs. For more information, see Configuring and
managing a Multi-AZ deployment (p. 492).
Warning
When you force a failover of your DB instance, the database is abruptly interrupted. The DB
instance and its client sessions might not have time to shut down gracefully. To avoid the
possibility of data loss, we recommend stopping transactions on your DB instance before
rebooting with a failover.

On RDS for Microsoft SQL Server, reboot with failover reboots only the primary DB instance. After
the failover, the primary DB instance becomes the new secondary DB instance. Parameters might not
be updated for Multi-AZ instances. For reboot without failover, both the primary and secondary DB
instances reboot, and parameters are updated after the reboot. If the DB instance is unresponsive, we
recommend reboot without failover.
Note
When you force a failover from one Availability Zone to another when you reboot, the
Availability Zone change might not be reflected in the AWS Management Console, and in calls to
the AWS CLI and RDS API, for several minutes.

Rebooting a DB instance restarts the database engine service. Rebooting a DB instance results in a
momentary outage, during which the DB instance status is set to rebooting. An outage occurs for both a
Single-AZ deployment and a Multi-AZ DB instance deployment, even when you reboot with a failover.

You can't reboot your DB instance if it isn't in the available state. Your database can be unavailable for
several reasons, such as an in-progress backup, a previously requested modification, or a maintenance-
window action.

The time required to reboot your DB instance depends on the crash recovery process, database activity
at the time of reboot, and the behavior of your specific DB engine. To improve the reboot time, we
recommend that you reduce database activity as much as possible during the reboot process. Reducing
database activity reduces rollback activity for in-transit transactions.

For a DB instance with read replicas, you can reboot the source DB instance and its read replicas
independently. After a reboot completes, replication resumes automatically.


Console
To reboot a DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance that you want to reboot.
3. For Actions, choose Reboot.

The Reboot DB Instance page appears.


4. (Optional) Choose Reboot with failover? to force a failover from one AZ to another.
5. Choose Reboot to reboot your DB instance.

Alternatively, choose Cancel.

AWS CLI
To reboot a DB instance by using the AWS CLI, call the reboot-db-instance command.

Example Simple reboot

For Linux, macOS, or Unix:

aws rds reboot-db-instance \
    --db-instance-identifier mydbinstance

For Windows:

aws rds reboot-db-instance ^
    --db-instance-identifier mydbinstance

Example Reboot with failover

To force a failover from one AZ to the other, use the --force-failover parameter.

For Linux, macOS, or Unix:

aws rds reboot-db-instance \
    --db-instance-identifier mydbinstance \
    --force-failover

For Windows:

aws rds reboot-db-instance ^
    --db-instance-identifier mydbinstance ^
    --force-failover

RDS API
To reboot a DB instance by using the Amazon RDS API, call the RebootDBInstance operation.


Working with DB instance read replicas


A read replica is a read-only copy of a DB instance. You can reduce the load on your primary DB instance
by routing queries from your applications to the read replica. In this way, you can elastically scale out
beyond the capacity constraints of a single DB instance for read-heavy database workloads.

To create a read replica from a source DB instance, Amazon RDS uses the built-in replication features
of the DB engine. For information about using read replicas with a specific engine, see the following
sections:

• Working with MariaDB read replicas (p. 1318)


• Working with read replicas for Microsoft SQL Server in Amazon RDS (p. 1446)
• Working with MySQL read replicas (p. 1708)
• Working with read replicas for Amazon RDS for Oracle (p. 1973)
• Working with read replicas for Amazon RDS for PostgreSQL (p. 2212)

After you create a read replica from a source DB instance, the source becomes the primary DB instance.
When you make updates to the primary DB instance, Amazon RDS copies them asynchronously to the
read replica. For example, a source DB instance might replicate to a read replica in a different
Availability Zone (AZ). Clients have read/write access to the primary DB instance and read-only access to
the replica.


Topics
• Overview of Amazon RDS read replicas (p. 439)
• Creating a read replica (p. 445)
• Promoting a read replica to be a standalone DB instance (p. 447)
• Monitoring read replication (p. 449)
• Creating a read replica in a different AWS Region (p. 452)

Overview of Amazon RDS read replicas


The following sections discuss DB instance read replicas. For information about Multi-AZ DB cluster read
replicas, see the section called “Working with Multi-AZ DB cluster read replicas” (p. 554).

Topics
• Use cases for read replicas (p. 440)
• How read replicas work (p. 440)
• Read replicas in a Multi-AZ deployment (p. 440)


• Cross-Region read replicas (p. 441)


• Differences among read replicas for DB engines (p. 441)
• Read replica storage types (p. 444)
• Restrictions for creating a replica from a replica (p. 444)
• Considerations when deleting replicas (p. 445)

Use cases for read replicas


Deploying one or more read replicas for a given source DB instance might make sense in a variety of
scenarios, including the following:

• Scaling beyond the compute or I/O capacity of a single DB instance for read-heavy database
workloads. You can direct this excess read traffic to one or more read replicas.
• Serving read traffic while the source DB instance is unavailable. In some cases, your source DB instance
might not be able to take I/O requests, for example due to I/O suspension for backups or scheduled
maintenance. In these cases, you can direct read traffic to your read replicas. For this use case, keep in
mind that the data on the read replica might be "stale" because the source DB instance is unavailable.
• Business reporting or data warehousing scenarios where you might want business reporting queries to
run against a read replica, rather than your production DB instance.
• Implementing disaster recovery. You can promote a read replica to a standalone instance as a disaster
recovery solution if the primary DB instance fails.

How read replicas work


When you create a read replica, you first specify an existing DB instance as the source. Then Amazon RDS
takes a snapshot of the source instance and creates a read-only instance from the snapshot. Amazon RDS
then uses the asynchronous replication method for the DB engine to update the read replica whenever
there is a change to the primary DB instance.

The read replica operates as a DB instance that allows only read-only connections. An exception is the
RDS for Oracle DB engine, which supports replica databases in mounted mode. A mounted replica
doesn't accept user connections and so can't serve a read-only workload. The primary use for mounted
replicas is cross-Region disaster recovery. For more information, see Working with read replicas for
Amazon RDS for Oracle (p. 1973).

Applications connect to a read replica just as they do to any DB instance. Amazon RDS replicates all
databases from the source DB instance.

Read replicas in a Multi-AZ deployment


You can configure a read replica for a DB instance that also has a standby replica configured for high
availability in a Multi-AZ deployment. Replication with the standby replica is synchronous. Unlike a read
replica, a standby replica can't serve read traffic.

In the following scenario, clients have read/write access to a primary DB instance in one AZ. The
primary instance copies updates asynchronously to a read replica in a second AZ and also copies them
synchronously to a standby replica in a third AZ. Clients have read access only to the read replica.


For more information about high availability and standby replicas, see Configuring and managing a
Multi-AZ deployment (p. 492).

Cross-Region read replicas


In some cases, a read replica resides in a different AWS Region from its primary DB instance. In these
cases, Amazon RDS sets up a secure communications channel between the primary DB instance and
the read replica. Amazon RDS establishes any AWS security configurations needed to enable the secure
channel, such as adding security group entries. For more information about cross-Region read replicas,
see Creating a read replica in a different AWS Region (p. 452).

The information in this chapter applies to creating Amazon RDS read replicas either in the same AWS
Region as the source DB instance, or in a separate AWS Region. The following information doesn't apply
to setting up replication with an instance that is running on an Amazon EC2 instance or that is on-
premises.

Differences among read replicas for DB engines


Because Amazon RDS DB engines implement replication differently, there are several significant
differences you should know about, as described following for each feature or behavior.

What is the replication method?

• MySQL and MariaDB – Logical replication.
• Oracle – Physical replication.
• PostgreSQL – Physical replication.
• SQL Server – Physical replication.

How are transaction logs purged?

• MySQL and MariaDB – RDS for MySQL and RDS for MariaDB keep any binary logs that haven't been applied.
• Oracle – If a primary DB instance has no cross-Region read replicas, Amazon RDS for Oracle keeps a minimum of two hours of transaction logs on the source DB instance. Logs are purged from the source DB instance after two hours or after the archive log retention hours setting has passed, whichever is longer. Logs are purged from the read replica after the archive log retention hours setting has passed only if they have been successfully applied to the database. In some cases, a primary DB instance might have one or more cross-Region read replicas. If so, Amazon RDS for Oracle keeps the transaction logs on the source DB instance until they have been transmitted and applied to all cross-Region read replicas. For information about setting archive log retention hours, see Retaining archived redo logs (p. 1893).
• PostgreSQL – PostgreSQL has the parameter wal_keep_segments that dictates how many write ahead log (WAL) files are kept to provide data to the read replicas. The parameter value specifies the number of logs to keep.
• SQL Server – The Virtual Log File (VLF) of the transaction log file on the primary replica can be truncated after it is no longer required for the secondary replicas. The VLF can only be marked as inactive when the log records have been hardened in the replicas. Regardless of how fast the disk subsystems are in the primary replica, the transaction log will keep the VLFs until the slowest replica has hardened it.

Can a replica be made writable?

• MySQL and MariaDB – Yes. You can enable the MySQL or MariaDB read replica to be writable.
• Oracle – No. An Oracle read replica is a physical copy, and Oracle doesn't allow for writes in a read replica. You can promote the read replica to make it writable. The promoted read replica has the replicated data to the point when the request was made to promote it.
• PostgreSQL – No. A PostgreSQL read replica is a physical copy, and PostgreSQL doesn't allow for a read replica to be made writable.
• SQL Server – No. A SQL Server read replica is a physical copy and also doesn't allow for writes. You can promote the read replica to make it writable. The promoted read replica has the replicated data up to the point when the request was made to promote it.

Can backups be performed on the replica?

• MySQL and MariaDB – Yes. Automatic backups and manual snapshots are supported on RDS for MySQL or RDS for MariaDB read replicas.
• Oracle – Yes. Automatic backups and manual snapshots are supported on RDS for Oracle read replicas.
• PostgreSQL – Yes, you can create a manual snapshot of RDS for PostgreSQL read replicas. Automated backups for read replicas are supported for RDS for PostgreSQL 14.1 and higher versions only. You can't turn on automated backups for PostgreSQL read replicas for RDS for PostgreSQL versions earlier than 14.1. For RDS for PostgreSQL 13 and earlier versions, create a snapshot from a read replica if you want a backup of it.
• SQL Server – No. Automatic backups and manual snapshots aren't supported on RDS for SQL Server read replicas.

Can you use parallel replication?

• MySQL and MariaDB – Yes. All supported MariaDB and MySQL versions allow for parallel replication threads.
• Oracle – Yes. Redo log data is always transmitted in parallel from the primary database to all of its read replicas.
• PostgreSQL – No. PostgreSQL has a single process handling replication.
• SQL Server – Yes. Redo log data is always transmitted in parallel from the primary database to all of its read replicas.

Can you maintain a replica in a mounted rather than a read-only state?

• MySQL and MariaDB – No.
• Oracle – Yes. The primary use for mounted replicas is cross-Region disaster recovery. An Active Data Guard license isn't required for mounted replicas. For more information, see Working with read replicas for Amazon RDS for Oracle (p. 1973).
• PostgreSQL – No.
• SQL Server – No.

Read replica storage types


By default, a read replica is created with the same storage type as the source DB instance. However, you
can create a read replica that has a different storage type from the source DB instance, based on the
following options.

• Source storage type Provisioned IOPS, allocation 100 GiB–64 TiB – the read replica can use Provisioned IOPS, General Purpose, or Magnetic storage.
• Source storage type General Purpose, allocation 100 GiB–64 TiB – the read replica can use Provisioned IOPS, General Purpose, or Magnetic storage.
• Source storage type General Purpose, allocation less than 100 GiB – the read replica can use General Purpose or Magnetic storage.
• Source storage type Magnetic, allocation 100 GiB–6 TiB – the read replica can use Provisioned IOPS, General Purpose, or Magnetic storage.
• Source storage type Magnetic, allocation less than 100 GiB – the read replica can use General Purpose or Magnetic storage.

Note
When you increase the allocated storage of a read replica, it must be by at least 10 percent. If
you try to increase the value by less than 10 percent, you get an error.

Restrictions for creating a replica from a replica


Amazon RDS doesn't support circular replication. You can't configure a DB instance to serve as a
replication source for an existing DB instance. You can only create a new read replica from an existing
DB instance. For example, if MySourceDBInstance replicates to ReadReplica1, you can't configure
ReadReplica1 to replicate back to MySourceDBInstance.

For RDS for MariaDB and RDS for MySQL, and for certain versions of RDS for PostgreSQL, you can create
a read replica from an existing read replica. For example, you can create new read replica ReadReplica2
from existing replica ReadReplica1. For RDS for Oracle and RDS for SQL Server, you can't create a read
replica from an existing read replica.


Considerations when deleting replicas


If you no longer need read replicas, you can explicitly delete them using the same mechanisms for
deleting a DB instance. If you delete a source DB instance without deleting its read replicas in the same
AWS Region, each read replica is promoted to a standalone DB instance. For information about deleting
a DB instance, see Deleting a DB instance (p. 489). For information about read replica promotion, see
Promoting a read replica to be a standalone DB instance (p. 447).

If you have cross-Region read replicas, see Cross-Region replication considerations (p. 456) for
information related to deleting the source DB instance for a cross-Region read replica.

Creating a read replica


You can create a read replica from an existing DB instance using the AWS Management Console, AWS CLI,
or RDS API. You create a read replica by specifying SourceDBInstanceIdentifier, which is the DB
instance identifier of the source DB instance that you want to replicate from.

When you create a read replica, Amazon RDS takes a DB snapshot of your source DB instance and begins
replication. As a result, you experience a brief I/O suspension on your source DB instance while the DB
snapshot occurs.
Note
The I/O suspension typically lasts about one minute. You can avoid the I/O suspension if the
source DB instance is a Multi-AZ deployment, because in that case the snapshot is taken from
the secondary DB instance.

An active, long-running transaction can slow the process of creating the read replica. We recommend
that you wait for long-running transactions to complete before creating a read replica. If you create
multiple read replicas in parallel from the same source DB instance, Amazon RDS takes only one
snapshot at the start of the first create action.

When creating a read replica, there are a few things to consider. First, you must enable automatic
backups on the source DB instance by setting the backup retention period to a value other than 0. This
requirement also applies to a read replica that is the source DB instance for another read replica. To
enable automatic backups on an RDS for MySQL read replica, first create the read replica, then modify
the read replica to enable automatic backups.
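For example, after the read replica has been created, a command like the following turns on automatic backups by setting a nonzero retention period; the identifier and retention value are placeholders.

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier myreadreplica \
    --backup-retention-period 7 \
    --apply-immediately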
Note
Within an AWS Region, we strongly recommend that you create all read replicas in the same
virtual private cloud (VPC) based on Amazon VPC as the source DB instance. If you create a read
replica in a different VPC from the source DB instance, classless inter-domain routing (CIDR)
ranges can overlap between the replica and the RDS system. CIDR overlap makes the replica
unstable, which can negatively impact applications connecting to it. If you receive an error when
creating the read replica, choose a different destination DB subnet group. For more information,
see Working with a DB instance in a VPC (p. 2688).
There is no direct way to create a read replica in another AWS account using the console or AWS
CLI.

Console
To create a read replica from a source DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to use as the source for a read replica.
4. For Actions, choose Create read replica.
5. For DB instance identifier, enter a name for the read replica.


6. Choose your instance configuration. We recommend that you use the same or larger DB instance
class and storage type as the source DB instance for the read replica.
7. For AWS Region, specify the destination Region for the read replica.
8. For Storage, specify the allocated storage size and whether you want to use storage autoscaling.
9. For Availability, choose whether to create a standby of your replica in another Availability Zone for
failover support for the replica.
Note
Creating your read replica as a Multi-AZ DB instance is independent of whether the source
database is a Multi-AZ DB instance.
10. Specify other DB instance settings. For information about each available setting, see Settings for DB
instances (p. 308).
11. To create an encrypted read replica, expand Additional configuration and specify the following
settings:

a. Choose Enable encryption.


b. For AWS KMS key, choose the AWS KMS key identifier of the KMS key.

Note
The source DB instance must be encrypted. To learn more about encrypting the source DB
instance, see Encrypting Amazon RDS resources (p. 2586).
12. Choose Create read replica.

After the read replica is created, you can see it on the Databases page in the RDS console. It shows
Replica in the Role column.

AWS CLI
To create a read replica from a source DB instance, use the AWS CLI command create-db-instance-read-
replica. This example also sets the allocated storage size and enables storage autoscaling.

You can specify other settings. For information about each setting, see Settings for DB instances (p. 308).

Example
For Linux, macOS, or Unix:

aws rds create-db-instance-read-replica \
    --db-instance-identifier myreadreplica \
    --source-db-instance-identifier mydbinstance \
    --allocated-storage 100 \
    --max-allocated-storage 1000

For Windows:

aws rds create-db-instance-read-replica ^
    --db-instance-identifier myreadreplica ^
    --source-db-instance-identifier mydbinstance ^
    --allocated-storage 100 ^
    --max-allocated-storage 1000

RDS API
To create a read replica from a source MySQL, MariaDB, Oracle, PostgreSQL, or SQL Server DB instance,
call the Amazon RDS API CreateDBInstanceReadReplica operation with the following required
parameters:


• DBInstanceIdentifier
• SourceDBInstanceIdentifier

Promoting a read replica to be a standalone DB instance
You can promote a read replica into a standalone DB instance. When you promote a read replica, the DB
instance is rebooted before it becomes available.

There are several reasons you might want to promote a read replica to a standalone DB instance:


• Performing DDL operations (MySQL and MariaDB only) – DDL operations, such as creating or
rebuilding indexes, can take time and impose a significant performance penalty on your DB instance.
You can perform these operations on a MySQL or MariaDB read replica once the read replica is in sync
with its primary DB instance. Then you can promote the read replica and direct your applications to
use the promoted instance.
• Sharding – Sharding embodies the "share-nothing" architecture and essentially involves breaking a
large database into several smaller databases. One common way to split a database is splitting tables
that are not joined in the same query onto different hosts. Another method is duplicating a table
across multiple hosts and then using a hashing algorithm to determine which host receives a given
update. You can create read replicas corresponding to each of your shards (smaller databases) and
promote them when you decide to convert them into standalone shards. You can then carve out the
key space (if you are splitting rows) or distribution of tables for each of the shards depending on your
requirements.
• Implementing failure recovery – You can use read replica promotion as a data recovery scheme if
the primary DB instance fails. This approach complements synchronous replication, automatic failure
detection, and failover.

If you are aware of the ramifications and limitations of asynchronous replication and you still want to
use read replica promotion for data recovery, you can. To do this, first create a read replica and then
monitor the primary DB instance for failures. In the event of a failure, do the following:
1. Promote the read replica.
2. Direct database traffic to the promoted DB instance.
3. Create a replacement read replica with the promoted DB instance as its source.

When you promote a read replica, the new DB instance that is created retains the option group and the
parameter group of the former read replica. The promotion process can take several minutes or longer
to complete, depending on the size of the read replica. After you promote the read replica to a new DB
instance, it's just like any other DB instance. For example, you can create read replicas from the new DB
instance and perform point-in-time restore operations. Because the promoted DB instance is no longer
a read replica, you can't use it as a replication target. If a source DB instance has several read replicas,
promoting one of the read replicas to a DB instance has no effect on the other replicas.

Backup duration is a function of the number of changes to the database since the previous backup. If
you plan to promote a read replica to a standalone instance, we recommend that you enable backups
and complete at least one backup prior to promotion. In addition, you can't promote a read replica to
a standalone instance when it has the backing-up status. If you have enabled backups on your read
replica, configure the automated backup window so that daily backups don't interfere with read replica
promotion.

The following steps show the general process for promoting a read replica to a DB instance:

1. Stop any transactions from being written to the primary DB instance, and then wait for all updates to
be made to the read replica. Database updates occur on the read replica after they have occurred on
the primary DB instance, and this replication lag can vary significantly. Use the Replica Lag metric
to determine when all updates have been made to the read replica.
2. For MySQL and MariaDB only: If you need to make changes to the MySQL or MariaDB read replica, you
must set the read_only parameter to 0 in the DB parameter group for the read replica. You can then
perform all needed DDL operations, such as creating indexes, on the read replica. Actions taken on the
read replica don't affect the performance of the primary DB instance.
3. Promote the read replica by using the Promote option on the Amazon RDS console, the AWS CLI
command promote-read-replica, or the PromoteReadReplica Amazon RDS API operation.
Note
The promotion process takes a few minutes to complete. When you promote a read replica,
replication is stopped and the read replica is rebooted. When the reboot is complete, the read
replica is available as a new DB instance.


4. (Optional) Modify the new DB instance to be a Multi-AZ deployment. For more information,
see Modifying an Amazon RDS DB instance (p. 401) and Configuring and managing a Multi-AZ
deployment (p. 492).

Console

To promote a read replica to a standalone DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the Amazon RDS console, choose Databases.

The Databases pane appears. Each read replica shows Replica in the Role column.
3. Choose the read replica that you want to promote.
4. For Actions, choose Promote.
5. On the Promote Read Replica page, enter the backup retention period and the backup window for
the newly promoted DB instance.
6. When the settings are as you want them, choose Continue.
7. On the acknowledgment page, choose Promote Read Replica.

AWS CLI
To promote a read replica to a standalone DB instance, use the AWS CLI promote-read-replica
command.

Example

For Linux, macOS, or Unix:

aws rds promote-read-replica \
    --db-instance-identifier myreadreplica

For Windows:

aws rds promote-read-replica ^
    --db-instance-identifier myreadreplica
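If you also want to set the backup retention period and backup window during promotion, as the console prompts for, you can pass them to the same command; the values shown here are placeholders.

For Linux, macOS, or Unix:

aws rds promote-read-replica \
    --db-instance-identifier myreadreplica \
    --backup-retention-period 7 \
    --preferred-backup-window 03:30-04:00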

RDS API
To promote a read replica to a standalone DB instance, call the Amazon RDS API PromoteReadReplica
operation with the required parameter DBInstanceIdentifier.

Monitoring read replication


You can monitor the status of a read replica in several ways. The Amazon RDS console shows the status
of a read replica in the Replication section of the Connectivity & security tab in the read replica details.
To view the details for a read replica, choose the name of the read replica in the list of DB instances in
the Amazon RDS console.


You can also see the status of a read replica using the AWS CLI describe-db-instances command or
the Amazon RDS API DescribeDBInstances operation.
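For example, the following AWS CLI query returns the read replica status information for a replica; the identifier is a placeholder.

For Linux, macOS, or Unix:

aws rds describe-db-instances \
    --db-instance-identifier myreadreplica \
    --query "DBInstances[*].StatusInfos" \
    --output table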

The status of a read replica can be one of the following:

• replicating – The read replica is replicating successfully.


• replication degraded (SQL Server only) – Replicas are receiving data from the primary instance, but
one or more databases might not be getting updates. This can occur, for example, when a replica is in
the process of setting up newly created databases.

The status doesn't transition from replication degraded to error, unless an error occurs during
the degraded state.
• error – An error has occurred with the replication. Check the Replication Error field in the
Amazon RDS console or the event log to determine the exact error. For more information about
troubleshooting a replication error, see Troubleshooting a MySQL read replica problem (p. 1718).
• terminated (MariaDB, MySQL, or PostgreSQL only) – Replication is terminated. This occurs if
replication is stopped for more than 30 consecutive days, either manually or due to a replication error.
In this case, Amazon RDS terminates replication between the primary DB instance and all read replicas.
Amazon RDS does this to prevent increased storage requirements on the source DB instance and long
failover times.

Broken replication can affect storage because the logs can grow in size and number due to the high
volume of error messages being written to the log. Broken replication can also affect failure recovery
due to the time Amazon RDS requires to maintain and process the large number of logs during
recovery.
• terminated (Oracle only) – Replication is terminated. This occurs if replication is stopped for more
than 8 hours because there isn't enough storage remaining on the read replica. In this case, Amazon
RDS terminates replication between the primary DB instance and the affected read replica. This status
is a terminal state, and the read replica must be re-created.
• stopped (MariaDB or MySQL only) – Replication has stopped because of a customer-initiated request.
• replication stop point set (MySQL only) – A customer-initiated stop point was set using the
mysql.rds_start_replication_until (p. 1780) stored procedure and the replication is in progress.
• replication stop point reached (MySQL only) – A customer-initiated stop point was set using the
mysql.rds_start_replication_until (p. 1780) stored procedure and replication is stopped because the
stop point was reached.


You can also check whether a DB instance is being replicated and, if so, view its replication status. On the
Databases page in the RDS console, a source DB instance shows Primary in the Role column. Choose its
DB instance name. On its detail page, on the Connectivity & security tab, the replication status appears
under Replication.

Monitoring replication lag


You can monitor replication lag in Amazon CloudWatch by viewing the Amazon RDS ReplicaLag
metric.
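For example, the following AWS CLI command retrieves recent ReplicaLag values for a read replica; the identifier and time range are placeholders.

For Linux, macOS, or Unix:

aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name ReplicaLag \
    --dimensions Name=DBInstanceIdentifier,Value=myreadreplica \
    --start-time 2023-01-01T00:00:00Z \
    --end-time 2023-01-01T01:00:00Z \
    --period 60 \
    --statistics Average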

For MariaDB and MySQL, the ReplicaLag metric reports the value of the Seconds_Behind_Master
field of the SHOW REPLICA STATUS command. Common causes for replication lag for MySQL and
MariaDB are the following:

• A network outage.
• Writing to tables with indexes on a read replica. If the read_only parameter is not set to 0 on the
read replica, it can break replication.
• Using a nontransactional storage engine such as MyISAM. Replication is only supported for the InnoDB
storage engine on MySQL and the XtraDB storage engine on MariaDB.

Note
Previous versions of MariaDB and MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA
STATUS. If you are using a MariaDB version before 10.5 or a MySQL version before 8.0.23, then
use SHOW SLAVE STATUS.

When the ReplicaLag metric reaches 0, the replica has caught up to the primary DB instance. If the
ReplicaLag metric returns -1, then replication is currently not active. ReplicaLag = -1 is equivalent
to Seconds_Behind_Master = NULL.

For Oracle, the ReplicaLag metric is the sum of the Apply Lag value and the difference between the
current time and the apply lag's DATUM_TIME value. The DATUM_TIME value is the last time the read
replica received data from its source DB instance. For more information, see V$DATAGUARD_STATS in
the Oracle documentation.

For SQL Server, the ReplicaLag metric is the maximum lag of databases that have fallen behind, in
seconds. For example, if you have two databases that lag 5 seconds and 10 seconds, respectively, then
ReplicaLag is 10 seconds. The ReplicaLag metric returns the value of the following query.

SELECT MAX(secondary_lag_seconds) max_lag FROM sys.dm_hadr_database_replica_states;

For more information, see secondary_lag_seconds in the Microsoft documentation.

ReplicaLag returns -1 if RDS can't determine the lag, such as during replica setup, or when the read
replica is in the error state.
Note
New databases aren't included in the lag calculation until they are accessible on the read replica.

For PostgreSQL, the ReplicaLag metric returns the value of the following query.

SELECT extract(epoch from now() - pg_last_xact_replay_timestamp()) AS reader_lag

PostgreSQL versions 9.5.2 and later use physical replication slots to manage write ahead log (WAL)
retention on the source instance. For each cross-Region read replica instance, Amazon RDS creates a
physical replication slot and associates it with the instance. Two Amazon CloudWatch metrics, Oldest
Replication Slot Lag and Transaction Logs Disk Usage, show how far behind the most
lagging replica is in terms of WAL data received and how much storage is being used for WAL data. The
Transaction Logs Disk Usage value can substantially increase when a cross-Region read replica is
lagging significantly.

For more information about monitoring a DB instance with CloudWatch, see Monitoring Amazon RDS
metrics with Amazon CloudWatch (p. 706).

Creating a read replica in a different AWS Region


With Amazon RDS, you can create a read replica in a different AWS Region from the source DB instance.

You create a read replica in a different AWS Region to do the following:

• Improve your disaster recovery capabilities.


• Scale read operations into an AWS Region closer to your users.
• Make it easier to migrate from a data center in one AWS Region to a data center in another AWS
Region.

Creating a read replica in a different AWS Region from the source instance is similar to creating a replica
in the same AWS Region. You can use the AWS Management Console, run the create-db-instance-
read-replica command, or call the CreateDBInstanceReadReplica API operation.
Note
To create an encrypted read replica in a different AWS Region from the source DB instance, the
source DB instance must be encrypted.

Region and version availability


Feature availability and support varies across specific versions of each database engine, and across AWS
Regions. For more information on version and Region availability with cross-Region replication, see
Cross-Region read replicas (p. 119).


Creating a cross-Region read replica


The following procedures show how to create a read replica from a source MariaDB, Microsoft SQL
Server, MySQL, Oracle, or PostgreSQL DB instance in a different AWS Region.

Console

You can create a read replica across AWS Regions using the AWS Management Console.

To create a read replica across AWS Regions with the console

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the MariaDB, Microsoft SQL Server, MySQL, Oracle, or PostgreSQL DB instance that you
want to use as the source for a read replica.
4. For Actions, choose Create read replica.
5. For DB instance identifier, enter a name for the read replica.
6. Choose the Destination Region.
7. Choose the instance specifications that you want to use. We recommend that you use the same or
larger DB instance class and storage type for the read replica.
8. To create an encrypted read replica in another AWS Region:

a. Choose Enable encryption.


b. For AWS KMS key, choose the AWS KMS key identifier of the KMS key in the destination AWS
Region.

Note
To create an encrypted read replica, the source DB instance must be encrypted. To
learn more about encrypting the source DB instance, see Encrypting Amazon RDS
resources (p. 2586).
9. Choose other options, such as storage autoscaling.
10. Choose Create read replica.

AWS CLI

To create a read replica from a source MySQL, Microsoft SQL Server, MariaDB, Oracle, or PostgreSQL DB
instance in a different AWS Region, you can use the create-db-instance-read-replica command.
In this case, you use create-db-instance-read-replica from the AWS Region where you want
the read replica (destination Region) and specify the Amazon Resource Name (ARN) for the source DB
instance. An ARN uniquely identifies a resource created in Amazon Web Services.

For example, if your source DB instance is in the US East (N. Virginia) Region, the ARN looks similar to this
example:

arn:aws:rds:us-east-1:123456789012:db:mydbinstance

For information about ARNs, see Working with Amazon Resource Names (ARNs) in Amazon
RDS (p. 471).

To create a read replica in a different AWS Region from the source DB instance, you can use the AWS CLI
create-db-instance-read-replica command from the destination AWS Region. The following
parameters are required for creating a read replica in another AWS Region:


• --region – The destination AWS Region where the read replica is created.
• --source-db-instance-identifier – The DB instance identifier for the source DB instance. This
identifier must be in the ARN format for the source AWS Region.
• --db-instance-identifier – The identifier for the read replica in the destination AWS Region.

Example of a cross-Region read replica

The following code creates a read replica in the US West (Oregon) Region from a source DB instance in
the US East (N. Virginia) Region.

For Linux, macOS, or Unix:

aws rds create-db-instance-read-replica \
--db-instance-identifier myreadreplica \
--region us-west-2 \
--source-db-instance-identifier arn:aws:rds:us-east-1:123456789012:db:mydbinstance

For Windows:

aws rds create-db-instance-read-replica ^
--db-instance-identifier myreadreplica ^
--region us-west-2 ^
--source-db-instance-identifier arn:aws:rds:us-east-1:123456789012:db:mydbinstance

The following parameter is also required for creating an encrypted read replica in another AWS Region:

• --kms-key-id – The AWS KMS key identifier of the KMS key to use to encrypt the read replica in the
destination AWS Region.

Example of an encrypted cross-Region read replica

The following code creates an encrypted read replica in the US West (Oregon) Region from a source DB
instance in the US East (N. Virginia) Region.

For Linux, macOS, or Unix:

aws rds create-db-instance-read-replica \
--db-instance-identifier myreadreplica \
--region us-west-2 \
--source-db-instance-identifier arn:aws:rds:us-east-1:123456789012:db:mydbinstance \
--kms-key-id my-us-west-2-key

For Windows:

aws rds create-db-instance-read-replica ^
--db-instance-identifier myreadreplica ^
--region us-west-2 ^
--source-db-instance-identifier arn:aws:rds:us-east-1:123456789012:db:mydbinstance ^
--kms-key-id my-us-west-2-key

The --source-region option is required when you're creating an encrypted read replica between the
AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. For --source-region, specify the
AWS Region of the source DB instance.


If --source-region isn't specified, specify a --pre-signed-url value. A presigned URL is a URL that
contains a Signature Version 4 signed request for the create-db-instance-read-replica command
that's called in the source AWS Region. To learn more about the pre-signed-url option, see create-
db-instance-read-replica in the AWS CLI Command Reference.
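As an illustration only, an encrypted cross-Region read replica between the AWS GovCloud (US) Regions might be created with a command similar to the following. The identifiers and KMS key are placeholders, and the example assumes the aws-us-gov ARN partition that the GovCloud (US) Regions use.

aws rds create-db-instance-read-replica \
--db-instance-identifier myreadreplica \
--region us-gov-west-1 \
--source-region us-gov-east-1 \
--source-db-instance-identifier arn:aws-us-gov:rds:us-gov-east-1:123456789012:db:mydbinstance \
--kms-key-id my-us-gov-west-1-key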

RDS API

To create a read replica from a source MySQL, Microsoft SQL Server, MariaDB, Oracle, or
PostgreSQL DB instance in a different AWS Region, you can call the Amazon RDS API operation
CreateDBInstanceReadReplica. In this case, you call CreateDBInstanceReadReplica from the AWS Region
where you want the read replica (destination Region) and specify the Amazon Resource Name (ARN) for
the source DB instance. An ARN uniquely identifies a resource created in Amazon Web Services.

To create an encrypted read replica in a different AWS Region from the source DB instance, you can use
the Amazon RDS API CreateDBInstanceReadReplica operation from the destination AWS Region. To
create an encrypted read replica in another AWS Region, you must specify a value for PreSignedURL.
PreSignedURL should contain a request for the CreateDBInstanceReadReplica operation to call
in the source AWS Region where the read replica is created in. To learn more about PreSignedUrl, see
CreateDBInstanceReadReplica.

For example, if your source DB instance is in the US East (N. Virginia) Region, the ARN looks similar to the
following.

arn:aws:rds:us-east-1:123456789012:db:mydbinstance

For information about ARNs, see Working with Amazon Resource Names (ARNs) in Amazon
RDS (p. 471).

Example

https://us-west-2.rds.amazonaws.com/
?Action=CreateDBInstanceReadReplica
&KmsKeyId=my-us-east-1-key
&PreSignedUrl=https%253A%252F%252Frds.us-west-2.amazonaws.com%252F
%253FAction%253DCreateDBInstanceReadReplica
%2526DestinationRegion%253Dus-east-1
%2526KmsKeyId%253Dmy-us-east-1-key
%2526SourceDBInstanceIdentifier%253Darn%25253Aaws%25253Ards%25253Aus-west-2%25253A123456789012%25253Adb%25253Amydbinstance
%2526SignatureMethod%253DHmacSHA256
%2526SignatureVersion%253D4%2526SourceDBInstanceIdentifier%253Darn%25253Aaws
%25253Ards%25253Aus-west-2%25253A123456789012%25253Ainstance%25253Amydbinstance
%2526Version%253D2014-10-31
%2526X-Amz-Algorithm%253DAWS4-HMAC-SHA256
%2526X-Amz-Credential%253DAKIADQKE4SARGYLE%252F20161117%252Fus-west-2%252Frds
%252Faws4_request
%2526X-Amz-Date%253D20161117T215409Z
%2526X-Amz-Expires%253D3600
%2526X-Amz-SignedHeaders%253Dcontent-type%253Bhost%253Buser-agent%253Bx-amz-
content-sha256%253Bx-amz-date
%2526X-Amz-Signature
%253D255a0f17b4e717d3b67fad163c3ec26573b882c03a65523522cf890a67fca613
&DBInstanceIdentifier=myreadreplica
&SourceDBInstanceIdentifier=arn:aws:rds:us-east-1:123456789012:db:mydbinstance
&Version=2012-01-15
&SignatureVersion=2
&SignatureMethod=HmacSHA256
&Timestamp=2012-01-20T22%3A06%3A23.624Z
&AWSAccessKeyId=<AWS Access Key ID>
&Signature=<Signature>


How Amazon RDS does cross-Region replication


Amazon RDS uses the following process to create a cross-Region read replica. Depending on the AWS
Regions involved and the amount of data in the databases, this process can take hours to complete. You
can use this information to determine how far the process has proceeded when you create a cross-Region
read replica:

1. Amazon RDS begins configuring the source DB instance as a replication source and sets the status to
modifying.
2. Amazon RDS begins setting up the specified read replica in the destination AWS Region and sets the
status to creating.
3. Amazon RDS creates an automated DB snapshot of the source DB instance in the source AWS Region.
The format of the DB snapshot name is rds:<InstanceID>-<timestamp>, where <InstanceID>
is the identifier of the source instance, and <timestamp> is the date and time the copy started.
For example, rds:mysourceinstance-2013-11-14-09-24 was created from the instance
mysourceinstance at 2013-11-14-09-24. During the creation of an automated DB snapshot,
the source DB instance status remains modifying, the read replica status remains creating, and the DB
snapshot status is creating. The progress column of the DB snapshot page in the console reports how
far the DB snapshot creation has progressed. When the DB snapshot is complete, the status of both
the DB snapshot and source DB instance are set to available.
4. Amazon RDS begins a cross-Region snapshot copy for the initial data transfer. The snapshot copy is
listed as an automated snapshot in the destination AWS Region with a status of creating. It has the
same name as the source DB snapshot. The progress column of the DB snapshot display indicates how
far the copy has progressed. When the copy is complete, the status of the DB snapshot copy is set to
available.
5. Amazon RDS then uses the copied DB snapshot for the initial data load on the read replica. During this
phase, the read replica is in the list of DB instances in the destination, with a status of creating. When
the load is complete, the read replica status is set to available, and the DB snapshot copy is deleted.
6. When the read replica reaches the available status, Amazon RDS starts by replicating the changes
made to the source instance since the start of the create read replica operation. During this phase, the
replication lag time for the read replica is greater than 0.

For information about replication lag time, see Monitoring read replication (p. 449).
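For example, while the replica is being created, you might check its status and replication state information from the destination AWS Region with a command similar to the following. The identifier is a placeholder.

aws rds describe-db-instances \
--db-instance-identifier myreadreplica \
--query "DBInstances[].{Status:DBInstanceStatus,Replication:StatusInfos}"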

Cross-Region replication considerations


All of the considerations for performing replication within an AWS Region apply to cross-Region
replication. The following extra considerations apply when replicating between AWS Regions:

• A source DB instance can have cross-Region read replicas in multiple AWS Regions.
• You can replicate between the GovCloud (US-East) and GovCloud (US-West) Regions, but not into or
out of GovCloud (US).
• For Microsoft SQL Server, Oracle, and PostgreSQL DB instances, you can only create a cross-Region
Amazon RDS read replica from a source Amazon RDS DB instance that is not a read replica of another
Amazon RDS DB instance. This limitation doesn't apply to MariaDB and MySQL DB instances.
• You can expect to see a higher level of lag time for any read replica that is in a different AWS Region
than the source instance. This lag time comes from the longer network channels between regional
data centers.
• For cross-Region read replicas, any of the create read replica commands that specify the --db-
subnet-group-name parameter must specify a DB subnet group from the same VPC.
• Because of the limit on the number of access control list (ACL) entries for the source VPC, we can't
guarantee more than five cross-Region read replica instances.


• In most cases, the read replica uses the default DB parameter group and DB option group for the
specified DB engine.

For the MySQL and Oracle DB engines, you can specify a custom parameter group for the read replica
in the --db-parameter-group-name option of the AWS CLI command create-db-instance-
read-replica. You can't specify a custom parameter group when you use the AWS Management
Console.
• The read replica uses the default security group.
• For MariaDB, Microsoft SQL Server, MySQL, and Oracle DB instances, when the source DB instance for
a cross-Region read replica is deleted, the read replica is promoted.
• For PostgreSQL DB instances, when the source DB instance for a cross-Region read replica is deleted,
the replication status of the read replica is set to terminated. The read replica isn't promoted.

You have to promote the read replica manually or delete it.
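For example, you might promote such an orphaned read replica with a command similar to the following. The identifier is a placeholder.

aws rds promote-read-replica \
--db-instance-identifier myreadreplica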

Requesting a cross-Region read replica


To communicate with the source Region to request the creation of a cross-Region read replica, the
requester (IAM role or IAM user) must have access to the source DB instance and the source Region.

Certain conditions in the requester's IAM policy can cause the request to fail. The following examples
assume that the source DB instance is in US East (Ohio) and the read replica is created in US East (N.
Virginia). These examples show conditions in the requester's IAM policy that cause the request to fail:

• The requester's policy has a condition for aws:RequestedRegion.

...
"Effect": "Allow",
"Action": "rds:CreateDBInstanceReadReplica",
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:RequestedRegion": "us-east-1"
}
}

The request fails because the policy doesn't allow access to the source Region. For a successful request,
specify both the source and destination Regions.

...
"Effect": "Allow",
"Action": "rds:CreateDBInstanceReadReplica",
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:RequestedRegion": [
"us-east-1",
"us-east-2"
]
}
}

• The requester's policy doesn't allow access to the source DB instance.

...
"Effect": "Allow",
"Action": "rds:CreateDBInstanceReadReplica",
"Resource": "arn:aws:rds:us-east-1:123456789012:db:myreadreplica"


...

For a successful request, specify both the source instance and the replica.

...
"Effect": "Allow",
"Action": "rds:CreateDBInstanceReadReplica",
"Resource": [
"arn:aws:rds:us-east-1:123456789012:db:myreadreplica",
"arn:aws:rds:us-east-2:123456789012:db:mydbinstance"
]
...

• The requester's policy denies aws:ViaAWSService.

...
"Effect": "Allow",
"Action": "rds:CreateDBInstanceReadReplica",
"Resource": "*",
"Condition": {
"Bool": {"aws:ViaAWSService": "false"}
}

Communication with the source Region is made by RDS on the requester's behalf. For a successful
request, don't deny calls made by AWS services.
• The requester's policy has a condition for aws:SourceVpc or aws:SourceVpce.

These requests might fail because when RDS makes the call to the remote Region, it isn't from the
specified VPC or VPC endpoint.

If you need to use one of the previous conditions that would cause a request to fail, you can include a
second statement with aws:CalledVia in your policy to make the request succeed. For example, you
can use aws:CalledVia with aws:SourceVpce as shown here:

...
"Effect": "Allow",
"Action": "rds:CreateDBInstanceReadReplica",
"Resource": "*",
"Condition": {
"Condition" : {
"ForAnyValue:StringEquals" : {
"aws:SourceVpce": "vpce-1a2b3c4d"
}
}
},
{
"Effect": "Allow",
"Action": [
"rds:CreateDBInstanceReadReplica"
],
"Resource": "*",
"Condition": {
"ForAnyValue:StringEquals": {
"aws:CalledVia": [
"rds.amazonaws.com"
]
}
}
}


For more information, see Policies and permissions in IAM in the IAM User Guide.

Authorizing the read replica


After a cross-Region DB read replica creation request returns success, RDS starts the replica creation in
the background. An authorization for RDS to access the source DB instance is created. This authorization
links the source DB instance to the read replica, and allows RDS to copy only to the specified read replica.

The authorization is verified by RDS using the rds:CrossRegionCommunication permission in the
service-linked IAM role. If the replica is authorized, RDS communicates with the source Region and
completes the replica creation.

RDS doesn't have access to DB instances that weren't authorized previously by a
CreateDBInstanceReadReplica request. The authorization is revoked when read replica creation
completes.

RDS uses the service-linked role to verify the authorization in the source Region. If you delete the
service-linked role during the replication creation process, the creation fails.

For more information, see Using service-linked roles in the IAM User Guide.

Using AWS Security Token Service credentials


Session tokens from the global AWS Security Token Service (AWS STS) endpoint are valid only in AWS
Regions that are enabled by default (commercial Regions). If you use credentials from the AWS STS
AssumeRole API operation, use the regional endpoint if the source Region is an opt-in Region. Otherwise,
the request fails. This happens because your credentials must be valid in both Regions, which is true for
opt-in Regions only when the regional AWS STS endpoint is used.
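For example, if the source Region is an opt-in Region such as Asia Pacific (Hong Kong), you might obtain temporary credentials from the regional AWS STS endpoint with a command similar to the following. The role ARN and session name are placeholders.

aws sts assume-role \
--role-arn arn:aws:iam::123456789012:role/CrossRegionReplicaRole \
--role-session-name create-replica-session \
--region ap-east-1 \
--endpoint-url https://sts.ap-east-1.amazonaws.com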

To use the global endpoint instead, make sure that its session tokens are valid in both Regions involved in
the operation. To do this, set the global endpoint to Valid in all AWS Regions in the AWS STS account settings.

The same rule applies to credentials in the presigned URL parameter.

For more information, see Managing AWS STS in an AWS Region in the IAM User Guide.

Cross-Region replication costs


The data transferred for cross-Region replication incurs Amazon RDS data transfer charges. These cross-
Region replication actions generate charges for the data transferred out of the source AWS Region:

• When you create a read replica, Amazon RDS takes a snapshot of the source instance and transfers the
snapshot to the read replica AWS Region.
• For each data modification made in the source databases, Amazon RDS transfers data from the source
AWS Region to the read replica AWS Region.

For more information about data transfer pricing, see Amazon RDS pricing.

For MySQL and MariaDB instances, you can reduce your data transfer costs by reducing the number of
cross-Region read replicas that you create. For example, suppose that you have a source DB instance in
one AWS Region and want to have three read replicas in another AWS Region. In this case, you create
only one of the read replicas from the source DB instance. You create the other two replicas from the
first read replica instead of the source DB instance.

For example, if you have source-instance-1 in one AWS Region, you can do the following:

• Create read-replica-1 in the new AWS Region, specifying source-instance-1 as the source.
• Create read-replica-2 from read-replica-1.


• Create read-replica-3 from read-replica-1.

In this example, you are only charged for the data transferred from source-instance-1 to read-
replica-1. You aren't charged for the data transferred from read-replica-1 to the other two
replicas because they are all in the same AWS Region. If you create all three replicas directly from
source-instance-1, you are charged for the data transfers to all three replicas.
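A minimal sketch of this approach, assuming that read-replica-1 already exists in the second AWS Region, might look like the following commands run in that Region. The identifiers are placeholders.

aws rds create-db-instance-read-replica \
--db-instance-identifier read-replica-2 \
--source-db-instance-identifier read-replica-1

aws rds create-db-instance-read-replica \
--db-instance-identifier read-replica-3 \
--source-db-instance-identifier read-replica-1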


Tagging Amazon RDS resources


You can use Amazon RDS tags to add metadata to your Amazon RDS resources. You can use the tags
to add your own notations about database instances, snapshots, Aurora clusters, and so on. Doing
so can help you to document your Amazon RDS resources. You can also use the tags with automated
maintenance procedures.

In particular, you can use these tags with IAM policies. You can use them to manage access to RDS
resources and to control what actions can be applied to the RDS resources. You can also use these tags to
track costs by grouping expenses for similarly tagged resources.

You can tag the following Amazon RDS resources:

• DB instances
• DB clusters
• Read replicas
• DB snapshots
• DB cluster snapshots
• Reserved DB instances
• Event subscriptions
• DB option groups
• DB parameter groups
• DB cluster parameter groups
• DB subnet groups
• RDS Proxies
• RDS Proxy endpoints
• Blue/green deployments
• Zero-ETL integrations (preview)

Note
Currently, you can't tag RDS Proxies and RDS Proxy endpoints by using the AWS Management
Console.

Topics
• Overview of Amazon RDS resource tags (p. 461)
• Using tags for access control with IAM (p. 462)
• Using tags to produce detailed billing reports (p. 462)
• Adding, listing, and removing tags (p. 463)
• Using the AWS Tag Editor (p. 465)
• Copying tags to DB instance snapshots (p. 465)
• Tutorial: Use tags to specify which DB instances to stop (p. 466)
• Using tags to enable backups in AWS Backup (p. 468)

Overview of Amazon RDS resource tags


An Amazon RDS tag is a name-value pair that you define and associate with an Amazon RDS resource.
The name is referred to as the key. Supplying a value for the key is optional. You can use tags to assign
arbitrary information to an Amazon RDS resource. You can use a tag key, for example, to define a
category, and the tag value might be an item in that category. For example, you might define a tag
key of "project" and a tag value of "Salix". In this case, these indicate that the Amazon RDS resource is
assigned to the Salix project. You can also use tags to designate Amazon RDS resources as being used
for test or production by using a key such as environment=test or environment=production. We
recommend that you use a consistent set of tag keys to make it easier to track metadata associated with
Amazon RDS resources.

In addition, you can use conditions in your IAM policies to control access to AWS resources based on
the tags on that resource. You can do this by using the global aws:ResourceTag/tag-key condition
key. For more information, see Controlling access to AWS resources in the AWS Identity and Access
Management User Guide.

Each Amazon RDS resource has a tag set, which contains all the tags that are assigned to that Amazon
RDS resource. A tag set can contain as many as 50 tags, or it can be empty. If you add a tag to an RDS
resource with the same key as an existing resource tag, the new value overwrites the old.

AWS doesn't apply any semantic meaning to your tags; tags are interpreted strictly as character strings.
RDS can set tags on a DB instance or other RDS resources. Tag setting depends on the options that
you use when you create the resource. For example, Amazon RDS might add a tag indicating that a DB
instance is for production or for testing.

• The tag key is the required name of the tag. The string value can be from 1 to 128 Unicode characters
in length and cannot be prefixed with aws: or rds:. The string can contain only the set of Unicode
letters, digits, white space, '_', '.', ':', '/', '=', '+', '-', '@' (Java regex: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-
@]*)$").

• The tag value is an optional string value of the tag. The string value can be from 1 to 256 Unicode
characters in length. The string can contain only the set of Unicode letters, digits, white space, '_', '.', ':',
'/', '=', '+', '-', '@' (Java regex: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-@]*)$").

Values do not have to be unique in a tag set and can be null. For example, you can have a key-value
pair in a tag set of project=Trinity and cost-center=Trinity.

You can use the AWS Management Console, the AWS CLI, or the Amazon RDS API to add, list, and delete
tags on Amazon RDS resources. When using the CLI or API, make sure to provide the Amazon Resource
Name (ARN) for the RDS resource to work with. For more information about constructing an ARN, see
Constructing an ARN for Amazon RDS (p. 471).

Tags are cached for authorization purposes. Because of this, additions and updates to tags on Amazon
RDS resources can take several minutes before they are available.

Using tags for access control with IAM


You can use tags with IAM policies to manage access to Amazon RDS resources. You can also use tags to
control what actions can be applied to the Amazon RDS resources.

For information on managing access to tagged resources with IAM policies, see Identity and access
management for Amazon RDS (p. 2606).

Using tags to produce detailed billing reports


You can also use tags to track costs by grouping expenses for similarly tagged resources.

Use tags to organize your AWS bill to reflect your own cost structure. To do this, sign up to get your AWS
account bill with tag key values included. Then, to see the cost of combined resources, organize your
billing information according to resources with the same tag key values. For example, you can tag several
resources with a specific application name, and then organize your billing information to see the total
cost of that application across several services. For more information, see Using Cost Allocation Tags in
the AWS Billing User Guide.
Note
You can add a tag to a snapshot; however, your bill doesn't reflect this grouping.

Adding, listing, and removing tags


The following procedures show how to perform typical tagging operations on resources related to DB
instances.

Console
The process to tag an Amazon RDS resource is similar for all resources. The following procedure shows
how to tag an Amazon RDS DB instance.

To add a tag to a DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
Note
To filter the list of DB instances in the Databases pane, enter a text string for Filter
databases. Only DB instances that contain the string appear.
3. Choose the name of the DB instance that you want to tag to show its details.
4. In the details section, scroll down to the Tags section.
5. Choose Add. The Add tags window appears.

6. Enter a value for Tag key and Value.


7. To add another tag, you can choose Add another Tag and enter a value for its Tag key and Value.

Repeat this step as many times as necessary.


8. Choose Add.


To delete a tag from a DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
Note
To filter the list of DB instances in the Databases pane, enter a text string in the Filter
databases box. Only DB instances that contain the string appear.
3. Choose the name of the DB instance to show its details.
4. In the details section, scroll down to the Tags section.
5. Choose the tag you want to delete.

6. Choose Delete, and then choose Delete in the Delete tags window.

AWS CLI
You can add, list, or remove tags for a DB instance using the AWS CLI.

• To add one or more tags to an Amazon RDS resource, use the AWS CLI command add-tags-to-
resource.
• To list the tags on an Amazon RDS resource, use the AWS CLI command list-tags-for-resource.
• To remove one or more tags from an Amazon RDS resource, use the AWS CLI command remove-
tags-from-resource.

To learn more about how to construct the required ARN, see Constructing an ARN for Amazon
RDS (p. 471).
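For example, the following commands add a tag to a DB instance, list its tags, and then remove the tag. The ARN and tag are placeholders.

aws rds add-tags-to-resource \
--resource-name arn:aws:rds:us-east-1:123456789012:db:mydbinstance \
--tags Key=environment,Value=test

aws rds list-tags-for-resource \
--resource-name arn:aws:rds:us-east-1:123456789012:db:mydbinstance

aws rds remove-tags-from-resource \
--resource-name arn:aws:rds:us-east-1:123456789012:db:mydbinstance \
--tag-keys environment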

RDS API
You can add, list, or remove tags for a DB instance using the Amazon RDS API.

• To add a tag to an Amazon RDS resource, use the AddTagsToResource operation.


• To list tags that are assigned to an Amazon RDS resource, use the ListTagsForResource.
• To remove tags from an Amazon RDS resource, use the RemoveTagsFromResource operation.

To learn more about how to construct the required ARN, see Constructing an ARN for Amazon
RDS (p. 471).

When working with XML using the Amazon RDS API, tags use the following schema:

<Tagging>
<TagSet>
<Tag>
<Key>Project</Key>
<Value>Trinity</Value>
</Tag>
<Tag>
<Key>User</Key>
<Value>Jones</Value>
</Tag>
</TagSet>
</Tagging>

The following table provides a list of the allowed XML tags and their characteristics. Values for Key and
Value are case-sensitive. For example, project=Trinity and PROJECT=Trinity are two distinct tags.

Tagging element   Description

TagSet            A tag set is a container for all tags assigned to an Amazon RDS resource. There can be
                  only one tag set per resource. You work with a TagSet only through the Amazon RDS API.

Tag               A tag is a user-defined key-value pair. There can be from 1 to 50 tags in a tag set.

Key               A key is the required name of the tag. The string value can be from 1 to 128 Unicode
                  characters in length and cannot be prefixed with aws: or rds:. The string can contain
                  only the set of Unicode letters, digits, white space, '_', '.', '/', '=', '+', '-'
                  (Java regex: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").

                  Keys must be unique within a tag set. For example, you cannot have two key-value pairs
                  in a tag set with the same key but different values, such as project/Trinity and
                  project/Xanadu.

Value             A value is the optional value of the tag. The string value can be from 1 to 256 Unicode
                  characters in length and cannot be prefixed with aws: or rds:. The string can contain
                  only the set of Unicode letters, digits, white space, '_', '.', '/', '=', '+', '-'
                  (Java regex: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").

                  Values do not have to be unique in a tag set and can be null. For example, you can have
                  a key-value pair in a tag set of project/Trinity and cost-center/Trinity.

Using the AWS Tag Editor


You can browse and edit the tags on your RDS resources in the AWS Management Console by using the
AWS Tag editor. For more information, see Tag Editor in the AWS Resource Groups User Guide.

Copying tags to DB instance snapshots


When you create or restore a DB instance, you can specify that the tags from the DB instance are copied
to snapshots of the DB instance. Copying tags ensures that the metadata for the DB snapshots matches
that of the source DB instance. It also ensures that any access policies for the DB snapshots also match
those of the source DB instance.

You can specify that tags are copied to DB snapshots for the following actions:

• Creating a DB instance.


• Restoring a DB instance.
• Creating a read replica.
• Copying a DB snapshot.

In most cases, tags aren't copied by default. However, when you restore a DB instance from a DB
snapshot, RDS checks whether you specify new tags. If yes, the new tags are added to the restored DB
instance. If there are no new tags, RDS adds the tags from the source DB instance at the time of snapshot
creation to the restored DB instance.

To prevent tags from source DB instances from being added to restored DB instances, we recommend
that you specify new tags when restoring a DB instance.
Note
In some cases, you might include a value for the --tag-key parameter of the create-db-
snapshot AWS CLI command. Or you might supply at least one tag to the CreateDBSnapshot
API operation. In these cases, RDS doesn't copy tags from the source DB instance to the new DB
snapshot. This functionality applies even if the source DB instance has the --copy-tags-to-
snapshot (CopyTagsToSnapshot) option turned on.
If you take this approach, you can create a copy of a DB instance from a DB snapshot. This
approach avoids adding tags that don't apply to the new DB instance. You create your DB
snapshot using the AWS CLI create-db-snapshot command (or the CreateDBSnapshot
RDS API operation). After you create your DB snapshot, you can add tags as described later in
this topic.
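For example, a command similar to the following creates a manual DB snapshot and supplies its own tag using the --tags shorthand syntax, so that tags aren't copied from the source DB instance. The identifiers and tag are placeholders.

aws rds create-db-snapshot \
--db-instance-identifier mydbinstance \
--db-snapshot-identifier mydbsnapshot \
--tags Key=purpose,Value=clone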

Tutorial: Use tags to specify which DB instances to stop

Suppose that you're creating a number of DB instances in a development or test environment. You
need to keep all of these DB instances for several days. Some of the DB instances run tests overnight.
Other DB instances can be stopped overnight and started again the next day. The following example
shows how to assign a tag to those DB instances that are suitable to stop overnight. Then the example
shows how a script can detect which DB instances have that tag and then stop those DB instances. In this
example, the value portion of the key-value pair doesn't matter. The presence of the stoppable tag
signifies that the DB instance has this user-defined property.

To specify which DB instances to stop

1. Determine the ARN of a DB instance that you want to designate as stoppable.

The commands and APIs for tagging work with ARNs. That way, they can work seamlessly across
AWS Regions, AWS accounts, and different types of resources that might have identical short
names. You can specify the ARN instead of the DB instance ID in CLI commands that operate on
DB instances. Substitute the name of your own DB instances for dev-test-db-instance. In
subsequent commands that use ARN parameters, substitute the ARN of your own DB instance. The
ARN includes your own AWS account ID and the name of the AWS Region where your DB instance is
located.

$ aws rds describe-db-instances --db-instance-identifier dev-test-db-instance \
--query "*[].{DBInstance:DBInstanceArn}" --output text
arn:aws:rds:us-east-1:123456789102:db:dev-test-db-instance

2. Add the tag stoppable to this DB instance.

You choose the name for this tag. This approach means that you can avoid devising a naming
convention that encodes all relevant information in names. In such a convention, you might encode
information in the DB instance name or names of other resources. Because this example treats
the tag as an attribute that is either present or absent, it omits the Value= part of the --tags
parameter.

$ aws rds add-tags-to-resource \
--resource-name arn:aws:rds:us-east-1:123456789102:db:dev-test-db-instance \
--tags Key=stoppable

3. Confirm that the tag is present in the DB instance.

These commands retrieve the tag information for the DB instance in JSON format and in plain tab-
separated text.

$ aws rds list-tags-for-resource \
--resource-name arn:aws:rds:us-east-1:123456789102:db:dev-test-db-instance
{
    "TagList": [
        {
            "Key": "stoppable",
            "Value": ""
        }
    ]
}

$ aws rds list-tags-for-resource \
--resource-name arn:aws:rds:us-east-1:123456789102:db:dev-test-db-instance --output text
TAGLIST stoppable

4. To stop all the DB instances that are designated as stoppable, prepare a list of all your DB
instances. Loop through the list and check if each DB instance is tagged with the relevant attribute.

This Linux example uses shell scripting. This scripting saves the list of DB instance ARNs to a
temporary file and then performs CLI commands for each DB instance.

$ aws rds describe-db-instances --query "*[].[DBInstanceArn]" --output text >/tmp/db_instance_arns.lst
$ for arn in $(cat /tmp/db_instance_arns.lst)
do
    match="$(aws rds list-tags-for-resource --resource-name $arn --output text | grep stoppable)"
    if [[ ! -z "$match" ]]
    then
        echo "DB instance $arn is tagged as stoppable. Stopping it now."
        # Note that you need to get the DB instance identifier from the ARN.
        dbid=$(echo $arn | sed -e 's/.*://')
        aws rds stop-db-instance --db-instance-identifier $dbid
    fi
done

DB instance arn:aws:rds:us-east-1:123456789102:db:dev-test-db-instance is tagged as stoppable. Stopping it now.
{
"DBInstance": {
"DBInstanceIdentifier": "dev-test-db-instance",
"DBInstanceClass": "db.t3.medium",
...

You can run a script like this at the end of each day to make sure that nonessential DB instances are
stopped. You might also schedule a job using a utility such as cron to perform such a check each night.
For example, you might do this in case some DB instances were left running by mistake. Here, you might
fine-tune the command that prepares the list of DB instances to check.


The following command produces a list of your DB instances, but only the ones in the available state. The
script can ignore DB instances that are already stopped, because they will have different status values
such as stopped or stopping.

$ aws rds describe-db-instances \
--query '*[].{DBInstanceArn:DBInstanceArn,DBInstanceStatus:DBInstanceStatus}|[?DBInstanceStatus == `available`]|[].{DBInstanceArn:DBInstanceArn}' \
--output text
arn:aws:rds:us-east-1:123456789102:db:db-instance-2447
arn:aws:rds:us-east-1:123456789102:db:db-instance-3395
arn:aws:rds:us-east-1:123456789102:db:dev-test-db-instance
arn:aws:rds:us-east-1:123456789102:db:pg2-db-instance

Tip
You can use assigning tags and finding DB instances with those tags to reduce costs in other
ways. For example, take this scenario with DB instances used for development and testing. In
this case, you might designate some DB instances to be deleted at the end of each day. Or you
might designate them to have their DB instances changed to small DB instance classes during
times of expected low usage.

Using tags to enable backups in AWS Backup


AWS Backup is a fully managed backup service that makes it easy to centralize and automate the backup
of data across AWS services in the cloud and on premises. You can manage backups of your Amazon RDS
DB instances in AWS Backup.

To enable backups in AWS Backup, you use resource tagging to associate your DB instance with a backup
plan.

This example assumes that you have already created a backup plan in AWS Backup. You use exactly the
same tag for your DB instance that is in your backup plan, as shown in the following figure.

For more information about AWS Backup, see the AWS Backup Developer Guide.

You can assign a tag to a DB instance using the AWS Management Console, the AWS CLI, or the RDS API.
The following examples are for the console and CLI.

Console

To assign a tag to a DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the link for the DB instance to which you want to assign a tag.
4. On the database details page, choose the Tags tab.
5. Under Tags, choose Add tags.
6. Under Add tags:


a. For Tag key, enter BackupPlan.


b. For Value, enter Test.
c. Choose Add.

The result is shown under Tags.

CLI
To assign a tag to a DB instance

• Use the following CLI command:

For Linux, macOS, or Unix:

aws rds add-tags-to-resource \
--resource-name arn:aws:rds:us-east-1:123456789012:db:new-orcl-db \
--tags Key=BackupPlan,Value=Test

For Windows:

aws rds add-tags-to-resource ^
--resource-name arn:aws:rds:us-east-1:123456789012:db:new-orcl-db ^
--tags Key=BackupPlan,Value=Test

The add-tags-to-resource CLI command returns no output.

To confirm that the DB instance is tagged

• Use the following CLI command:

For Linux, macOS, or Unix:

aws rds list-tags-for-resource \
--resource-name arn:aws:rds:us-east-1:123456789012:db:new-orcl-db

For Windows:


aws rds list-tags-for-resource ^
--resource-name arn:aws:rds:us-east-1:123456789012:db:new-orcl-db

The list-tags-for-resource CLI command returns the following output:

{
"TagList": [
{
"Key": "BackupPlan",
"Value": "Test"
}
]
}


Working with Amazon Resource Names (ARNs) in Amazon RDS

Resources created in Amazon Web Services are each uniquely identified with an Amazon Resource Name
(ARN). For certain Amazon RDS operations, you must uniquely identify an Amazon RDS resource by
specifying its ARN. For example, when you create an RDS DB instance read replica, you must supply the
ARN for the source DB instance.

Constructing an ARN for Amazon RDS


Resources created in Amazon Web Services are each uniquely identified with an Amazon Resource Name
(ARN). You can construct an ARN for an Amazon RDS resource using the following syntax.

arn:aws:rds:<region>:<account number>:<resourcetype>:<name>

Region                        Region name      Endpoint                               Protocol

US East (Ohio)                us-east-2        rds.us-east-2.amazonaws.com            HTTPS
                                               rds-fips.us-east-2.api.aws             HTTPS
                                               rds.us-east-2.api.aws                  HTTPS
                                               rds-fips.us-east-2.amazonaws.com       HTTPS

US East (N. Virginia)         us-east-1        rds.us-east-1.amazonaws.com            HTTPS
                                               rds-fips.us-east-1.api.aws             HTTPS
                                               rds-fips.us-east-1.amazonaws.com       HTTPS
                                               rds.us-east-1.api.aws                  HTTPS

US West (N. California)       us-west-1        rds.us-west-1.amazonaws.com            HTTPS
                                               rds.us-west-1.api.aws                  HTTPS
                                               rds-fips.us-west-1.amazonaws.com       HTTPS
                                               rds-fips.us-west-1.api.aws             HTTPS

US West (Oregon)              us-west-2        rds.us-west-2.amazonaws.com            HTTPS
                                               rds-fips.us-west-2.amazonaws.com       HTTPS
                                               rds.us-west-2.api.aws                  HTTPS
                                               rds-fips.us-west-2.api.aws             HTTPS

Africa (Cape Town)            af-south-1       rds.af-south-1.amazonaws.com           HTTPS
                                               rds.af-south-1.api.aws                 HTTPS

Asia Pacific (Hong Kong)      ap-east-1        rds.ap-east-1.amazonaws.com            HTTPS
                                               rds.ap-east-1.api.aws                  HTTPS

Asia Pacific (Hyderabad)      ap-south-2       rds.ap-south-2.amazonaws.com           HTTPS
                                               rds.ap-south-2.api.aws                 HTTPS

Asia Pacific (Jakarta)        ap-southeast-3   rds.ap-southeast-3.amazonaws.com       HTTPS
                                               rds.ap-southeast-3.api.aws             HTTPS

Asia Pacific (Melbourne)      ap-southeast-4   rds.ap-southeast-4.amazonaws.com       HTTPS
                                               rds.ap-southeast-4.api.aws             HTTPS

Asia Pacific (Mumbai)         ap-south-1       rds.ap-south-1.amazonaws.com           HTTPS
                                               rds.ap-south-1.api.aws                 HTTPS

Asia Pacific (Osaka)          ap-northeast-3   rds.ap-northeast-3.amazonaws.com       HTTPS
                                               rds.ap-northeast-3.api.aws             HTTPS

Asia Pacific (Seoul)          ap-northeast-2   rds.ap-northeast-2.amazonaws.com       HTTPS
                                               rds.ap-northeast-2.api.aws             HTTPS

Asia Pacific (Singapore)      ap-southeast-1   rds.ap-southeast-1.amazonaws.com       HTTPS
                                               rds.ap-southeast-1.api.aws             HTTPS

Asia Pacific (Sydney)         ap-southeast-2   rds.ap-southeast-2.amazonaws.com       HTTPS
                                               rds.ap-southeast-2.api.aws             HTTPS

Asia Pacific (Tokyo)          ap-northeast-1   rds.ap-northeast-1.amazonaws.com       HTTPS
                                               rds.ap-northeast-1.api.aws             HTTPS

Canada (Central)              ca-central-1     rds.ca-central-1.amazonaws.com         HTTPS
                                               rds.ca-central-1.api.aws               HTTPS
                                               rds-fips.ca-central-1.api.aws          HTTPS
                                               rds-fips.ca-central-1.amazonaws.com    HTTPS

Europe (Frankfurt)            eu-central-1     rds.eu-central-1.amazonaws.com         HTTPS
                                               rds.eu-central-1.api.aws               HTTPS

Europe (Ireland)              eu-west-1        rds.eu-west-1.amazonaws.com            HTTPS
                                               rds.eu-west-1.api.aws                  HTTPS

Europe (London)               eu-west-2        rds.eu-west-2.amazonaws.com            HTTPS
                                               rds.eu-west-2.api.aws                  HTTPS

Europe (Milan)                eu-south-1       rds.eu-south-1.amazonaws.com           HTTPS
                                               rds.eu-south-1.api.aws                 HTTPS

Europe (Paris)                eu-west-3        rds.eu-west-3.amazonaws.com            HTTPS
                                               rds.eu-west-3.api.aws                  HTTPS

Europe (Spain)                eu-south-2       rds.eu-south-2.amazonaws.com           HTTPS
                                               rds.eu-south-2.api.aws                 HTTPS

Europe (Stockholm)            eu-north-1       rds.eu-north-1.amazonaws.com           HTTPS
                                               rds.eu-north-1.api.aws                 HTTPS

Europe (Zurich)               eu-central-2     rds.eu-central-2.amazonaws.com         HTTPS
                                               rds.eu-central-2.api.aws               HTTPS

Israel (Tel Aviv)             il-central-1     rds.il-central-1.amazonaws.com         HTTPS
                                               rds.il-central-1.api.aws               HTTPS

Middle East (Bahrain)         me-south-1       rds.me-south-1.amazonaws.com           HTTPS
                                               rds.me-south-1.api.aws                 HTTPS

Middle East (UAE)             me-central-1     rds.me-central-1.amazonaws.com         HTTPS
                                               rds.me-central-1.api.aws               HTTPS

South America (São Paulo)     sa-east-1        rds.sa-east-1.amazonaws.com            HTTPS
                                               rds.sa-east-1.api.aws                  HTTPS

AWS GovCloud (US-East)        us-gov-east-1    rds.us-gov-east-1.amazonaws.com        HTTPS
                                               rds.us-gov-east-1.api.aws              HTTPS

AWS GovCloud (US-West)        us-gov-west-1    rds.us-gov-west-1.amazonaws.com        HTTPS
                                               rds.us-gov-west-1.api.aws              HTTPS

The following table shows the format that you should use when constructing an ARN for a particular
Amazon RDS resource type.

Resource type ARN format

DB instance arn:aws:rds:<region>:<account>:db:<name>

For example:

arn:aws:rds:us-east-2:123456789012:db:my-mysql-instance-1


DB cluster arn:aws:rds:<region>:<account>:cluster:<name>

For example:

arn:aws:rds:us-east-2:123456789012:cluster:my-aurora-cluster-1

Event subscription arn:aws:rds:<region>:<account>:es:<name>

For example:

arn:aws:rds:us-east-2:123456789012:es:my-subscription

DB option group arn:aws:rds:<region>:<account>:og:<name>

For example:

arn:aws:rds:us-east-2:123456789012:og:my-og

DB parameter group arn:aws:rds:<region>:<account>:pg:<name>

For example:

arn:aws:rds:us-east-2:123456789012:pg:my-param-enable-logs

DB cluster parameter group arn:aws:rds:<region>:<account>:cluster-pg:<name>

For example:

arn:aws:rds:us-east-2:123456789012:cluster-pg:my-cluster-param-timezone

Reserved DB instance arn:aws:rds:<region>:<account>:ri:<name>

For example:

arn:aws:rds:us-east-2:123456789012:ri:my-reserved-postgresql

DB security group arn:aws:rds:<region>:<account>:secgrp:<name>

For example:

arn:aws:rds:us-east-2:123456789012:secgrp:my-public

Automated DB snapshot arn:aws:rds:<region>:<account>:snapshot:rds:<name>

For example:

arn:aws:rds:us-east-2:123456789012:snapshot:rds:my-mysql-db-2019-07-22-07-23


Automated DB cluster snapshot arn:aws:rds:<region>:<account>:cluster-snapshot:rds:<name>

For example:

arn:aws:rds:us-east-2:123456789012:cluster-snapshot:rds:my-aurora-cluster-2019-07-22-16-16

Manual DB snapshot arn:aws:rds:<region>:<account>:snapshot:<name>

For example:

arn:aws:rds:us-east-2:123456789012:snapshot:my-mysql-db-snap

Manual DB cluster snapshot arn:aws:rds:<region>:<account>:cluster-snapshot:<name>

For example:

arn:aws:rds:us-east-2:123456789012:cluster-snapshot:my-aurora-cluster-snap

DB subnet group arn:aws:rds:<region>:<account>:subgrp:<name>

For example:

arn:aws:rds:us-east-2:123456789012:subgrp:my-subnet-10

Getting an existing ARN


You can get the ARN of an RDS resource by using the AWS Management Console, AWS Command Line
Interface (AWS CLI), or RDS API.

Console
To get an ARN from the AWS Management Console, navigate to the resource you want an ARN for, and
view the details for that resource.

For example, you can get the ARN for a DB instance from the Configuration tab of the DB instance
details.

AWS CLI
To get an ARN from the AWS CLI for a particular RDS resource, you use the describe command for
that resource. The following table shows each AWS CLI command, and the ARN property used with the
command to get an ARN.

AWS CLI command ARN property

describe-event-subscriptions EventSubscriptionArn

describe-certificates CertificateArn


describe-db-parameter-groups DBParameterGroupArn

describe-db-cluster-parameter-groups DBClusterParameterGroupArn

describe-db-instances DBInstanceArn

describe-db-security-groups DBSecurityGroupArn

describe-db-snapshots DBSnapshotArn

describe-events SourceArn

describe-reserved-db-instances ReservedDBInstanceArn

describe-db-subnet-groups DBSubnetGroupArn

describe-option-groups OptionGroupArn

describe-db-clusters DBClusterArn

describe-db-cluster-snapshots DBClusterSnapshotArn

For example, the following AWS CLI command gets the ARN for a DB instance.

Example

For Linux, macOS, or Unix:

aws rds describe-db-instances \
--db-instance-identifier DBInstanceIdentifier \
--region us-west-2 \
--query "*[].{DBInstanceIdentifier:DBInstanceIdentifier,DBInstanceArn:DBInstanceArn}"

For Windows:

aws rds describe-db-instances ^
--db-instance-identifier DBInstanceIdentifier ^
--region us-west-2 ^
--query "*[].{DBInstanceIdentifier:DBInstanceIdentifier,DBInstanceArn:DBInstanceArn}"

The output of that command is like the following:

[
{
"DBInstanceArn": "arn:aws:rds:us-west-2:account_id:db:instance_id",
"DBInstanceIdentifier": "instance_id"
}
]

RDS API
To get an ARN for a particular RDS resource, you can call the following RDS API operations and use the
ARN properties shown following.


RDS API operation ARN property

DescribeEventSubscriptions EventSubscriptionArn

DescribeCertificates CertificateArn

DescribeDBParameterGroups DBParameterGroupArn

DescribeDBClusterParameterGroups DBClusterParameterGroupArn

DescribeDBInstances DBInstanceArn

DescribeDBSecurityGroups DBSecurityGroupArn

DescribeDBSnapshots DBSnapshotArn

DescribeEvents SourceArn

DescribeReservedDBInstances ReservedDBInstanceArn

DescribeDBSubnetGroups DBSubnetGroupArn

DescribeOptionGroups OptionGroupArn

DescribeDBClusters DBClusterArn

DescribeDBClusterSnapshots DBClusterSnapshotArn


Working with storage for Amazon RDS DB instances

To specify how you want your data stored in Amazon RDS, choose a storage type and provide a storage
size when you create or modify a DB instance. Later, you can increase the amount or change the type of
storage by modifying the DB instance. For more information about which storage type to use for your
workload, see Amazon RDS storage types (p. 101).

Topics
• Increasing DB instance storage capacity (p. 478)
• Managing capacity automatically with Amazon RDS storage autoscaling (p. 480)
• Modifying settings for Provisioned IOPS SSD storage (p. 484)
• I/O-intensive storage modifications (p. 486)
• Modifying settings for General Purpose SSD (gp3) storage (p. 486)

Increasing DB instance storage capacity


If you need space for additional data, you can scale up the storage of an existing DB instance. To do so,
you can use the Amazon RDS Management Console, the Amazon RDS API, or the AWS Command Line
Interface (AWS CLI). For information about storage limits, see Amazon RDS DB instance storage (p. 101).
Note
Scaling storage for Amazon RDS for Microsoft SQL Server DB instances is supported only for
General Purpose SSD or Provisioned IOPS SSD storage types.

To monitor the amount of free storage for your DB instance so you can respond when necessary, we
recommend that you create an Amazon CloudWatch alarm. For more information on setting CloudWatch
alarms, see Using CloudWatch alarms.

Scaling storage usually doesn't cause any outage or performance degradation of the DB instance. After
you modify the storage size for a DB instance, the status of the DB instance is storage-optimization.
Note
Storage optimization can take several hours. You can't make further storage modifications for
either six (6) hours or until storage optimization has completed on the instance, whichever is
longer. You can view the storage optimization progress in the AWS Management Console or by
using the describe-db-instances AWS CLI command.

However, a special case is if you have a SQL Server DB instance and haven't modified the storage
configuration since November 2017. In this case, you might experience a short outage of a few minutes
when you modify your DB instance to increase the allocated storage. After the outage, the DB instance
is online but in the storage-optimization state. Performance might be degraded during storage
optimization.
Note
You can't reduce the amount of storage for a DB instance after storage has been allocated.
When you increase the allocated storage, it must be by at least 10 percent. If you try to increase
the value by less than 10 percent, you get an error.

Console
To increase storage for a DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.


2. In the navigation pane, choose Databases.


3. Choose the DB instance that you want to modify.
4. Choose Modify.
5. Enter a new value for Allocated storage. It must be greater than the current value.

6. Choose Continue to move to the next screen.


7. Choose Apply immediately in the Scheduling of modifications section to apply the storage
changes to the DB instance immediately.

Or choose Apply during the next scheduled maintenance window to apply the changes during the
next maintenance window.
8. When the settings are as you want them, choose Modify DB instance.

AWS CLI
To increase the storage for a DB instance, use the AWS CLI command modify-db-instance. Set the
following parameters:

• --allocated-storage – Amount of storage to be allocated for the DB instance, in gibibytes.


• --apply-immediately – Use --apply-immediately to apply the storage changes immediately.

Or use --no-apply-immediately (the default) to apply the changes during the next maintenance
window. An immediate outage occurs when the changes are applied.
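For example, a command similar to the following increases the allocated storage of a DB instance to 200 GiB and applies the change immediately. The identifier is a placeholder.

aws rds modify-db-instance \
--db-instance-identifier mydbinstance \
--allocated-storage 200 \
--apply-immediately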

For more information about storage, see Amazon RDS DB instance storage (p. 101).

RDS API
To increase storage for a DB instance, use the Amazon RDS API operation ModifyDBInstance. Set the
following parameters:

• AllocatedStorage – Amount of storage to be allocated for the DB instance, in gibibytes.


• ApplyImmediately – Set this option to True to apply the storage changes immediately. Set
this option to False (the default) to apply the changes during the next maintenance window. An
immediate outage occurs when the changes are applied.

For more information about storage, see Amazon RDS DB instance storage (p. 101).


Managing capacity automatically with Amazon RDS storage autoscaling

If your workload is unpredictable, you can enable storage autoscaling for an Amazon RDS DB instance. To
do so, you can use the Amazon RDS console, the Amazon RDS API, or the AWS CLI.

For example, you might use this feature for a new mobile gaming application that users are adopting
rapidly. In this case, a rapidly increasing workload might exceed the available database storage. To avoid
having to manually scale up database storage, you can use Amazon RDS storage autoscaling.

With storage autoscaling enabled, when Amazon RDS detects that you are running out of free database
space, it automatically scales up your storage. Amazon RDS starts a storage modification for an
autoscaling-enabled DB instance when these factors apply:

• Free available space is less than or equal to 10 percent of the allocated storage.
• The low-storage condition lasts at least five minutes.
• At least six hours have passed since the last storage modification, or storage optimization has
completed on the instance, whichever is longer.

The additional storage is in increments of whichever of the following is greater:

• 10 GiB
• 10 percent of currently allocated storage
• Predicted storage growth exceeding the current allocated storage size in the next 7 hours based on
the FreeStorageSpace metrics from the past hour. For more information on metrics, see Monitoring
with Amazon CloudWatch.

The maximum storage threshold is the limit that you set for autoscaling the DB instance. It has the
following constraints:

• You must set the maximum storage threshold to at least 10% more than the current allocated storage.
We recommend setting it to at least 26% more to avoid receiving an event notification (p. 886) that
the storage size is approaching the maximum storage threshold.

For example, if you have DB instance with 1000 GiB of allocated storage, then set the maximum
storage threshold to at least 1100 GiB. If you don't, you get an error such as Invalid max storage size
for engine_name. However, we recommend that you set the maximum storage threshold to at least
1260 GiB to avoid the event notification.
• For a DB instance that uses Provisioned IOPS storage, the ratio of IOPS to maximum storage threshold
(in GiB) must be from 1–50 on RDS for SQL Server, and 0.5–50 on other RDS DB engines.
• You can't set the maximum storage threshold for autoscaling-enabled instances to a value greater
than the maximum allocated storage for the database engine and DB instance class.

For example, SQL Server Standard Edition on db.m5.xlarge has a default allocated storage for the
instance of 20 GiB (the minimum) and a maximum allocated storage of 16,384 GiB. The default
maximum storage threshold for autoscaling is 1,000 GiB. If you use this default, the instance doesn't
autoscale above 1,000 GiB. This is true even though the maximum allocated storage for the instance is
16,384 GiB.

Note
We recommend that you carefully choose the maximum storage threshold based on usage
patterns and customer needs. If there are any aberrations in the usage patterns, the maximum
storage threshold can prevent scaling storage to an unexpectedly high value when autoscaling
predicts a very high threshold. After a DB instance has been autoscaled, its allocated storage
can't be reduced.

Topics
• Limitations (p. 481)
• Enabling storage autoscaling for a new DB instance (p. 481)
• Changing the storage autoscaling settings for a DB instance (p. 482)
• Turning off storage autoscaling for a DB instance (p. 483)

Limitations
The following limitations apply to storage autoscaling:

• Autoscaling doesn't occur if the maximum storage threshold would be equaled or exceeded by the
storage increment.
• When autoscaling, RDS predicts the storage size for subsequent autoscaling operations. If a
subsequent operation is predicted to exceed the maximum storage threshold, then RDS autoscales to
the maximum storage threshold.
• Autoscaling can't completely prevent storage-full situations for large data loads. This is because
further storage modifications can't be made for either six (6) hours or until storage optimization has
completed on the instance, whichever is longer.

If you perform a large data load, and autoscaling doesn't provide enough space, the database might
remain in the storage-full state for several hours. This can harm the database.
• If you start a storage scaling operation at the same time that Amazon RDS starts an autoscaling
operation, your storage modification takes precedence. The autoscaling operation is canceled.
• Autoscaling can't be used with magnetic storage.
• Autoscaling can't be used with the following previous-generation instance classes that have less than 6
TiB of orderable storage: db.m3.large, db.m3.xlarge, and db.m3.2xlarge.
• Autoscaling operations aren't logged by AWS CloudTrail. For more information on CloudTrail, see
Monitoring Amazon RDS API calls in AWS CloudTrail (p. 940).

Although automatic scaling helps you to increase storage on your Amazon RDS DB instance dynamically,
you should still configure the initial storage for your DB instance to an appropriate size for your typical
workload.

Enabling storage autoscaling for a new DB instance


When you create a new Amazon RDS DB instance, you can choose whether to enable storage autoscaling.
You can also set an upper limit on the storage that Amazon RDS can allocate for the DB instance.
Note
When you clone an Amazon RDS DB instance that has storage autoscaling enabled, that setting
isn't automatically inherited by the cloned instance. The new DB instance has the same amount
of allocated storage as the original instance. You can turn storage autoscaling on again for the
new instance if the cloned instance continues to increase its storage requirements.

Console

To enable storage autoscaling for a new DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.


2. In the upper-right corner of the Amazon RDS console, choose the AWS Region where you want to
create the DB instance.
3. In the navigation pane, choose Databases.
4. Choose Create database. On the Select engine page, choose your database engine and specify your
DB instance information as described in Getting started with Amazon RDS (p. 180).
5. In the Storage autoscaling section, set the Maximum storage threshold value for the DB instance.
6. Specify the rest of your DB instance information as described in Getting started with Amazon
RDS (p. 180).

AWS CLI
To enable storage autoscaling for a new DB instance, use the AWS CLI command create-db-instance.
Set the following parameter:

• --max-allocated-storage – Turns on storage autoscaling and sets the upper limit on storage size,
in gibibytes.

To verify that Amazon RDS storage autoscaling is available for your DB instance, use the AWS CLI
describe-valid-db-instance-modifications command. To check based on the instance class
before creating an instance, use the describe-orderable-db-instance-options command. Check
the following field in the return value:

• SupportsStorageAutoscaling – Indicates whether the DB instance or instance class supports
storage autoscaling.

For more information about storage, see Amazon RDS DB instance storage (p. 101).
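
For example, the following AWS CLI commands show one way to check availability and then create a DB
instance with storage autoscaling enabled. This is a minimal sketch (Linux, macOS, or Unix line
continuation shown): the engine, DB instance class, identifiers, credentials, and storage values are
placeholders, and other settings required for your environment (such as network options) are omitted.

# Check whether the assumed engine and instance class support storage autoscaling
aws rds describe-orderable-db-instance-options \
    --engine mysql \
    --db-instance-class db.m5.large \
    --query 'OrderableDBInstanceOptions[].SupportsStorageAutoscaling'

# Create the instance with 100 GiB allocated and a 1,000 GiB maximum storage threshold
aws rds create-db-instance \
    --db-instance-identifier mydbinstance \
    --engine mysql \
    --db-instance-class db.m5.large \
    --master-username admin \
    --master-user-password mypassword \
    --allocated-storage 100 \
    --max-allocated-storage 1000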

RDS API
To enable storage autoscaling for a new DB instance, use the Amazon RDS API operation
CreateDBInstance. Set the following parameter:

• MaxAllocatedStorage – Turns on Amazon RDS storage autoscaling and sets the upper limit on
storage size, in gibibytes.

To verify that Amazon RDS storage autoscaling is available for your DB instance, use the Amazon
RDS API DescribeValidDbInstanceModifications operation for an existing instance, or the
DescribeOrderableDBInstanceOptions operation before creating an instance. Check the following
field in the return value:

• SupportsStorageAutoscaling – Indicates whether the DB instance supports storage autoscaling.

For more information about storage, see Amazon RDS DB instance storage (p. 101).

Changing the storage autoscaling settings for a DB instance


You can turn storage autoscaling on for an existing Amazon RDS DB instance. You can also change the
upper limit on the storage that Amazon RDS can allocate for the DB instance.

Console

To change the storage autoscaling settings for a DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.


2. In the navigation pane, choose Databases.


3. Choose the DB instance that you want to modify, and choose Modify. The Modify DB instance page
appears.
4. Change the storage limit in the Autoscaling section. For more information, see Modifying an
Amazon RDS DB instance (p. 401).
5. When all the changes are as you want them, choose Continue and check your modifications.
6. On the confirmation page, review your changes. If they're correct, choose Modify DB Instance to
save your changes. If they aren't correct, choose Back to edit your changes or Cancel to cancel your
changes.

Changing the storage autoscaling limit occurs immediately. This setting ignores the Apply
immediately setting.

AWS CLI

To change the storage autoscaling settings for a DB instance, use the AWS CLI command modify-db-
instance. Set the following parameter:

• --max-allocated-storage – Sets the upper limit on storage size, in gibibytes. If the value is
greater than the --allocated-storage parameter, storage autoscaling is turned on. If the value is
the same as the --allocated-storage parameter, storage autoscaling is turned off.

To verify that Amazon RDS storage autoscaling is available for your DB instance, use the AWS CLI
describe-valid-db-instance-modifications command. To check based on the instance class
before creating an instance, use the describe-orderable-db-instance-options command. Check
the following field in the return value:

• SupportsStorageAutoscaling – Indicates whether the DB instance supports storage autoscaling.

For more information about storage, see Amazon RDS DB instance storage (p. 101).
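
For example, the following command raises the maximum storage threshold of an existing DB instance to
2,000 GiB. This is a sketch; the instance identifier and the new limit are placeholder values.

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --max-allocated-storage 2000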

RDS API

To change the storage autoscaling settings for a DB instance, use the Amazon RDS API operation
ModifyDBInstance. Set the following parameter:

• MaxAllocatedStorage – Sets the upper limit on storage size, in gibibytes.

To verify that Amazon RDS storage autoscaling is available for your DB instance, use the Amazon
RDS API DescribeValidDbInstanceModifications operation for an existing instance, or the
DescribeOrderableDBInstanceOptions operation before creating an instance. Check the following
field in the return value:

• SupportsStorageAutoscaling – Indicates whether the DB instance supports storage autoscaling.

For more information about storage, see Amazon RDS DB instance storage (p. 101).

Turning off storage autoscaling for a DB instance


If you no longer need Amazon RDS to automatically increase the storage for an Amazon RDS DB
instance, you can turn off storage autoscaling. After you do, you can still manually increase the amount
of storage for your DB instance.


Console

To turn off storage autoscaling for a DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to modify and choose Modify. The Modify DB instance page
appears.
4. Clear the Enable storage autoscaling check box in the Storage autoscaling section. For more
information, see Modifying an Amazon RDS DB instance (p. 401).
5. When all the changes are as you want them, choose Continue and check the modifications.
6. On the confirmation page, review your changes. If they're correct, choose Modify DB Instance to
save your changes. If they aren't correct, choose Back to edit your changes or Cancel to cancel your
changes.

Changing the storage autoscaling limit occurs immediately. This setting ignores the Apply immediately
setting.

AWS CLI

To turn off storage autoscaling for a DB instance, use the AWS CLI command modify-db-instance and
the following parameter:

• --max-allocated-storage – Specify a value equal to the --allocated-storage setting to
prevent further Amazon RDS storage autoscaling for the specified DB instance.

For more information about storage, see Amazon RDS DB instance storage (p. 101).
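
For example, the following commands sketch how you might turn off storage autoscaling: first look up
the current allocated storage, then set the maximum storage threshold to that same value. The instance
identifier and the 500 GiB value are placeholders; use the value returned by the first command.

# Look up the current allocated storage, in GiB
aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query 'DBInstances[0].AllocatedStorage'

# Set the maximum storage threshold equal to the allocated storage (500 GiB in this sketch)
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --max-allocated-storage 500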

RDS API

To turn off storage autoscaling for a DB instance, use the Amazon RDS API operation
ModifyDBInstance. Set the following parameter:

• MaxAllocatedStorage – Specify a value equal to the AllocatedStorage setting to prevent
further Amazon RDS storage autoscaling for the specified DB instance.

For more information about storage, see Amazon RDS DB instance storage (p. 101).

Modifying settings for Provisioned IOPS SSD storage


You can modify the settings for a DB instance that uses Provisioned IOPS SSD storage by using the
Amazon RDS console, AWS CLI, or Amazon RDS API. Specify the storage type, allocated storage, and the
amount of Provisioned IOPS that you require. The range depends on your database engine and instance
type.

Although you can reduce the amount of IOPS provisioned for your instance, you can't reduce the storage
size.

In most cases, scaling storage doesn't require any outage and doesn't degrade performance of the
server. After you modify the storage IOPS for a DB instance, the status of the DB instance is storage-
optimization.


Note
Storage optimization can take several hours. You can't make further storage modifications for
either six (6) hours or until storage optimization has completed on the instance, whichever is
longer.

For information on the ranges of allocated storage and Provisioned IOPS available for each database
engine, see Provisioned IOPS SSD storage (p. 104).

Console
To change the Provisioned IOPS settings for a DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.

To filter the list of DB instances, for Filter databases enter a text string for Amazon RDS to use to
filter the results. Only DB instances whose names contain the string appear.
3. Choose the DB instance with Provisioned IOPS that you want to modify.
4. Choose Modify.
5. On the Modify DB instance page, choose Provisioned IOPS SSD (io1) for Storage type.
6. For Provisioned IOPS, enter a value.

If the value that you specify for either Allocated storage or Provisioned IOPS is outside the limits
supported by the other parameter, a warning message is displayed. This message gives the range of
values required for the other parameter.
7. Choose Continue.
8. Choose Apply immediately in the Scheduling of modifications section to apply the changes to the
DB instance immediately. Or choose Apply during the next scheduled maintenance window to
apply the changes during the next maintenance window.
9. Review the parameters to be changed, and choose Modify DB instance to complete the
modification.

The new value for allocated storage or for Provisioned IOPS appears in the Status column.

AWS CLI
To change the Provisioned IOPS setting for a DB instance, use the AWS CLI command modify-db-
instance. Set the following parameters:

• --storage-type – Set to io1 for Provisioned IOPS.
• --allocated-storage – Amount of storage to be allocated for the DB instance, in gibibytes.
• --iops – The new amount of Provisioned IOPS for the DB instance, expressed in I/O operations per
second.
• --apply-immediately – Use --apply-immediately to apply changes immediately. Use
--no-apply-immediately (the default) to apply changes during the next maintenance window.
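
For example, the following command converts a DB instance to Provisioned IOPS storage with 400 GiB
allocated and 12,000 IOPS, applying the change immediately. This is a sketch; the identifier and values
are placeholders, and the values you choose must fall within the ranges supported by your DB engine and
instance class.

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --storage-type io1 \
    --allocated-storage 400 \
    --iops 12000 \
    --apply-immediately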

RDS API
To change the Provisioned IOPS settings for a DB instance, use the Amazon RDS API operation
ModifyDBInstance. Set the following parameters:

• StorageType – Set to io1 for Provisioned IOPS.
• AllocatedStorage – Amount of storage to be allocated for the DB instance, in gibibytes.
• Iops – The new IOPS rate for the DB instance, expressed in I/O operations per second.
• ApplyImmediately – Set this option to True to apply changes immediately. Set this option to False
(the default) to apply changes during the next maintenance window.

I/O-intensive storage modifications


Amazon RDS DB instances use Amazon Elastic Block Store (EBS) volumes for database and log storage.
Depending on the amount of storage requested, RDS (except for RDS for SQL Server) automatically
stripes across multiple Amazon EBS volumes to enhance performance. RDS DB instances with SSD
storage types are backed by either one or four striped Amazon EBS volumes in a RAID 0 configuration.
By design, storage modification operations for an RDS DB instance have minimal impact on ongoing
database operations.

In most cases, storage scaling modifications are completely offloaded to the Amazon EBS layer and
are transparent to the database. This process is typically completed within a few minutes. However,
some older RDS storage volumes require a different process for modifying the size, Provisioned IOPS, or
storage type. This involves making a full copy of the data using a potentially I/O-intensive operation.

Storage modification uses an I/O-intensive operation if any of the following factors apply:

• The source storage type is magnetic. Magnetic storage doesn't support elastic volume modification.
• The RDS DB instance isn't on a one- or four-volume Amazon EBS layout. You can view the number
of Amazon EBS volumes in use on your RDS DB instances by using Enhanced Monitoring metrics. For
more information, see Viewing OS metrics in the RDS console (p. 802).
• The target size of the modification request increases the allocated storage above 400 GiB for RDS
for MariaDB, MySQL, and PostgreSQL instances, and 200 GiB for RDS for Oracle. Storage autoscaling
operations have the same effect when they increase the allocated storage size of your DB instance
above these thresholds.

If your storage modification involves an I/O-intensive operation, it consumes I/O resources and increases
the load on your DB instance. Storage modifications with I/O-intensive operations involving General
Purpose SSD (gp2) storage can deplete your I/O credit balance, resulting in longer conversion times.

We recommend as a best practice to schedule these storage modification requests outside of peak hours
to help reduce the time required to complete the storage modification operation. Alternatively, you can
create a read replica of the DB instance and perform the storage modification on the read replica. Then
promote the read replica to be the primary DB instance. For more information, see Working with DB
instance read replicas (p. 438).
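
The following AWS CLI commands sketch that read replica approach under assumed names. The instance
identifiers, storage size, and backup retention period are placeholders, and you still need to redirect
your application to the promoted instance afterward.

# Create a read replica of the instance whose storage you want to modify
aws rds create-db-instance-read-replica \
    --db-instance-identifier mydbinstance-replica \
    --source-db-instance-identifier mydbinstance

# Perform the storage modification on the replica instead of the primary
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance-replica \
    --allocated-storage 1000 \
    --apply-immediately

# After the modification completes and replication has caught up, promote the replica
# and set a backup retention period as part of the promotion
aws rds promote-read-replica \
    --db-instance-identifier mydbinstance-replica \
    --backup-retention-period 7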

For more information, see Why is an Amazon RDS DB instance stuck in the modifying state when I try to
increase the allocated storage?

Modifying settings for General Purpose SSD (gp3) storage
You can modify the settings for a DB instance that uses General Purpose SSD (gp3) storage by using the
Amazon RDS console, AWS CLI, or Amazon RDS API. Specify the storage type, allocated storage, amount
of Provisioned IOPS, and storage throughput that you require. Although you can reduce the amount of
IOPS provisioned for your instance, you can't reduce the storage size.

In most cases, scaling storage doesn't require any outage. After you modify the storage IOPS for a DB
instance, the status of the DB instance is storage-optimization. You can expect elevated latencies,
but still within the single-digit millisecond range, during storage optimization. The DB instance is fully
operational after a storage modification.
Note
You can't make further storage modifications until six (6) hours after storage optimization has
completed on the instance.

For information on the ranges of allocated storage, Provisioned IOPS, and storage throughput available
for each database engine, see gp3 storage (p. 103).

Console
To change the storage performance settings for a DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.

To filter the list of DB instances, for Filter databases enter a text string for Amazon RDS to use to
filter the results. Only DB instances whose names contain the string appear.
3. Choose the DB instance with gp3 storage that you want to modify.
4. Choose Modify.
5. On the Modify DB Instance page, choose General Purpose SSD (gp3) for Storage type, then do the
following:

a. For Provisioned IOPS, choose a value.

If the value that you specify for either Allocated storage or Provisioned IOPS is outside the
limits supported by the other parameter, a warning message appears. This message gives the
range of values required for the other parameter.
b. For Storage throughput, choose a value.

If the value that you specify for either Provisioned IOPS or Storage throughput is outside the
limits supported by the other parameter, a warning message appears. This message gives the
range of values required for the other parameter.
6. Choose Continue.
7. Choose Apply immediately in the Scheduling of modifications section to apply the changes to the
DB instance immediately. Or choose Apply during the next scheduled maintenance window to
apply the changes during the next maintenance window.
8. Review the parameters to be changed, and choose Modify DB instance to complete the
modification.

The new value for Provisioned IOPS appears in the Status column.

AWS CLI
To change the storage performance settings for a DB instance, use the AWS CLI command modify-db-
instance. Set the following parameters:

• --storage-type – Set to gp3 for General Purpose SSD (gp3).
• --allocated-storage – Amount of storage to be allocated for the DB instance, in gibibytes.
• --iops – The new amount of Provisioned IOPS for the DB instance, expressed in I/O operations per
second.
• --storage-throughput – The new storage throughput for the DB instance, expressed in MiBps.
• --apply-immediately – Use --apply-immediately to apply changes immediately. Use
--no-apply-immediately (the default) to apply changes during the next maintenance window.
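
For example, the following command modifies a DB instance to gp3 storage with 800 GiB allocated,
15,000 IOPS, and 600 MiBps of storage throughput. This is a sketch with placeholder values; the IOPS
and throughput that you can provision depend on the DB engine and the allocated storage size.

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --storage-type gp3 \
    --allocated-storage 800 \
    --iops 15000 \
    --storage-throughput 600 \
    --apply-immediately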

RDS API
To change the storage performance settings for a DB instance, use the Amazon RDS API operation
ModifyDBInstance. Set the following parameters:

• StorageType – Set to gp3 for General Purpose SSD (gp3).
• AllocatedStorage – Amount of storage to be allocated for the DB instance, in gibibytes.
• Iops – The new IOPS rate for the DB instance, expressed in I/O operations per second.
• StorageThroughput – The new storage throughput for the DB instance, expressed in MiBps.
• ApplyImmediately – Set this option to True to apply changes immediately. Set this option to False
(the default) to apply changes during the next maintenance window.


Deleting a DB instance
You can delete a DB instance using the AWS Management Console, the AWS CLI, or the RDS API. If you
want to delete a DB instance in an Aurora DB cluster, see Deleting Aurora DB clusters and DB instances.

Topics
• Prerequisites for deleting a DB instance (p. 489)
• Considerations when deleting a DB instance (p. 489)
• Deleting a DB instance (p. 490)

Prerequisites for deleting a DB instance


Before you try to delete your DB instance, make sure that deletion protection is turned off. By default,
deletion protection is turned on for a DB instance that was created with the console.

If your DB instance has deletion protection turned on, you can turn it off by modifying your instance
settings. Choose Modify in the database details page or call the modify-db-instance command. This
operation doesn't cause an outage. For more information, see Settings for DB instances (p. 402).

Considerations when deleting a DB instance


Deleting a DB instance has an effect on instance recoverability, backup availability, and read replica
status. Consider the following issues:

• You can choose whether to create a final DB snapshot. You have the following options:
• If you take a final snapshot, you can use it to restore your deleted DB instance. RDS retains both
the final snapshot and any manual snapshots that you took previously. You can't create a final DB
snapshot of your DB instance if it isn't in the Available state. For more information, see Viewing
Amazon RDS DB instance status (p. 684).
• If you don't take a final snapshot, deletion is faster. However, you can't use a final snapshot to
restore your DB instance. If you later decide to restore your deleted DB instance, either retain
automated backups or use an earlier manual snapshot to restore your DB instance to the point in
time of the snapshot.
• You can choose whether to retain automated backups. You have the following options:
• If you retain automated backups, RDS keeps them for the retention period that is in effect for the DB
instance at the time when you delete it. You can use automated backups to restore your DB instance
to a time during but not after your retention period. The retention period is in effect regardless
of whether you create a final DB snapshot. To delete a retained automated backup, see Deleting
retained automated backups (p. 596).
• Retained automated backups and manual snapshots incur billing charges until they're deleted. For
more information, see Retention costs (p. 596).
• If you don't retain automated backups, RDS deletes the automated backups that reside in the
same AWS Region as your DB instance. You can't recover these backups. If your automated backups
have been replicated to another AWS Region, RDS keeps them even if you don't choose to retain
automated backups. For more information, see Replicating automated backups to another AWS
Region (p. 602).
Note
Typically, if you create a final DB snapshot, you don't need to retain automated backups.
• When you delete your DB instance, RDS doesn't delete manual DB snapshots. For more information,
see Creating a DB snapshot (p. 613).
• If you want to delete all RDS resources, note that the following resources incur billing charges:
• DB instances
• DB snapshots
• DB clusters

If you purchased reserved instances, then they are billed according to the contract that you agreed to
when you purchased the instance. For more information, see Reserved DB instances for Amazon
RDS (p. 165). You can get billing information for all your AWS resources by using the AWS Cost
Explorer. For more information, see Analyzing your costs with AWS Cost Explorer.
• If you delete a DB instance that has read replicas in the same AWS Region, each read replica is
automatically promoted to a standalone DB instance. For more information, see Promoting a read
replica to be a standalone DB instance (p. 447). If your DB instance has read replicas in different AWS
Regions, see Cross-Region replication considerations (p. 456) for information related to deleting the
source DB instance for a cross-Region read replica.
• When the status for a DB instance is deleting, its CA certificate value doesn't appear in the RDS
console or in output for AWS CLI commands or RDS API operations. For more information about CA
certificates, see Using SSL/TLS to encrypt a connection to a DB instance (p. 2591).
• The time required to delete a DB instance varies depending on the backup retention period (that is,
how many backups to delete), how much data is deleted, and whether a final snapshot is taken.

Deleting a DB instance
You can delete a DB instance using the AWS Management Console, the AWS CLI, or the RDS API. You
must do the following:

• Provide the name of the DB instance
• Enable or disable the option to take a final DB snapshot of the instance
• Enable or disable the option to retain automated backups

Note
You can't delete a DB instance when deletion protection is turned on. For more information, see
Prerequisites for deleting a DB instance (p. 489).

Console

To delete a DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance that you want to delete.
3. For Actions, choose Delete.
4. To create a final DB snapshot for the DB instance, choose Create final snapshot?.
5. If you chose to create a final snapshot, enter the Final snapshot name.
6. To retain automated backups, choose Retain automated backups.
7. Enter delete me in the box.
8. Choose Delete.

AWS CLI
To find the instance IDs of the DB instances in your account, call the describe-db-instances command:


aws rds describe-db-instances --query 'DBInstances[*].[DBInstanceIdentifier]' --output text

To delete a DB instance by using the AWS CLI, call the delete-db-instance command with the following
options:

• --db-instance-identifier
• --final-db-snapshot-identifier or --skip-final-snapshot

Example With a final snapshot and no retained automated backups

For Linux, macOS, or Unix:

aws rds delete-db-instance \
    --db-instance-identifier mydbinstance \
    --final-db-snapshot-identifier mydbinstancefinalsnapshot \
    --delete-automated-backups

For Windows:

aws rds delete-db-instance ^
    --db-instance-identifier mydbinstance ^
    --final-db-snapshot-identifier mydbinstancefinalsnapshot ^
    --delete-automated-backups

Example With retained automated backups and no final snapshot

For Linux, macOS, or Unix:

aws rds delete-db-instance \
    --db-instance-identifier mydbinstance \
    --skip-final-snapshot \
    --no-delete-automated-backups

For Windows:

aws rds delete-db-instance ^
    --db-instance-identifier mydbinstance ^
    --skip-final-snapshot ^
    --no-delete-automated-backups

RDS API
To delete a DB instance by using the Amazon RDS API, call the DeleteDBInstance operation with the
following parameters:

• DBInstanceIdentifier
• FinalDBSnapshotIdentifier or SkipFinalSnapshot


Configuring and managing a Multi-AZ deployment
Multi-AZ deployments can have one standby or two standby DB instances. When the deployment
has one standby DB instance, it's called a Multi-AZ DB instance deployment. A Multi-AZ DB instance
deployment has one standby DB instance that provides failover support, but doesn't serve read traffic.
When the deployment has two standby DB instances, it's called a Multi-AZ DB cluster deployment. A
Multi-AZ DB cluster deployment has standby DB instances that provide failover support and can also
serve read traffic.

You can use the AWS Management Console to determine whether a Multi-AZ deployment is a Multi-AZ
DB instance deployment or a Multi-AZ DB cluster deployment. In the navigation pane, choose Databases,
and then choose a DB identifier.

• A Multi-AZ DB instance deployment has the following characteristics:
  • There is only one row for the DB instance.
  • The value of Role is Instance or Primary.
  • The value of Multi-AZ is Yes.
• A Multi-AZ DB cluster deployment has the following characteristics:
  • There is a cluster-level row with three DB instance rows under it.
  • For the cluster-level row, the value of Role is Multi-AZ DB cluster.
  • For each instance-level row, the value of Role is Writer instance or Reader instance.
  • For each instance-level row, the value of Multi-AZ is 3 Zones.

Topics
• Multi-AZ DB instance deployments (p. 493)
• Multi-AZ DB cluster deployments (p. 499)

In addition, the following topics apply to both DB instances and Multi-AZ DB clusters:

• the section called “Tagging RDS resources” (p. 461)
• the section called “Working with ARNs” (p. 471)
• the section called “Working with storage” (p. 478)
• the section called “Maintaining a DB instance” (p. 418)
• the section called “Upgrading the engine version” (p. 429)


Multi-AZ DB instance deployments


Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments
with a single standby DB instance. This type of deployment is called a Multi-AZ DB instance deployment.
Amazon RDS uses several different technologies to provide this failover support. Multi-AZ deployments
for MariaDB, MySQL, Oracle, PostgreSQL, and RDS Custom for SQL Server DB instances use the Amazon
failover technology. Microsoft SQL Server DB instances use SQL Server Database Mirroring (DBM) or
Always On Availability Groups (AGs). For information on SQL Server version support for Multi-AZ, see
Multi-AZ deployments for Amazon RDS for Microsoft SQL Server (p. 1450). For information on working
with RDS Custom for SQL Server for Multi-AZ, see Managing a Multi-AZ deployment for RDS Custom for
SQL Server (p. 1147).

In a Multi-AZ DB instance deployment, Amazon RDS automatically provisions and maintains a
synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously
replicated across Availability Zones to a standby replica to provide data redundancy and minimize
latency spikes during system backups. Running a DB instance with high availability can enhance
availability during planned system maintenance. It can also help protect your databases against DB
instance failure and Availability Zone disruption. For more information on Availability Zones, see Regions,
Availability Zones, and Local Zones (p. 110).
Note
The high availability option isn't a scaling solution for read-only scenarios. You can't use a
standby replica to serve read traffic. To serve read-only traffic, use a Multi-AZ DB cluster or a
read replica instead. For more information about Multi-AZ DB clusters, see Multi-AZ DB cluster
deployments (p. 499). For more information about read replicas, see Working with DB instance
read replicas (p. 438).

Using the RDS console, you can create a Multi-AZ DB instance deployment by simply specifying Multi-
AZ when creating a DB instance. You can use the console to convert existing DB instances to Multi-AZ
DB instance deployments by modifying the DB instance and specifying the Multi-AZ option. You can
also specify a Multi-AZ DB instance deployment with the AWS CLI or Amazon RDS API. Use the create-
db-instance or modify-db-instance CLI command, or the CreateDBInstance or ModifyDBInstance API
operation.

The RDS console shows the Availability Zone of the standby replica (called the secondary AZ). You can
also use the describe-db-instances CLI command or the DescribeDBInstances API operation to find the
secondary AZ.
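
For example, the following command is one way to check whether a DB instance is a Multi-AZ DB instance
deployment and which Availability Zone hosts its standby replica. The instance identifier is a
placeholder.

aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query 'DBInstances[0].[MultiAZ,SecondaryAvailabilityZone]'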

DB instances using Multi-AZ DB instance deployments can have increased write and commit latency
compared to a Single-AZ deployment. This can happen because of the synchronous data replication that
occurs. You might have a change in latency if your deployment fails over to the standby replica, although
AWS is engineered with low-latency network connectivity between Availability Zones. For production
workloads, we recommend that you use Provisioned IOPS (input/output operations per second) for fast,
consistent performance. For more information about DB instance classes, see DB instance classes (p. 11).

Modifying a DB instance to be a Multi-AZ DB instance deployment
If you have a DB instance in a Single-AZ deployment and modify it to a Multi-AZ DB instance deployment
(for engines other than Amazon Aurora), Amazon RDS performs several actions:

1. Takes a snapshot of the primary DB instance's Amazon Elastic Block Store (EBS) volumes.
2. Creates new volumes for the standby replica from the snapshot. These volumes initialize in the
background, and maximum volume performance is achieved after the data is fully initialized.
3. Turns on synchronous block-level replication between the volumes of the primary and standby
replicas.

Important
Using a snapshot to create the standby instance avoids downtime when you convert from
Single-AZ to Multi-AZ, but you can experience a performance impact during and after
converting to Multi-AZ. This impact can be significant for workloads that are sensitive to write
latency.
While this capability lets large volumes be restored from snapshots quickly, it can cause a
significant increase in the latency of I/O operations because of the synchronous replication. This
latency can impact your database performance. We highly recommend as a best practice not to
perform Multi-AZ conversion on a production DB instance.
To avoid the performance impact on the DB instance currently serving the sensitive workload,
create a read replica and enable backups on the read replica. Convert the read replica to Multi-
AZ, and run queries that load the data into the read replica's volumes (on both AZs). Then
promote the read replica to be the primary DB instance. For more information, see Working with
DB instance read replicas (p. 438).

There are two ways to modify a DB instance to be a Multi-AZ DB instance deployment:

Topics
• Convert to a Multi-AZ DB instance deployment with the RDS console (p. 494)
• Modifying a DB instance to be a Multi-AZ DB instance deployment (p. 495)

Convert to a Multi-AZ DB instance deployment with the RDS console
You can use the RDS console to convert a DB instance to a Multi-AZ DB instance deployment.


You can only use the console to complete the conversion. To use the AWS CLI or RDS API, follow the
instructions in Modifying a DB instance to be a Multi-AZ DB instance deployment (p. 495).

To convert to a Multi-AZ DB instance deployment with the RDS console

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance that you want to
modify.
3. From Actions, choose Convert to Multi-AZ deployment.
4. On the confirmation page, choose Apply immediately to apply the changes immediately. Choosing
this option doesn't cause downtime, but there is a possible performance impact. Alternatively, you
can choose to apply the update during the next maintenance window. For more information, see
Using the Apply Immediately setting (p. 402).
5. Choose Convert to Multi-AZ.

Modifying a DB instance to be a Multi-AZ DB instance deployment
You can modify a DB instance to be a Multi-AZ DB instance deployment in the following ways:

• Using the RDS console, modify the DB instance, and set Multi-AZ deployment to Yes.
• Using the AWS CLI, call the modify-db-instance command, and set the --multi-az option.
• Using the RDS API, call the ModifyDBInstance operation, and set the MultiAZ parameter to true.
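
A minimal AWS CLI sketch of the second option follows. The instance identifier is a placeholder, and
you can omit --apply-immediately to apply the change during the next maintenance window instead.

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --multi-az \
    --apply-immediately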

For information about modifying a DB instance, see Modifying an Amazon RDS DB instance (p. 401).
After the modification is complete, Amazon RDS triggers an event (RDS-EVENT-0025) that indicates
the process is complete. You can monitor Amazon RDS events. For more information about events, see
Working with Amazon RDS event notification (p. 855).

Failover process for Amazon RDS


If a planned or unplanned outage of your DB instance results from an infrastructure defect, Amazon RDS
automatically switches to a standby replica in another Availability Zone if you have turned on Multi-AZ.
The time that it takes for the failover to complete depends on the database activity and other conditions
at the time the primary DB instance became unavailable. Failover times are typically 60–120 seconds.
However, large transactions or a lengthy recovery process can increase failover time. When the failover is
complete, it can take additional time for the RDS console to reflect the new Availability Zone.
Note
You can force a failover manually when you reboot a DB instance. For more information, see
Rebooting a DB instance (p. 436).

Amazon RDS handles failovers automatically so you can resume database operations as quickly as
possible without administrative intervention. The primary DB instance switches over automatically to
the standby replica if any of the conditions described in the following list occurs. You can view these
failover reasons in the event log.

• Failover reason: The operating system underlying the RDS database instance is being patched in an
  offline operation.
  Description: A failover was triggered during the maintenance window for an OS patch or a security
  update. For more information, see Maintaining a DB instance (p. 418).
• Failover reason: The primary host of the RDS Multi-AZ instance is unhealthy.
  Description: The Multi-AZ DB instance deployment detected an impaired primary DB instance and
  failed over.
• Failover reason: The primary host of the RDS Multi-AZ instance is unreachable due to loss of network
  connectivity.
  Description: RDS monitoring detected a network reachability failure to the primary DB instance and
  triggered a failover.
• Failover reason: The RDS instance was modified by customer.
  Description: An RDS DB instance modification triggered a failover. For more information, see
  Modifying an Amazon RDS DB instance (p. 401).
• Failover reason: The RDS Multi-AZ primary instance is busy and unresponsive.
  Description: The primary DB instance is unresponsive. We recommend that you do the following:
  • Examine the event and CloudWatch logs for excessive CPU, memory, or swap space usage. For more
    information, see Working with Amazon RDS event notification (p. 855) and Creating a rule that
    triggers on an Amazon RDS event (p. 870).
  • Evaluate your workload to determine whether you're using the appropriate DB instance class. For
    more information, see DB instance classes (p. 11).
  • Use Enhanced Monitoring for real-time operating system metrics. For more information, see
    Monitoring OS metrics with Enhanced Monitoring (p. 797).
  • Use Performance Insights to help analyze any issues that affect your DB instance's performance.
    For more information, see Monitoring DB load with Performance Insights on Amazon RDS (p. 720).
  For more information on these recommendations, see Overview of monitoring metrics in Amazon
  RDS (p. 679) and Best practices for Amazon RDS (p. 286).
• Failover reason: The storage volume underlying the primary host of the RDS Multi-AZ instance
  experienced a failure.
  Description: The Multi-AZ DB instance deployment detected a storage issue on the primary DB
  instance and failed over.
• Failover reason: The user requested a failover of the DB instance.
  Description: You rebooted the DB instance and chose Reboot with failover. For more information, see
  Rebooting a DB instance (p. 436).

To determine if your Multi-AZ DB instance has failed over, you can do the following:


• Set up DB event subscriptions to notify you by email or SMS that a failover has been initiated. For
more information about events, see Working with Amazon RDS event notification (p. 855).
• View your DB events by using the RDS console or API operations.
• View the current state of your Multi-AZ DB instance deployment by using the RDS console or API
operations.
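
For example, the following AWS CLI command is one way to view recent events for a DB instance and look
for failover-related messages. The instance identifier is a placeholder, and the duration is in minutes
(1440 covers the last 24 hours).

aws rds describe-events \
    --source-identifier mydbinstance \
    --source-type db-instance \
    --duration 1440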

For information on how you can respond to failovers, reduce recovery time, and other best practices for
Amazon RDS, see Best practices for Amazon RDS (p. 286).

Setting the JVM TTL for DNS name lookups


The failover mechanism automatically changes the Domain Name System (DNS) record of the DB
instance to point to the standby DB instance. As a result, you need to re-establish any existing
connections to your DB instance. In a Java virtual machine (JVM) environment, due to how the Java DNS
caching mechanism works, you might need to reconfigure JVM settings.

The JVM caches DNS name lookups. When the JVM resolves a host name to an IP address, it caches the IP
address for a specified period of time, known as the time-to-live (TTL).

Because AWS resources use DNS name entries that occasionally change, we recommend that you
configure your JVM with a TTL value of no more than 60 seconds. Doing this makes sure that when a
resource's IP address changes, your application can receive and use the resource's new IP address by
requerying the DNS.

On some Java configurations, the JVM default TTL is set so that it never refreshes DNS entries until
the JVM is restarted. Thus, if the IP address for an AWS resource changes while your application is still
running, it can't use that resource until you manually restart the JVM and the cached IP information
is refreshed. In this case, it's crucial to set the JVM's TTL so that it periodically refreshes its cached IP
information.

You can get the JVM default TTL by retrieving the networkaddress.cache.ttl property value:

String ttl = java.security.Security.getProperty("networkaddress.cache.ttl");

Note
The default TTL can vary according to the version of your JVM and whether a security manager
is installed. Many JVMs provide a default TTL less than 60 seconds. If you're using such a JVM
and not using a security manager, you can ignore the rest of this topic. For more information on
security managers in Oracle, see The security manager in the Oracle documentation.

To modify the JVM's TTL, set the networkaddress.cache.ttl property value. Use one of the
following methods, depending on your needs:

• To set the property value globally for all applications that use the JVM, set
networkaddress.cache.ttl in the $JAVA_HOME/jre/lib/security/java.security file.

networkaddress.cache.ttl=60

• To set the property locally for your application only, set networkaddress.cache.ttl in your
application's initialization code before any network connections are established.

java.security.Security.setProperty("networkaddress.cache.ttl" , "60");


Multi-AZ DB cluster deployments


A Multi-AZ DB cluster deployment is a semisynchronous, high availability deployment mode of Amazon
RDS with two readable standby DB instances. A Multi-AZ DB cluster has a writer DB instance and two
reader DB instances in three separate Availability Zones in the same AWS Region. Multi-AZ DB clusters
provide high availability, increased capacity for read workloads, and lower write latency when compared
to Multi-AZ DB instance deployments.

You can import data from an on-premises database to a Multi-AZ DB cluster by following the instructions
in Importing data to an Amazon RDS MariaDB or MySQL database with reduced downtime (p. 1690).

You can purchase reserved DB instances for a Multi-AZ DB cluster. For more information, see Reserved
DB instances for a Multi-AZ DB cluster (p. 168).

Topics
• Region and version availability (p. 499)
• Instance class availability (p. 499)
• Overview of Multi-AZ DB clusters (p. 500)
• Limitations for Multi-AZ DB clusters (p. 501)
• Managing a Multi-AZ DB cluster with the AWS Management Console (p. 502)
• Working with parameter groups for Multi-AZ DB clusters (p. 503)
• Upgrading the engine version of a Multi-AZ DB cluster (p. 503)
• Replica lag and Multi-AZ DB clusters (p. 504)
• Failover process for Multi-AZ DB clusters (p. 505)
• Creating a Multi-AZ DB cluster (p. 508)
• Connecting to a Multi-AZ DB cluster (p. 522)
• Automatically connecting an AWS compute resource and a Multi-AZ DB cluster (p. 525)
• Modifying a Multi-AZ DB cluster (p. 539)
• Renaming a Multi-AZ DB cluster (p. 550)
• Rebooting a Multi-AZ DB cluster and reader DB instances (p. 552)
• Working with Multi-AZ DB cluster read replicas (p. 554)
• Using PostgreSQL logical replication with Multi-AZ DB clusters (p. 561)
• Deleting a Multi-AZ DB cluster (p. 563)

Important
Multi-AZ DB clusters aren't the same as Aurora DB clusters. For information about Aurora DB
clusters, see the Amazon Aurora User Guide.

Region and version availability


Feature availability and support varies across specific versions of each database engine, and across AWS
Regions. For more information on version and Region availability of Amazon RDS with Multi-AZ DB
clusters, see Multi-AZ DB clusters (p. 147).

Instance class availability


Multi-AZ DB cluster deployments are supported for a subset of DB instance classes. For a list
of supported instance classes, see the DB instance class row in the section called “Available
settings” (p. 514).


For more information about DB instance classes, see the section called “DB instance classes” (p. 11).

Overview of Multi-AZ DB clusters


With a Multi-AZ DB cluster, Amazon RDS replicates data from the writer DB instance to both of the
reader DB instances using the DB engine's native replication capabilities. When a change is made on the
writer DB instance, it's sent to each reader DB instance.

Multi-AZ DB cluster deployments use semisynchronous replication, which requires acknowledgment
from at least one reader DB instance in order for a change to be committed. It doesn't require
acknowledgment that events have been fully executed and committed on all replicas.

Reader DB instances act as automatic failover targets and also serve read traffic to increase application
read throughput. If an outage occurs on your writer DB instance, RDS manages failover to one of the
reader DB instances. RDS does this based on which reader DB instance has the most recent change
record.

The following diagram shows a Multi-AZ DB cluster.

Multi-AZ DB clusters typically have lower write latency when compared to Multi-AZ DB instance
deployments. They also allow read-only workloads to run on reader DB instances. The RDS console
shows the Availability Zone of the writer DB instance and the Availability Zones of the reader DB
instances. You can also use the describe-db-clusters CLI command or the DescribeDBClusters API
operation to find this information.
Important
To prevent replication errors in RDS for MySQL Multi-AZ DB clusters, we strongly recommend
that all tables have a primary key.

Limitations for Multi-AZ DB clusters


The following limitations apply to Multi-AZ DB clusters:

• Multi-AZ DB clusters only support Provisioned IOPS storage.
• You can't change a Single-AZ DB instance deployment or Multi-AZ DB instance deployment into
a Multi-AZ DB cluster. As an alternative, you can restore a snapshot of a Single-AZ DB instance
deployment or Multi-AZ DB instance deployment to a Multi-AZ DB cluster.
• You can't change a Multi-AZ DB cluster deployment into a Single-AZ DB instance or Multi-AZ DB
instance. As an alternative, you can restore a snapshot of a Multi-AZ DB cluster deployment to a
Single-AZ DB instance deployment or Multi-AZ DB instance deployment.
• Multi-AZ DB clusters don't support modifications at the DB instance level because all modifications are
done at the DB cluster level.
• RDS for MySQL Multi-AZ DB clusters support replicating from an external MySQL
source only if the source has GTID enabled. For more information, see the section called
“mysql.rds_set_external_master_with_auto_position” (p. 1772). Position-based binlog replication is
not supported.
• Multi-AZ DB clusters don't support the following features:
• Amazon RDS Proxy
• Support for IPv6 connections (dual-stack mode)
• Cross-Region automated backups
• Exporting Multi-AZ DB cluster snapshot data to an Amazon S3 bucket
• IAM DB authentication
• Kerberos authentication
• Modifying the port

As an alternative, you can restore a Multi-AZ DB cluster to a point in time and specify a different
port.
• Option groups
• Point-in-time-recovery (PITR) for deleted clusters
• Restoring a Multi-AZ DB cluster snapshot from an Amazon S3 bucket
• Storage autoscaling by setting the maximum allocated storage

As an alternative, you can scale storage manually.
• Stopping and starting the DB cluster
• Copying a snapshot of a Multi-AZ DB cluster
• Encrypting an unencrypted Multi-AZ DB cluster
• RDS for MySQL Multi-AZ DB clusters don't support replication to an external target database.
• RDS for MySQL Multi-AZ DB clusters support only the following system stored procedures:
• mysql.rds_rotate_general_log
• mysql.rds_rotate_slow_log
• mysql.rds_show_configuration
• mysql.rds_set_external_master_with_auto_position
RDS for MySQL Multi-AZ DB clusters don't support other system stored procedures. For information
about these procedures, see RDS for MySQL stored procedure reference (p. 1757).
• RDS for PostgreSQL Multi-AZ DB clusters don't support the following PostgreSQL extensions: aws_s3
and pg_transport.
• RDS for PostgreSQL Multi-AZ DB clusters don't support using a custom DNS server for outbound
network access.

Managing a Multi-AZ DB cluster with the AWS Management Console
You can manage a Multi-AZ DB cluster with the console.

To manage a Multi-AZ DB cluster with the console

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the Multi-AZ DB cluster that you want to
manage.

The following image shows a Multi-AZ DB cluster in the console.

The available actions in the Actions menu depend on whether the Multi-AZ DB cluster is selected or a DB
instance in the cluster is selected.

Choose the Multi-AZ DB cluster to view the cluster details and perform actions at the cluster level.


Choose a DB instance in a Multi-AZ DB cluster to view the DB instance details and perform actions at the
DB instance level.

Working with parameter groups for Multi-AZ DB clusters
In a Multi-AZ DB cluster, a DB cluster parameter group acts as a container for engine configuration values
that are applied to every DB instance in the Multi-AZ DB cluster.

In a Multi-AZ DB cluster, a DB parameter group is set to the default DB parameter group for the DB
engine and DB engine version. The settings in the DB cluster parameter group are used for all of the DB
instances in the cluster.

For information about parameter groups, see Working with parameter groups (p. 347).

Upgrading the engine version of a Multi-AZ DB cluster
Amazon RDS provides newer versions of each supported database engine so you can keep your Multi-
AZ DB cluster up-to-date. When Amazon RDS supports a new version of a database engine, you can
choose how and when to upgrade your Multi-AZ DB cluster. When you initiate an upgrade, the writer DB
instance is upgraded first, then the reader DB instances are upgraded simultaneously.

There are two kinds of upgrades: major version upgrades and minor version upgrades. In general, a
major engine version upgrade can introduce changes that aren't compatible with existing applications.
In contrast, a minor version upgrade includes only changes that are backward-compatible with existing
applications.
Note
Currently, major version upgrades are only supported for RDS for PostgreSQL Multi-AZ DB
clusters. Minor version upgrades are supported for all DB engines that support Multi-AZ DB
clusters.

Amazon RDS doesn't automatically upgrade Multi-AZ DB cluster read replicas. For minor version
upgrades, you must first manually upgrade all read replicas and then upgrade the cluster, otherwise the
upgrade is blocked. When you perform a major version upgrade of a cluster, the replication state of all
read replicas changes to terminated. You must delete and recreate the read replicas after the upgrade
completes. For more information, see the section called “Monitoring read replication” (p. 449).

The process for upgrading the engine version of a Multi-AZ DB cluster is the same as the process for
upgrading a DB instance engine version. For instructions, see the section called “Upgrading the engine
version” (p. 429). The only difference is that when using the AWS CLI, you use the modify-db-cluster
command and specify the --db-cluster-identifier parameter (as well as the --allow-major-
version-upgrade parameter).
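
For example, the following command sketches a major version upgrade of a Multi-AZ DB cluster. The
cluster identifier and target engine version are placeholders; use a target version that is valid for
your DB engine.

aws rds modify-db-cluster \
    --db-cluster-identifier mymultiazcluster \
    --engine-version 15.4 \
    --allow-major-version-upgrade \
    --apply-immediately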

For more information about major and minor version upgrades, see the following documentation for
your DB engine:

• the section called “Upgrading the PostgreSQL DB engine” (p. 2197)
• the section called “Upgrading the MySQL DB engine” (p. 1664)

Replica lag and Multi-AZ DB clusters


Replica lag is the difference in time between the latest transaction on the writer DB instance and the
latest applied transaction on a reader DB instance. The Amazon CloudWatch metric ReplicaLag
represents this time difference. For more information about CloudWatch metrics, see Monitoring Amazon
RDS metrics with Amazon CloudWatch (p. 706).

Although Multi-AZ DB clusters allow for high write performance, replica lag can still occur due to the
nature of engine-based replication. Because any failover must first resolve the replica lag before it
promotes a new writer DB instance, monitoring and managing this replica lag is a consideration.

For RDS for MySQL Multi-AZ DB clusters, failover time depends on replica lag of both remaining reader
DB instances. Both the reader DB instances must apply unapplied transactions before one of them is
promoted to the new writer DB instance.

For RDS for PostgreSQL Multi-AZ DB clusters, failover time depends on the lowest replica lag of the two
remaining reader DB instances. The reader DB instance with the lowest replica lag must apply unapplied
transactions before it is promoted to the new writer DB instance.

For a tutorial that shows you how to create a CloudWatch alarm when replica lag exceeds a set
amount of time, see Tutorial: Creating an Amazon CloudWatch alarm for Multi-AZ DB cluster replica
lag (p. 713).

Common causes of replica lag


In general, replica lag occurs when the write workload is too high for the reader DB instances to apply
the transactions efficiently. Various workloads can incur temporary or continuous replica lag. Some
examples of common causes are the following:

• High write concurrency or heavy batch updating on the writer DB instance, causing the apply process
on the reader DB instances to fall behind.
• Heavy read workload that is using resources on one or more reader DB instances. Running slow or
large queries can affect the apply process and can cause replica lag.
• Transactions that modify large amounts of data or DDL statements can sometimes cause a temporary
increase in replica lag because the database must preserve commit order.

Mitigating replica lag


For Multi-AZ DB clusters for RDS for MySQL and RDS for PostgreSQL, you can mitigate replica lag by
reducing the load on your writer DB instance. You can also use flow control to reduce replica lag. Flow
control works by throttling writes on the writer DB instance, which ensures that replica lag doesn't
continue to grow unbounded. Write throttling is accomplished by adding a delay into the end of a
transaction, which decreases the write throughput on the writer DB instance. Although flow control
doesn't guarantee lag elimination, it can help reduce overall lag in many workloads. The following
sections provide information about using flow control with RDS for MySQL and RDS for PostgreSQL.


Mitigating replica lag with flow control for RDS for MySQL
When you are using RDS for MySQL Multi-AZ DB clusters, flow control is turned on by default using the
dynamic parameter rpl_semi_sync_master_target_apply_lag. This parameter specifies the upper
limit that you want for replica lag. As replica lag approaches this configured limit, flow control throttles
the write transactions on the writer DB Instance to try to contain the replica lag below the specified
value. In some cases, replica lag can exceed the specified limit. By default, this parameter is set to 120
seconds. To turn off flow control, set this parameter to its maximum value of 86400 seconds (one day).
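
For example, the following command lowers the flow control target to 60 seconds in a custom DB cluster
parameter group. This is a sketch; the parameter group name is a placeholder, and the group must be a
custom DB cluster parameter group associated with your Multi-AZ DB cluster. Because the parameter is
dynamic, the change can be applied immediately.

aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-multi-az-cluster-params \
    --parameters "ParameterName=rpl_semi_sync_master_target_apply_lag,ParameterValue=60,ApplyMethod=immediate"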

To view the current delay injected by flow control, show the parameter
Rpl_semi_sync_master_flow_control_current_delay by running the following query.

SHOW GLOBAL STATUS like '%flow_control%';

Your output looks similar to the following.

+--------------------------------------------------+-------+
| Variable_name                                    | Value |
+--------------------------------------------------+-------+
| Rpl_semi_sync_master_flow_control_current_delay  | 2010  |
+--------------------------------------------------+-------+
1 row in set (0.00 sec)

Note
The delay is shown in microseconds.

When you have Performance Insights turned on for an RDS for MySQL Multi-AZ DB cluster, you can
monitor the wait event that indicates when SQL statements are delayed by flow control. When flow
control introduces a delay, the wait event /wait/synch/cond/semisync/semi_sync_flow_control_delay_cond
appears for the affected SQL statements on the Performance Insights dashboard. To view these metrics,
make sure that the Performance Schema is turned on. For information about Performance Insights, see
Monitoring DB load with Performance Insights on Amazon RDS (p. 720).

Mitigating replica lag with flow control for RDS for PostgreSQL
When you are using RDS for PostgreSQL Multi-AZ DB clusters, flow control is deployed as an extension. It
turns on a background worker for all DB instances in the DB cluster. By default, the background workers
on the reader DB instances communicate the current replica lag with the background worker on the
writer DB instance. If the lag exceeds two minutes on any reader DB instance, the background worker
on the writer DB instance adds a delay at the end of a transaction. To control the lag threshold, use the
parameter flow_control.target_standby_apply_lag.

When flow control throttles a PostgreSQL process, the Extension wait event in pg_stat_activity
and Performance Insights indicates it. The function get_flow_control_stats displays details
about how much delay is currently being added.
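
The following SQL sketch shows one way to check these signals from a psql session. It assumes the
defaults described above; the exact invocation and output columns of get_flow_control_stats can vary by
engine version.

-- Show the configured lag threshold for flow control.
SHOW flow_control.target_standby_apply_lag;

-- Show details about how much delay flow control is currently adding.
SELECT * FROM get_flow_control_stats();

-- Find sessions that are currently waiting on an Extension wait event, such as flow control throttling.
SELECT pid, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE wait_event_type = 'Extension';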

Flow control can benefit most online transaction processing (OLTP) workloads that have short but highly
concurrent transactions. If the lag is caused by long-running transactions, such as batch operations, flow
control doesn't provide as strong a benefit.

You can turn off flow control by removing the extension from the shared_preload_libraries parameter and
rebooting your DB instance.

Failover process for Multi-AZ DB clusters


If there is a planned or unplanned outage of your writer DB instance in a Multi-AZ DB cluster, Amazon
RDS automatically fails over to a reader DB instance in a different Availability Zone. The time it takes for
the failover to complete depends on the database activity and other conditions when the writer DB
instance became unavailable. Failover times are typically under 35 seconds. Failover completes when
both reader DB instances have applied outstanding transactions from the failed writer. When the failover
is complete, it can take additional time for the RDS console to reflect the new Availability Zone.

Topics
• Automatic failovers (p. 506)
• Manually failing over a Multi-AZ DB cluster (p. 506)
• Determining whether a Multi-AZ DB cluster has failed over (p. 506)
• Setting the JVM TTL for DNS name lookups (p. 507)

Automatic failovers
Amazon RDS handles failovers automatically so you can resume database operations as quickly as
possible without administrative intervention. To fail over, Amazon RDS promotes one of the reader DB
instances to be the new writer DB instance.

Manually failing over a Multi-AZ DB cluster


You can fail over a Multi-AZ DB cluster manually using the AWS Management Console, the AWS CLI, or
the RDS API.

Console

To fail over a Multi-AZ DB cluster manually

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the Multi-AZ DB cluster that you want to fail over.
4. For Actions, choose Failover.

The Failover DB Cluster page appears.


5. Choose Failover to confirm the manual failover.

AWS CLI
To fail over a Multi-AZ DB cluster manually, use the AWS CLI command failover-db-cluster.

Example

aws rds failover-db-cluster --db-cluster-identifier mymultiazdbcluster

RDS API
To fail over a Multi-AZ DB cluster manually, call the Amazon RDS API FailoverDBCluster and specify the
DBClusterIdentifier.

Determining whether a Multi-AZ DB cluster has failed over


To determine if your Multi-AZ DB cluster has failed over, you can do the following:

• Set up DB event subscriptions to notify you by email or SMS that a failover has been initiated. For
more information about events, see Working with Amazon RDS event notification (p. 855).

• View your DB events by using the Amazon RDS console or API operations.
• View the current state of your Multi-AZ DB cluster by using the Amazon RDS console, the AWS CLI, or
the RDS API. For an example AWS CLI check, see the sketch following this list.
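
For example, the following AWS CLI sketch lists recent events for a DB cluster and checks its current
status. The cluster identifier mymultiazdbcluster reuses the example name from this section; substitute
your own identifier.

# List events for the DB cluster from the last 24 hours (1440 minutes), including failover notifications.
aws rds describe-events \
    --source-type db-cluster \
    --source-identifier mymultiazdbcluster \
    --duration 1440

# Check the current status of the DB cluster.
aws rds describe-db-clusters \
    --db-cluster-identifier mymultiazdbcluster \
    --query 'DBClusters[0].Status'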

For information about how to respond to failovers, reduce recovery time, and follow other best practices
for Amazon RDS, see Best practices for Amazon RDS (p. 286).

Setting the JVM TTL for DNS name lookups


The failover mechanism automatically changes the Domain Name System (DNS) record of the DB
instance to point to the reader DB instance. As a result, you need to re-establish any existing connections
to your DB instance. In a Java virtual machine (JVM) environment, due to how the Java DNS caching
mechanism works, you might need to reconfigure JVM settings.

The JVM caches DNS name lookups. When the JVM resolves a host name to an IP address, it caches the IP
address for a specified period of time, known as the time-to-live (TTL).

Because AWS resources use DNS name entries that occasionally change, we recommend that you
configure your JVM with a TTL value of no more than 60 seconds. Doing this makes sure that when a
resource's IP address changes, your application can receive and use the resource's new IP address by
requerying the DNS.

On some Java configurations, the JVM default TTL is set so that it never refreshes DNS entries until
the JVM is restarted. Thus, if the IP address for an AWS resource changes while your application is still
running, it can't use that resource until you manually restart the JVM and the cached IP information
is refreshed. In this case, it's crucial to set the JVM's TTL so that it periodically refreshes its cached IP
information.
Note
The default TTL can vary according to the version of your JVM and whether a security manager
is installed. Many JVMs provide a default TTL less than 60 seconds. If you're using such a JVM
and not using a security manager, you can ignore the rest of this topic. For more information on
security managers in Oracle, see The security manager in the Oracle documentation.

To modify the JVM's TTL, set the networkaddress.cache.ttl property value. Use one of the
following methods, depending on your needs:

• To set the property value globally for all applications that use the JVM, set
networkaddress.cache.ttl in the $JAVA_HOME/jre/lib/security/java.security file.

networkaddress.cache.ttl=60

• To set the property locally for your application only, set networkaddress.cache.ttl in your
application's initialization code before any network connections are established.

java.security.Security.setProperty("networkaddress.cache.ttl", "60");


Creating a Multi-AZ DB cluster


A Multi-AZ DB cluster has a writer DB instance and two reader DB instances in three separate Availability
Zones. Multi-AZ DB clusters provide high availability, increased capacity for read workloads, and lower
latency when compared to Multi-AZ deployments. For more information about Multi-AZ DB clusters, see
Multi-AZ DB cluster deployments (p. 499).
Note
Multi-AZ DB clusters are supported only for the MySQL and PostgreSQL DB engines.

DB cluster prerequisites
Important
Before you can create a Multi-AZ DB cluster, you must complete the tasks in Setting up for
Amazon RDS (p. 174).

The following are prerequisites to complete before creating a Multi-AZ DB cluster.

Topics
• Configure the network for the DB cluster (p. 508)
• Additional prerequisites (p. 511)

Configure the network for the DB cluster


You can create a Multi-AZ DB cluster only in a virtual private cloud (VPC) based on the Amazon VPC
service. The VPC must be in an AWS Region that has at least three Availability Zones, and the DB subnet
group that you choose for the DB cluster must cover at least three of those Availability Zones. This
configuration ensures that each DB instance in the DB cluster is in a different Availability Zone.

To set up connectivity between your new DB cluster and an Amazon EC2 instance in the same VPC, do so
when you create the DB cluster. To connect to your DB cluster from resources other than EC2 instances in
the same VPC, configure the network connections manually.

Topics
• Configure automatic network connectivity with an EC2 instance (p. 508)
• Configure the network manually (p. 510)

Configure automatic network connectivity with an EC2 instance

When you create a Multi-AZ DB cluster, you can use the AWS Management Console to set up connectivity
between an EC2 instance and the new DB cluster. When you do so, RDS configures your VPC and network
settings automatically. The DB cluster is created in the same VPC as the EC2 instance so that the EC2
instance can access the DB cluster.

The following are requirements for connecting an EC2 instance with the DB cluster:

• The EC2 instance must exist in the AWS Region before you create the DB cluster.

If no EC2 instances exist in the AWS Region, the console provides a link to create one.
• The user who is creating the DB cluster must have permissions to perform the following operations:
• ec2:AssociateRouteTable
• ec2:AuthorizeSecurityGroupEgress
• ec2:AuthorizeSecurityGroupIngress
• ec2:CreateRouteTable

• ec2:CreateSubnet
• ec2:CreateSecurityGroup
• ec2:DescribeInstances
• ec2:DescribeNetworkInterfaces
• ec2:DescribeRouteTables
• ec2:DescribeSecurityGroups
• ec2:DescribeSubnets
• ec2:ModifyNetworkInterfaceAttribute
• ec2:RevokeSecurityGroupEgress

Using this option creates a private DB cluster. The DB cluster uses a DB subnet group with only private
subnets to restrict access to resources within the VPC.

To connect an EC2 instance to the DB cluster, choose Connect to an EC2 compute resource in the
Connectivity section on the Create database page.

When you choose Connect to an EC2 compute resource, RDS sets the following options automatically.
You can't change these settings unless you choose not to set up connectivity with an EC2 instance by
choosing Don't connect to an EC2 compute resource.

Console option: Virtual Private Cloud (VPC)
Automatic setting: RDS sets the VPC to the one associated with the EC2 instance.

Console option: DB subnet group
Automatic setting: RDS requires a DB subnet group with a private subnet in the same Availability Zone as the EC2 instance. If a DB subnet group that meets this requirement exists, then RDS uses the existing DB subnet group. By default, this option is set to Automatic setup.

When you choose Automatic setup and there is no DB subnet group that meets this requirement, the following action happens. RDS uses three available private subnets in three Availability Zones where one of the Availability Zones is the same as the EC2 instance. If a private subnet isn't available in an Availability Zone, RDS creates a private subnet in the Availability Zone. Then RDS creates the DB subnet group.

When a private subnet is available, RDS uses the route table associated with the subnet and adds any subnets it creates to this route table. When no private subnet is available, RDS creates a route table without internet gateway access and adds the subnets it creates to the route table.

RDS also allows you to use existing DB subnet groups. Select Choose existing if you want to use an existing DB subnet group of your choice.

Console option: Public access
Automatic setting: RDS chooses No so that the DB cluster isn't publicly accessible. For security, it is a best practice to keep the database private and make sure it isn't accessible from the internet.

Console option: VPC security group (firewall)
Automatic setting: RDS creates a new security group that is associated with the DB cluster. The security group is named rds-ec2-n, where n is a number. This security group includes an inbound rule with the EC2 VPC security group (firewall) as the source. This security group that is associated with the DB cluster allows the EC2 instance to access the DB cluster.

RDS also creates a new security group that is associated with the EC2 instance. The security group is named ec2-rds-n, where n is a number. This security group includes an outbound rule with the VPC security group of the DB cluster as the source. This security group allows the EC2 instance to send traffic to the DB cluster.

You can add another new security group by choosing Create new and typing the name of the new security group.

You can add existing security groups by choosing Choose existing and selecting security groups to add.

Console option: Availability Zone
Automatic setting: RDS chooses the Availability Zone of the EC2 instance for one DB instance in the Multi-AZ DB cluster deployment. RDS randomly chooses a different Availability Zone for both of the other DB instances. The writer DB instance is created in the same Availability Zone as the EC2 instance. There is the possibility of cross Availability Zone costs if a failover occurs and the writer DB instance is in a different Availability Zone.

For more information about these settings, see Settings for creating Multi-AZ DB clusters (p. 514).

If you change these settings after the DB cluster is created, the changes might affect the connection
between the EC2 instance and the DB cluster.

Configure the network manually

To connect to your DB cluster from resources other than EC2 instances in the same VPC, configure the
network connections manually. If you use the AWS Management Console to create your Multi-AZ DB
cluster, you can have Amazon RDS automatically create a VPC for you. Or you can use an existing VPC
or create a new VPC for your Multi-AZ DB cluster. Your VPC must have at least one subnet in each of at
least three Availability Zones for you to use it with a Multi-AZ DB cluster. For information on VPCs, see
Amazon VPC VPCs and Amazon RDS (p. 2688).

If you don't have a default VPC or you haven't created a VPC, and you don't plan to use the console, do
the following:

• Create a VPC with at least one subnet in each of at least three of the Availability Zones in the AWS
Region where you want to deploy your DB cluster. For more information, see Working with a DB
instance in a VPC (p. 2689).
• Specify a VPC security group that authorizes connections to your DB cluster. For more information, see
Provide access to your DB instance in your VPC by creating a security group (p. 177) and Controlling
access with security groups (p. 2680).
• Specify an RDS DB subnet group that defines at least three subnets in the VPC that can be used by the
Multi-AZ DB cluster. For more information, see Working with DB subnet groups (p. 2689).

For information about limitations that apply to Multi-AZ DB clusters, see Limitations for Multi-AZ DB
clusters (p. 501).

If you want to connect to a resource that isn't in the same VPC as the Multi-AZ DB cluster, see the
appropriate scenarios in Scenarios for accessing a DB instance in a VPC (p. 2701).
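
The following AWS CLI sketch shows the DB subnet group part of this manual setup, assuming that you
have already created a VPC with subnets in three Availability Zones. The subnet group name and subnet
IDs are placeholders.

# Create a DB subnet group that covers three Availability Zones.
aws rds create-db-subnet-group \
    --db-subnet-group-name my-multi-az-subnet-group \
    --db-subnet-group-description "Subnets in three AZs for a Multi-AZ DB cluster" \
    --subnet-ids subnet-0aaaa1111 subnet-0bbbb2222 subnet-0cccc3333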

Additional prerequisites
Before you create your Multi-AZ DB cluster, consider the following additional prerequisites:

• To connect to AWS using AWS Identity and Access Management (IAM) credentials, your AWS account
must have certain IAM policies. These grant the permissions required to perform Amazon RDS
operations. For more information, see Identity and access management for Amazon RDS (p. 2606).

If you use IAM to access the RDS console, first sign in to the AWS Management Console with your IAM
user credentials. Then go to the RDS console at https://console.aws.amazon.com/rds/.
• To tailor the configuration parameters for your DB cluster, specify a DB cluster parameter group with
the required parameter settings. For information about creating or modifying a DB cluster parameter
group, see Working with parameter groups for Multi-AZ DB clusters (p. 503).
• Determine the TCP/IP port number to specify for your DB cluster. The firewalls at some companies
block connections to the default ports. If your company firewall blocks the default port, choose
another port for your DB cluster. All DB instances in a DB cluster use the same port.

Creating a DB cluster
You can create a Multi-AZ DB cluster using the AWS Management Console, the AWS CLI, or the RDS API.

Console

You can create a Multi-AZ DB cluster by choosing Multi-AZ DB cluster in the Availability and durability
section.

To create a Multi-AZ DB cluster using the console

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the AWS Management Console, choose the AWS Region in which you
want to create the DB cluster.

For information about the AWS Regions that support Multi-AZ DB clusters, see Limitations for Multi-
AZ DB clusters (p. 501).
3. In the navigation pane, choose Databases.
4. Choose Create database.

To create a Multi-AZ DB cluster, make sure that Standard Create is selected and Easy Create isn't.


5. In Engine type, choose MySQL or PostgreSQL.


6. For Version, choose the DB engine version.

For information about the DB engine versions that support Multi-AZ DB clusters, see Limitations for
Multi-AZ DB clusters (p. 501).
7. In Templates, choose the appropriate template for your deployment.
8. In Availability and durability, choose Multi-AZ DB cluster.

9. In DB cluster identifier, enter the identifier for your DB cluster.


10. In Master username, enter your master user name, or keep the default setting.
11. Enter your master password:

a. In the Settings section, open Credential Settings.


b. If you want to specify a password, clear the Auto generate a password box if it is selected.
c. (Optional) Change the Master username value.
d. Enter the same password in Master password and Confirm password.
12. For DB instance class, choose a DB instance class. For supported instance classes, see the DB
instance class row in the section called “Available settings” (p. 514).
13. (Optional) Set up a connection to a compute resource for this DB cluster.

You can configure connectivity between an Amazon EC2 instance and the new DB cluster during DB
cluster creation. For more information, see Configure automatic network connectivity with an EC2
instance (p. 508).
14. In the Connectivity section under VPC security group (firewall), if you select Create new, a VPC
security group is created with an inbound rule that allows your local computer's IP address to access
the database.
15. For the remaining sections, specify your DB cluster settings. For information about each setting, see
Settings for creating Multi-AZ DB clusters (p. 514).
16. Choose Create database.

If you chose to use an automatically generated password, the View credential details button
appears on the Databases page.

To view the master user name and password for the DB cluster, choose View credential details.

To connect to the DB cluster as the master user, use the user name and password that appear.
Important
You can't view the master user password again.
17. For Databases, choose the name of the new DB cluster.


On the RDS console, the details for the new DB cluster appear. The DB cluster has a status of Creating
until the DB cluster is created and ready for use. When the state changes to Available, you can connect
to the DB cluster. Depending on the DB cluster class and storage allocated, it can take several minutes
for the new DB cluster to be available.

AWS CLI

Before you create a Multi-AZ DB cluster using the AWS CLI, make sure to fulfill the required prerequisites.
These include creating a VPC and an RDS DB subnet group. For more information, see DB cluster
prerequisites (p. 508).

To create a Multi-AZ DB cluster by using the AWS CLI, call the create-db-cluster command. Specify the
--db-cluster-identifier option. For the --engine option, specify either mysql or postgres.

For information about each option, see Settings for creating Multi-AZ DB clusters (p. 514).

For information about the AWS Regions, DB engines, and DB engine versions that support Multi-AZ DB
clusters, see Limitations for Multi-AZ DB clusters (p. 501).

The create-db-cluster command creates the writer DB instance for your DB cluster, and two reader
DB instances. Each DB instance is in a different Availability Zone.

For example, the following command creates a MySQL 8.0 Multi-AZ DB cluster named mysql-multi-
az-db-cluster.

Example

For Linux, macOS, or Unix:

aws rds create-db-cluster \
    --db-cluster-identifier mysql-multi-az-db-cluster \
    --engine mysql \
    --engine-version 8.0.28 \
    --master-username admin \
    --manage-master-user-password \
    --port 3306 \
    --backup-retention-period 1 \
    --db-subnet-group-name default \
    --allocated-storage 4000 \
    --storage-type io1 \
    --iops 10000 \
    --db-cluster-instance-class db.m5d.xlarge

For Windows:

aws rds create-db-cluster ^
    --db-cluster-identifier mysql-multi-az-db-cluster ^
    --engine mysql ^
    --engine-version 8.0.28 ^
    --manage-master-user-password ^
    --master-username admin ^
    --port 3306 ^
    --backup-retention-period 1 ^
    --db-subnet-group-name default ^
    --allocated-storage 4000 ^
    --storage-type io1 ^
    --iops 10000 ^
    --db-cluster-instance-class db.m5d.xlarge

The following command creates a PostgreSQL 13.4 Multi-AZ DB cluster named postgresql-multi-
az-db-cluster.


Example

For Linux, macOS, or Unix:

aws rds create-db-cluster \
    --db-cluster-identifier postgresql-multi-az-db-cluster \
    --engine postgres \
    --engine-version 13.4 \
    --manage-master-user-password \
    --master-username postgres \
    --port 5432 \
    --backup-retention-period 1 \
    --db-subnet-group-name default \
    --allocated-storage 4000 \
    --storage-type io1 \
    --iops 10000 \
    --db-cluster-instance-class db.m5d.xlarge

For Windows:

aws rds create-db-cluster ^
    --db-cluster-identifier postgresql-multi-az-db-cluster ^
    --engine postgres ^
    --engine-version 13.4 ^
    --manage-master-user-password ^
    --master-username postgres ^
    --port 5432 ^
    --backup-retention-period 1 ^
    --db-subnet-group-name default ^
    --allocated-storage 4000 ^
    --storage-type io1 ^
    --iops 10000 ^
    --db-cluster-instance-class db.m5d.xlarge

RDS API

Before you can create a Multi-AZ DB cluster using the RDS API, make sure to fulfill the required
prerequisites, such as creating a VPC and an RDS DB subnet group. For more information, see DB cluster
prerequisites (p. 508).

To create a Multi-AZ DB cluster by using the RDS API, call the CreateDBCluster operation. Specify the
DBClusterIdentifier parameter. For the Engine parameter, specify either mysql or postgres.

For information about each option, see Settings for creating Multi-AZ DB clusters (p. 514).

The CreateDBCluster operation creates the writer DB instance for your DB cluster, and two reader DB
instances. Each DB instance is in a different Availability Zone.

Settings for creating Multi-AZ DB clusters


For details about the settings that you choose when you create a Multi-AZ DB cluster, see the following
list. Each entry shows the console setting, its description, and the corresponding AWS CLI option and RDS
API parameter. For more information about the AWS CLI options, see create-db-cluster. For more
information about the RDS API parameters, see CreateDBCluster.

Console setting: Allocated storage
Setting description: The amount of storage to allocate for each DB instance in your DB cluster (in gibibytes). For more information, see Amazon RDS DB instance storage (p. 101).
CLI option: --allocated-storage
RDS API parameter: AllocatedStorage

Console setting: Auto minor version upgrade
Setting description: Enable auto minor version upgrade to have your DB cluster receive preferred minor DB engine version upgrades automatically when they become available. Amazon RDS performs automatic minor version upgrades in the maintenance window.
CLI option: --auto-minor-version-upgrade | --no-auto-minor-version-upgrade
RDS API parameter: AutoMinorVersionUpgrade

Console setting: Backup retention period
Setting description: The number of days that you want automatic backups of your DB cluster to be retained. For a Multi-AZ DB cluster, this value must be set to 1 or greater. For more information, see Working with backups (p. 591).
CLI option: --backup-retention-period
RDS API parameter: BackupRetentionPeriod

Console setting: Backup window
Setting description: The time period during which Amazon RDS automatically takes a backup of your DB cluster. Unless you have a specific time that you want to have your database backed up, use the default of No preference. For more information, see Working with backups (p. 591).
CLI option: --preferred-backup-window
RDS API parameter: PreferredBackupWindow

Console setting: Copy tags to snapshots
Setting description: This option copies any DB cluster tags to a DB snapshot when you create a snapshot. For more information, see Tagging Amazon RDS resources (p. 461).
CLI option: --copy-tags-to-snapshot | --no-copy-tags-to-snapshot
RDS API parameter: CopyTagsToSnapshot

Console setting: Database authentication
Setting description: For Multi-AZ DB clusters, only Password authentication is supported.
CLI option and RDS API parameter: None, because password authentication is the default.

Console setting: Database port
Setting description: The port that you want to access the DB cluster through. The default port is shown. The port can't be changed after the DB cluster is created. The firewalls at some companies block connections to the default ports. If your company firewall blocks the default port, enter another port for your DB cluster.
CLI option: --port
RDS API parameter: Port

Console setting: DB cluster identifier
Setting description: The name for your DB cluster. Name your DB clusters in the same way that you name your on-premises servers. Your DB cluster identifier can contain up to 63 alphanumeric characters, and must be unique for your account in the AWS Region you chose.
CLI option: --db-cluster-identifier
RDS API parameter: DBClusterIdentifier

Console setting: DB instance class
Setting description: The compute and memory capacity of each DB instance in the Multi-AZ DB cluster, for example db.m5d.xlarge. If possible, choose a DB instance class large enough that a typical query working set can be held in memory. When working sets are held in memory, the system can avoid writing to disk, which improves performance. Currently, Multi-AZ DB clusters only support the db.m5d, db.m6gd, db.r5d, and db.x2iedn DB instance classes. For more information about DB instance classes, see DB instance classes (p. 11).
CLI option: --db-cluster-instance-class
RDS API parameter: DBClusterInstanceClass

Console setting: DB cluster parameter group
Setting description: The DB cluster parameter group that you want associated with the DB cluster. For more information, see Working with parameter groups for Multi-AZ DB clusters (p. 503).
CLI option: --db-cluster-parameter-group-name
RDS API parameter: DBClusterParameterGroupName

Console setting: DB engine version
Setting description: The version of the database engine that you want to use.
CLI option: --engine-version
RDS API parameter: EngineVersion

Console setting: DB parameter group
Setting description: The DB instance parameter group that you want associated with the DB instances in the DB cluster. For more information, see Working with parameter groups for Multi-AZ DB clusters (p. 503).
CLI option and RDS API parameter: Not applicable. Amazon RDS associates each DB instance with the appropriate default parameter group.

Console setting: DB subnet group
Setting description: The DB subnet group that you want to use for the DB cluster. Select Choose existing to use an existing DB subnet group. Then choose the required subnet group from the Existing DB subnet groups dropdown list. Choose Automatic setup to let RDS select a compatible DB subnet group. If none exist, RDS creates a new subnet group for your cluster. For more information, see Working with DB subnet groups (p. 2689).
CLI option: --db-subnet-group-name
RDS API parameter: DBSubnetGroupName

Console setting: Deletion protection
Setting description: Enable deletion protection to prevent your DB cluster from being deleted. If you create a production DB cluster with the console, deletion protection is turned on by default. For more information, see Deleting a DB instance (p. 489).
CLI option: --deletion-protection | --no-deletion-protection
RDS API parameter: DeletionProtection

Console setting: Encryption
Setting description: Enable Encryption to turn on encryption at rest for this DB cluster. Encryption is turned on by default for Multi-AZ DB clusters. For more information, see Encrypting Amazon RDS resources (p. 2586).
CLI options: --kms-key-id, --storage-encrypted | --no-storage-encrypted
RDS API parameters: KmsKeyId, StorageEncrypted

Console setting: Enhanced Monitoring
Setting description: Enable enhanced monitoring to turn on metrics gathering in real time for the operating system that your DB cluster runs on. For more information, see Monitoring OS metrics with Enhanced Monitoring (p. 797).
CLI options: --monitoring-interval, --monitoring-role-arn
RDS API parameters: MonitoringInterval, MonitoringRoleArn

Console setting: Initial database name
Setting description: The name for the database on your DB cluster. If you don't provide a name, Amazon RDS doesn't create a database on the DB cluster for MySQL. However, it does create a database on the DB cluster for PostgreSQL. The name can't be a word reserved by the database engine. It has other constraints depending on the DB engine.

MySQL:
• It must contain 1–64 alphanumeric characters.

PostgreSQL:
• It must contain 1–63 alphanumeric characters.
• It must begin with a letter or an underscore. Subsequent characters can be letters, underscores, or digits (0-9).
• The initial database name is postgres.

CLI option: --database-name
RDS API parameter: DatabaseName

Console setting: Log exports
Setting description: The types of database log files to publish to Amazon CloudWatch Logs. For more information, see Publishing database logs to Amazon CloudWatch Logs (p. 898).
CLI option: --enable-cloudwatch-logs-exports
RDS API parameter: EnableCloudwatchLogsExports

Console setting: Maintenance window
Setting description: The 30-minute window in which pending modifications to your DB cluster are applied. If the time period doesn't matter, choose No preference. For more information, see The Amazon RDS maintenance window (p. 423).
CLI option: --preferred-maintenance-window
RDS API parameter: PreferredMaintenanceWindow

Console setting: Manage master credentials in AWS Secrets Manager
Setting description: Select Manage master credentials in AWS Secrets Manager to manage the master user password in a secret in Secrets Manager. Optionally, choose a KMS key to use to protect the secret. Choose from the KMS keys in your account, or enter the key from a different account. For more information, see Password management with Amazon RDS and AWS Secrets Manager (p. 2568).
CLI options: --manage-master-user-password | --no-manage-master-user-password, --master-user-secret-kms-key-id
RDS API parameters: ManageMasterUserPassword, MasterUserSecretKmsKeyId

Console setting: Master password
Setting description: The password for your master user account.
CLI option: --master-user-password
RDS API parameter: MasterUserPassword

Console setting: Master username
Setting description: The name that you use as the master user name to log on to your DB cluster with all database privileges.
• It can contain 1–16 alphanumeric characters and underscores.
• Its first character must be a letter.
• It can't be a word reserved by the database engine.
You can't change the master user name after the Multi-AZ DB cluster is created. For more information on privileges granted to the master user, see Master user account privileges (p. 2682).
CLI option: --master-username
RDS API parameter: MasterUsername

Console setting: Performance Insights
Setting description: Enable Performance Insights to monitor your DB cluster load so that you can analyze and troubleshoot your database performance. Choose a retention period to determine how much Performance Insights data history to keep. The retention setting in the free tier is Default (7 days). To retain your performance data for longer, specify 1–24 months. For more information about retention periods, see Pricing and data retention for Performance Insights (p. 726). Choose a master key to use to protect the key used to encrypt this database volume. Choose from the master keys in your account, or enter the key from a different account. For more information, see Monitoring DB load with Performance Insights on Amazon RDS (p. 720).
CLI options: --enable-performance-insights | --no-enable-performance-insights, --performance-insights-retention-period, --performance-insights-kms-key-id
RDS API parameters: EnablePerformanceInsights, PerformanceInsightsRetentionPeriod, PerformanceInsightsKMSKeyId

Console setting: Provisioned IOPS
Setting description: The amount of Provisioned IOPS (input/output operations per second) to be initially allocated for the DB cluster. This setting is available only if Provisioned IOPS (io1) is selected as the storage type. For more information, see Provisioned IOPS SSD storage (p. 104).
CLI option: --iops
RDS API parameter: Iops

Console setting: Public access
Setting description: Publicly accessible to give the DB cluster a public IP address, meaning that it's accessible outside the VPC. To be publicly accessible, the DB cluster also has to be in a public subnet in the VPC. Not publicly accessible to make the DB cluster accessible only from inside the VPC. For more information, see Hiding a DB instance in a VPC from the internet (p. 2695).
To connect to a DB cluster from outside of its VPC, the DB cluster must be publicly accessible. Also, access must be granted using the inbound rules of the DB cluster's security group, and other requirements must be met. For more information, see Can't connect to Amazon RDS DB instance (p. 2727).
If your DB cluster isn't publicly accessible, you can use an AWS Site-to-Site VPN connection or an AWS Direct Connect connection to access it from a private network. For more information, see Internetwork traffic privacy (p. 2605).
CLI option: --publicly-accessible | --no-publicly-accessible
RDS API parameter: PubliclyAccessible

Console setting: Storage type
Setting description: The storage type for your DB cluster. Only Provisioned IOPS (io1) storage is supported. General Purpose (gp2) and General Purpose (gp3) storage aren't supported. For more information, see Amazon RDS storage types (p. 101).
CLI option: --storage-type
RDS API parameter: StorageType

Console setting: Virtual Private Cloud (VPC)
Setting description: A VPC based on the Amazon VPC service to associate with this DB cluster. For more information, see Amazon VPC VPCs and Amazon RDS (p. 2688).
CLI option and RDS API parameter: For the CLI and API, you specify the VPC security group IDs.

Console setting: VPC security group (firewall)
Setting description: The security groups to associate with the DB cluster. For more information, see Overview of VPC security groups (p. 2680).
CLI option: --vpc-security-group-ids
RDS API parameter: VpcSecurityGroupIds

Settings that don't apply when creating Multi-AZ DB clusters


The following settings in the AWS CLI command create-db-cluster and the RDS API operation
CreateDBCluster don't apply to Multi-AZ DB clusters.

You also can't specify these settings for Multi-AZ DB clusters in the console.

AWS CLI setting RDS API setting

--availability-zones AvailabilityZones

--backtrack-window BacktrackWindow

--character-set-name CharacterSetName

--domain Domain

--domain-iam-role-name DomainIAMRoleName

--enable-global-write-forwarding | --no-enable-global-write-forwarding          EnableGlobalWriteForwarding

--enable-http-endpoint | --no-enable-http-endpoint          EnableHttpEndpoint

--enable-iam-database-authentication | --no-enable-iam-database-authentication          EnableIAMDatabaseAuthentication

--global-cluster-identifier GlobalClusterIdentifier

--option-group-name OptionGroupName

--pre-signed-url PreSignedUrl

--replication-source-identifier ReplicationSourceIdentifier

--scaling-configuration ScalingConfiguration


Connecting to a Multi-AZ DB cluster


A Multi-AZ DB cluster has three DB instances instead of a single DB instance. Each connection is handled
by a specific DB instance. When you connect to a Multi-AZ DB cluster, the hostname and port that you
specify point to a fully qualified domain name called an endpoint. The Multi-AZ DB cluster uses the
endpoint mechanism to abstract these connections so that you don't need to specify exactly which DB
instance in the DB cluster to connect to. Thus, you don't have to hardcode all the hostnames or write
your own logic for rerouting connections when some DB instances aren't available.

The writer endpoint connects to the writer DB instance of the DB cluster, which supports both read and
write operations. The reader endpoint connects to either of the two reader DB instances, which support
only read operations.

Using endpoints, you can map each connection to the appropriate DB instance or group of DB instances
based on your use case. For example, to perform DDL and DML statements, you can connect to
whichever DB instance is the writer DB instance. To perform queries, you can connect to the reader
endpoint, with the Multi-AZ DB cluster automatically managing connections among the reader DB
instances. For diagnosis or tuning, you can connect to a specific DB instance endpoint to examine details
about a specific DB instance.

For information about connecting to a DB instance, see Connecting to an Amazon RDS DB instance (p. 325).

Topics
• Types of Multi-AZ DB cluster endpoints (p. 522)
• Viewing the endpoints for a Multi-AZ DB cluster (p. 523)
• Using the cluster endpoint (p. 523)
• Using the reader endpoint (p. 524)
• Using the instance endpoints (p. 524)
• How Multi-AZ DB endpoints work with high availability (p. 524)

Types of Multi-AZ DB cluster endpoints


An endpoint is represented by a unique identifier that contains a host address. The following types of
endpoints are available from a Multi-AZ DB cluster:

Cluster endpoint

A cluster endpoint (or writer endpoint) for a Multi-AZ DB cluster connects to the current writer DB
instance for that DB cluster. This endpoint is the only one that can perform write operations such as
DDL and DML statements. This endpoint can also perform read operations.

Each Multi-AZ DB cluster has one cluster endpoint and one writer DB instance.

You use the cluster endpoint for all write operations on the DB cluster, including inserts, updates,
deletes, and DDL changes. You can also use the cluster endpoint for read operations, such as queries.

If the current writer DB instance of a DB cluster fails, the Multi-AZ DB cluster automatically fails over
to a new writer DB instance. During a failover, the DB cluster continues to serve connection requests
to the cluster endpoint from the new writer DB instance, with minimal interruption of service.

The following example illustrates a cluster endpoint for a Multi-AZ DB cluster.

mydbcluster.cluster-123456789012.us-east-1.rds.amazonaws.com


Reader endpoint

A reader endpoint for a Multi-AZ DB cluster provides support for read-only connections to the
DB cluster. Use the reader endpoint for read operations, such as SELECT queries. By processing
those statements on the reader DB instances, this endpoint reduces the overhead on the writer DB
instance. It also helps the cluster to scale the capacity to handle simultaneous SELECT queries. Each
Multi-AZ DB cluster has one reader endpoint.

The reader endpoint sends each connection request to one of the reader DB instances. When you
use the reader endpoint for a session, you can only perform read-only statements such as SELECT in
that session.

The following example illustrates a reader endpoint for a Multi-AZ DB cluster. The read-only intent
of a reader endpoint is denoted by the -ro within the cluster endpoint name.

mydbcluster.cluster-ro-123456789012.us-east-1.rds.amazonaws.com
Instance endpoint

An instance endpoint connects to a specific DB instance within a Multi-AZ DB cluster. Each DB
instance in a DB cluster has its own unique instance endpoint. So there is one instance endpoint for
the current writer DB instance of the DB cluster, and there is one instance endpoint for each of the
reader DB instances in the DB cluster.

The instance endpoint provides direct control over connections to the DB cluster. This control can
help you address scenarios where using the cluster endpoint or reader endpoint might not be
appropriate. For example, your client application might require more fine-grained load balancing
based on workload type. In this case, you can configure multiple clients to connect to different
reader DB instances in a DB cluster to distribute read workloads.

The following example illustrates an instance endpoint for a DB instance in a Multi-AZ DB cluster.

mydbinstance.123456789012.us-east-1.rds.amazonaws.com

Viewing the endpoints for a Multi-AZ DB cluster


In the AWS Management Console, you see the cluster endpoint and the reader endpoint on the details
page for each Multi-AZ DB cluster. You see the instance endpoint in the details page for each DB
instance.

With the AWS CLI, you see the writer and reader endpoints in the output of the describe-db-clusters
command. You can also list the endpoint attributes for all DB clusters in your current AWS Region by
using the describe-db-cluster-endpoints command, as in the following example.

aws rds describe-db-cluster-endpoints
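
For example, the following command sketch returns only the writer and reader endpoints of a single
Multi-AZ DB cluster from the describe-db-clusters output. The cluster identifier mydbcluster is a
placeholder.

aws rds describe-db-clusters \
    --db-cluster-identifier mydbcluster \
    --query 'DBClusters[0].[Endpoint,ReaderEndpoint]'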

With the Amazon RDS API, you retrieve the endpoints by calling the DescribeDBClusterEndpoints action.
The output also shows Amazon Aurora DB cluster endpoints, if any exist.

Using the cluster endpoint


Each Multi-AZ DB cluster has a single built-in cluster endpoint, whose name and other attributes are
managed by Amazon RDS. You can't create, delete, or modify this kind of endpoint.

You use the cluster endpoint when you administer your DB cluster, perform extract, transform, load (ETL)
operations, or develop and test applications. The cluster endpoint connects to the writer DB instance of
the cluster. The writer DB instance is the only DB instance where you can create tables and indexes, run
INSERT statements, and perform other DDL and DML operations.
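
For example, you might connect a MySQL client to the cluster endpoint shown earlier in this topic. The
endpoint, port, and user name are illustrative; substitute the values for your own DB cluster.

mysql -h mydbcluster.cluster-123456789012.us-east-1.rds.amazonaws.com -P 3306 -u admin -p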


The physical IP address pointed to by the cluster endpoint changes when the failover mechanism
promotes a new DB instance to be the writer DB instance for the cluster. If you use any form of
connection pooling or other multiplexing, be prepared to flush or reduce the time-to-live for any cached
DNS information. Doing so ensures that you don't try to establish a read/write connection to a DB
instance that became unavailable or is now read-only after a failover.

Using the reader endpoint


You use the reader endpoint for read-only connections to your Multi-AZ DB cluster. This endpoint helps
your DB cluster handle a query-intensive workload. The reader endpoint is the endpoint that you supply
to applications that do reporting or other read-only operations on the cluster. The reader endpoint sends
connections to available reader DB instances in a Multi-AZ DB cluster.

Each Multi-AZ cluster has a single built-in reader endpoint, whose name and other attributes are
managed by Amazon RDS. You can't create, delete, or modify this kind of endpoint.
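
For example, a reporting job for an RDS for PostgreSQL Multi-AZ DB cluster might open a read-only
session through the reader endpoint with psql. The endpoint, port, user name, and database name are
illustrative.

psql -h mydbcluster.cluster-ro-123456789012.us-east-1.rds.amazonaws.com -p 5432 -U postgres -d postgres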

Using the instance endpoints


Each DB instance in a Multi-AZ DB cluster has its own built-in instance endpoint, whose name and other
attributes are managed by Amazon RDS. You can't create, delete, or modify this kind of endpoint. With
a Multi-AZ DB cluster, you typically use the writer and reader endpoints more often than the instance
endpoints.

In day-to-day operations, the main way that you use instance endpoints is to diagnose capacity or
performance issues that affect one specific DB instance in a Multi-AZ DB cluster. While connected to a
specific DB instance, you can examine its status variables, metrics, and so on. Doing this can help you
determine what's happening for that DB instance that's different from what's happening for other DB
instances in the cluster.

How Multi-AZ DB endpoints work with high availability


For Multi-AZ DB clusters where high availability is important, use the writer endpoint for read/write or
general-purpose connections and the reader endpoint for read-only connections. The writer and reader
endpoints manage DB instance failover better than instance endpoints do. Unlike the instance endpoints,
the writer and reader endpoints automatically change which DB instance they connect to if a DB instance
in your cluster becomes unavailable.

If the writer DB instance of a DB cluster fails, Amazon RDS automatically fails over to a new writer DB
instance. It does so by promoting a reader DB instance to a new writer DB instance. If a failover occurs,
you can use the writer endpoint to reconnect to the newly promoted writer DB instance. Or you can
use the reader endpoint to reconnect to one of the reader DB instances in the DB cluster. During a
failover, the reader endpoint might direct connections to the new writer DB instance of a DB cluster for
a short time after a reader DB instance is promoted to the new writer DB instance. If you design your
own application logic to manage instance endpoint connections, you can manually or programmatically
discover the resulting set of available DB instances in the DB cluster.
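
For example, the following AWS CLI sketch lists the DB instances in a Multi-AZ DB cluster and shows which
one is currently the writer. You could use output like this to rebuild a list of instance endpoints after a
failover. The cluster identifier mydbcluster is a placeholder; confirm the fields in your own
describe-db-clusters output.

aws rds describe-db-clusters \
    --db-cluster-identifier mydbcluster \
    --query 'DBClusters[0].DBClusterMembers[*].[DBInstanceIdentifier,IsClusterWriter]' \
    --output table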


Automatically connecting an AWS compute resource and a Multi-AZ DB cluster

You can automatically connect a Multi-AZ DB cluster and AWS compute resources such as Amazon Elastic
Compute Cloud (Amazon EC2) instances and AWS Lambda functions.

Topics
• Automatically connecting an EC2 instance and a Multi-AZ DB cluster (p. 525)
• Automatically connecting a Lambda function and a Multi-AZ DB cluster (p. 530)

Automatically connecting an EC2 instance and a Multi-AZ DB cluster

You can use the Amazon RDS console to simplify setting up a connection between an Amazon Elastic
Compute Cloud (Amazon EC2) instance and a Multi-AZ DB cluster. Often, your Multi-AZ DB cluster is in
a private subnet and your EC2 instance is in a public subnet within a VPC. You can use a SQL client on
your EC2 instance to connect to your Multi-AZ DB cluster. The EC2 instance can also run web servers or
applications that access your private Multi-AZ DB cluster.

If you want to connect to an EC2 instance that isn't in the same VPC as the Multi-AZ DB cluster, see the
scenarios in the section called “Scenarios for accessing a DB instance in a VPC” (p. 2701).

Topics
• Overview of automatic connectivity with an EC2 instance (p. 525)
• Connecting an EC2 instance and a Multi-AZ DB cluster automatically (p. 528)
• Viewing connected compute resources (p. 529)

Overview of automatic connectivity with an EC2 instance


When you set up a connection between an EC2 instance and a Multi-AZ DB cluster automatically,
Amazon RDS configures the VPC security group for your EC2 instance and for your DB cluster.

The following are requirements for connecting an EC2 instance with a Multi-AZ DB cluster:

• The EC2 instance must exist in the same VPC as the Multi-AZ DB cluster.

If no EC2 instances exist in the same VPC, the console provides a link to create one.

• The user who is setting up connectivity must have permissions to perform the following EC2
operations:
• ec2:AuthorizeSecurityGroupEgress
• ec2:AuthorizeSecurityGroupIngress
• ec2:CreateSecurityGroup
• ec2:DescribeInstances
• ec2:DescribeNetworkInterfaces
• ec2:DescribeSecurityGroups
• ec2:ModifyNetworkInterfaceAttribute
• ec2:RevokeSecurityGroupEgress

When you set up a connection to an EC2 instance, Amazon RDS acts according to the current
configuration of the security groups associated with the Multi-AZ DB cluster and EC2 instance, as
described in the following table.

Current RDS security group configuration: There are one or more security groups associated with the Multi-AZ DB cluster with a name that matches the pattern rds-ec2-n (where n is a number). A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the EC2 instance as the source.
Current EC2 security group configuration: There are one or more security groups associated with the EC2 instance with a name that matches the pattern ec2-rds-n (where n is a number). A security group that matches the pattern hasn't been modified. This security group has only one outbound rule with the VPC security group of the Multi-AZ DB cluster as the source.
RDS action: Amazon RDS takes no action. A connection was already configured automatically between the EC2 instance and Multi-AZ DB cluster. Because a connection already exists between the EC2 instance and the RDS database, the security groups aren't modified.

Current RDS security group configuration: Either of the following conditions apply:
• There is no security group associated with the Multi-AZ DB cluster with a name that matches the pattern rds-ec2-n.
• There are one or more security groups associated with the Multi-AZ DB cluster with a name that matches the pattern rds-ec2-n. However, none of these security groups can be used for the connection with the EC2 instance. A security group can't be used if it doesn't have one inbound rule with the VPC security group of the EC2 instance as the source. A security group also can't be used if it has been modified. Examples of modifications include adding a rule or changing the port of an existing rule.
Current EC2 security group configuration: Either of the following conditions apply:
• There is no security group associated with the EC2 instance with a name that matches the pattern ec2-rds-n.
• There are one or more security groups associated with the EC2 instance with a name that matches the pattern ec2-rds-n. However, none of these security groups can be used for the connection with the Multi-AZ DB cluster. A security group can't be used if it doesn't have one outbound rule with the VPC security group of the Multi-AZ DB cluster as the source. A security group also can't be used if it has been modified.
RDS action: create new security groups

Current RDS security group configuration: There are one or more security groups associated with the Multi-AZ DB cluster with a name that matches the pattern rds-ec2-n. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the EC2 instance as the source.
Current EC2 security group configuration: There are one or more security groups associated with the EC2 instance with a name that matches the pattern ec2-rds-n. However, none of these security groups can be used for the connection with the Multi-AZ DB cluster. A security group can't be used if it doesn't have one outbound rule with the VPC security group of the Multi-AZ DB cluster as the source. A security group also can't be used if it has been modified.
RDS action: create new security groups

Current RDS security group configuration: There are one or more security groups associated with the Multi-AZ DB cluster with a name that matches the pattern rds-ec2-n. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the EC2 instance as the source.
Current EC2 security group configuration: A valid EC2 security group for the connection exists, but it is not associated with the EC2 instance. This security group has a name that matches the pattern ec2-rds-n. It hasn't been modified. It has only one outbound rule with the VPC security group of the Multi-AZ DB cluster as the source.
RDS action: associate EC2 security group

Current RDS security group configuration: Either of the following conditions apply:
• There is no security group associated with the Multi-AZ DB cluster with a name that matches the pattern rds-ec2-n.
• There are one or more security groups associated with the Multi-AZ DB cluster with a name that matches the pattern rds-ec2-n. However, none of these security groups can be used for the connection with the EC2 instance. A security group can't be used if it doesn't have one inbound rule with the VPC security group of the EC2 instance as the source. A security group also can't be used if it has been modified.
Current EC2 security group configuration: There are one or more security groups associated with the EC2 instance with a name that matches the pattern ec2-rds-n. A security group that matches the pattern hasn't been modified. This security group has only one outbound rule with the VPC security group of the Multi-AZ DB cluster as the source.
RDS action: create new security groups

RDS action: create new security groups

Amazon RDS takes the following actions:

• Creates a new security group that matches the pattern rds-ec2-n. This security group has an
inbound rule with the VPC security group of the EC2 instance as the source. This security group is
associated with the Multi-AZ DB cluster and allows the EC2 instance to access the Multi-AZ DB cluster.
• Creates a new security group that matches the pattern ec2-rds-n. This security group has an
outbound rule with the VPC security group of the Multi-AZ DB cluster as the source. This security

group is associated with the EC2 instance and allows the EC2 instance to send traffic to the Multi-AZ
DB cluster.

RDS action: associate EC2 security group

Amazon RDS associates the valid, existing EC2 security group with the EC2 instance. This security group
allows the EC2 instance to send traffic to the Multi-AZ DB cluster.
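
If you want to set up similar access yourself instead of using the automatic setup, a rough AWS CLI
equivalent of the rds-ec2-n ingress rule looks like the following sketch. The VPC ID, security group IDs,
and port are placeholders, and these are not the exact commands that RDS runs.

# Create a security group for the Multi-AZ DB cluster.
aws ec2 create-security-group \
    --group-name rds-ec2-1 \
    --description "Access to the DB cluster from my EC2 instance" \
    --vpc-id vpc-0abc123def4567890

# Allow inbound traffic on the database port from the EC2 instance's security group.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0111111111aaaaaaa \
    --protocol tcp \
    --port 3306 \
    --source-group sg-0222222222bbbbbbb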

Connecting an EC2 instance and a Multi-AZ DB cluster automatically


Before setting up a connection between an EC2 instance and an RDS database, make sure you meet the
requirements described in Overview of automatic connectivity with an EC2 instance (p. 386).

If you make changes to security groups after you configure connectivity, the changes might affect the
connection between the EC2 instance and the RDS database.
Note
You can only set up a connection between an EC2 instance and an RDS database automatically
by using the AWS Management Console. You can't set up a connection automatically with the
AWS CLI or RDS API.

To connect an EC2 instance and an RDS database automatically

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS database.
3. From Actions, choose Set up EC2 connection.

The Set up EC2 connection page appears.


4. On the Set up EC2 connection page, choose the EC2 instance.

If no EC2 instances exist in the same VPC, choose Create EC2 instance to create one. In this case,
make sure the new EC2 instance is in the same VPC as the RDS database.
5. Choose Continue.

The Review and confirm page appears.


6. On the Review and confirm page, review the changes that RDS will make to set up connectivity with
the EC2 instance.

If the changes are correct, choose Confirm and set up.

If the changes aren't correct, choose Previous or Cancel.

Viewing connected compute resources


You can use the AWS Management Console to view the compute resources that are connected to an RDS
database. The resources shown include compute resource connections that were set up automatically.
You can set up connectivity with compute resources automatically in the following ways:

• You can select the compute resource when you create the database.

For more information, see Creating an Amazon RDS DB instance (p. 300) and Creating a Multi-AZ DB
cluster (p. 508).


• You can set up connectivity between an existing database and a compute resource.

For more information, see Automatically connecting an EC2 instance and an RDS database (p. 388).

The listed compute resources don't include ones that were connected to the database manually. For
example, you can allow a compute resource to access a database manually by adding a rule to the VPC
security group associated with the database.

For a compute resource to be listed, the following conditions must apply:

• The name of the security group associated with the compute resource matches the pattern ec2-
rds-n (where n is a number).
• The security group associated with the compute resource has an outbound rule with the port range set
to the port that the RDS database uses.
• The security group associated with the compute resource has an outbound rule with the source set to a
security group associated with the RDS database.
• The name of the security group associated with the RDS database matches the pattern rds-ec2-n
(where n is a number).
• The security group associated with the RDS database has an inbound rule with the port range set to
the port that the RDS database uses.
• The security group associated with the RDS database has an inbound rule with the source set to a
security group associated with the compute resource.

To view compute resources connected to an RDS database

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the name of the RDS database.
3. On the Connectivity & security tab, view the compute resources in the Connected compute
resources.

Automatically connecting a Lambda function and a Multi-AZ DB cluster
You can use the RDS console to simplify setting up a connection between a Lambda function and a
Multi-AZ DB cluster. Often, your Multi-AZ DB cluster is in a private subnet within a VPC, and applications
can use the Lambda function to access your private Multi-AZ DB cluster.

The following image shows a direct connection between your Multi-AZ DB cluster and your Lambda
function.


You can set up the connection between your Lambda function and your database through RDS Proxy
to improve your database performance and resiliency. Often, Lambda functions make frequent, short
database connections that benefit from connection pooling that RDS Proxy offers. You can take
advantage of any IAM authentication that you already have for Lambda functions, instead of managing
database credentials in your Lambda application code. For more information, see Using Amazon RDS
Proxy (p. 1199).

You can use the console to automatically create a proxy for your connection. You can also select existing
proxies. The console updates the proxy security group to allow connections from your database and
Lambda function. You can input your database credentials or select the Secrets Manager secret you
require to access the database.
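If you want to see which RDS Proxy resources already exist in your account before choosing one in the
console, a minimal AWS CLI sketch:

aws rds describe-db-proxies \
--query "DBProxies[].[DBProxyName,Status]" --output table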

Topics
• Overview of automatic connectivity with a Lambda function (p. 532)


• Automatically connecting a Lambda function and a Multi-AZ DB cluster (p. 537)


• Viewing connected compute resources (p. 538)

Overview of automatic connectivity with a Lambda function


When you set up a connection between a Lambda function and a Multi-AZ DB cluster automatically,
Amazon RDS configures the VPC security group for your Lambda function and for your DB cluster.

The following are requirements for connecting a Lambda function with a Multi-AZ DB cluster:

• The Lambda function must exist in the same VPC as the Multi-AZ DB cluster.

If no Lambda function exists in the same VPC, the console provides a link to create one.
• The user who sets up connectivity must have permissions to perform the following Amazon RDS,
Amazon EC2, Lambda, Secrets Manager, IAM, and AWS KMS operations:
• Amazon RDS
• rds:CreateDBProxy
• rds:DescribeDBInstances
• rds:DescribeDBProxies
• rds:ModifyDBInstance
• rds:ModifyDBProxy
• rds:RegisterDBProxyTargets
• Amazon EC2
• ec2:AuthorizeSecurityGroupEgress
• ec2:AuthorizeSecurityGroupIngress
• ec2:CreateSecurityGroup
• ec2:DeleteSecurityGroup
• ec2:DescribeSecurityGroups
• ec2:RevokeSecurityGroupEgress
• ec2:RevokeSecurityGroupIngress
• Lambda
• lambda:CreateFunction
• lambda:ListFunctions
• lambda:UpdateFunctionConfiguration
• Secrets Manager
• secretsmanager:CreateSecret
• secretsmanager:DescribeSecret
• IAM
• iam:AttachRolePolicy
• iam:CreateRole
• iam:CreatePolicy
• AWS KMS
• kms:DescribeKey
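As an illustration of granting these permissions, the following hedged sketch creates and attaches a
customer managed policy with the AWS CLI. The policy file name, user name, and account ID are
hypothetical; the file would contain a policy document that lists the actions above.

aws iam create-policy \
--policy-name RdsLambdaConnectSetup \
--policy-document file://rds-lambda-connect-policy.json

aws iam attach-user-policy \
--user-name my-setup-user \
--policy-arn arn:aws:iam::123456789012:policy/RdsLambdaConnectSetup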

When you set up a connection between a Lambda function and a Multi-AZ DB cluster, Amazon RDS
configures the VPC security group for your function
and for your Multi-AZ DB cluster. If you use RDS
Proxy, then Amazon RDS also configures the VPC security group for the proxy. Amazon RDS acts
according to the current configuration of the security groups associated with the Multi-AZ DB cluster,
Lambda function, and proxy, as described in the following combinations of security group configurations
and resulting RDS actions.

Current RDS security group configuration: There are one or more security groups associated with the
Multi-AZ DB cluster with a name that matches the pattern rds-lambda-n (where n is a number), or the
TargetHealth of an associated proxy is AVAILABLE. A security group that matches the pattern hasn't been
modified. This security group has only one inbound rule with the VPC security group of the Lambda
function or proxy as the source.
Current Lambda security group configuration: There are one or more security groups associated with the
Lambda function with a name that matches the pattern lambda-rds-n or lambda-rdsproxy-n (where n is
a number). A security group that matches the pattern hasn't been modified. This security group has only
one outbound rule with either the VPC security group of the Multi-AZ DB cluster or the proxy as the
destination.
Current proxy security group configuration: There are one or more security groups associated with the
proxy with a name that matches the pattern rdsproxy-lambda-n (where n is a number). A security group
that matches the pattern hasn't been modified. This security group has inbound and outbound rules with
the VPC security groups of the Lambda function and the Multi-AZ DB cluster.
RDS action: Amazon RDS takes no action because the security groups of all resources follow the correct
naming pattern and have the right inbound and outbound rules.

Current RDS security group configuration: Either of the following conditions apply:
• There is no security group associated with the Multi-AZ DB cluster with a name that matches the
pattern rds-lambda-n or if the TargetHealth of an associated proxy is AVAILABLE.
• There are one or more security groups associated with the Multi-AZ DB cluster with a name that
matches the pattern rds-lambda-n or if the TargetHealth of an associated proxy is AVAILABLE.
However, Amazon RDS can't use any of these security groups for the connection with the Lambda
function.
Amazon RDS can't use a security group that doesn't have one inbound rule with the VPC security group
of the Lambda function or proxy as the source. Amazon RDS also can't use a security group that has been
modified. Examples of modifications include adding a rule or changing the port of an existing rule.
Current Lambda security group configuration: Either of the following conditions apply:
• There is no security group associated with the Lambda function with a name that matches the pattern
lambda-rds-n or lambda-rdsproxy-n.
• There are one or more security groups associated with the Lambda function with a name that matches
the pattern lambda-rds-n or lambda-rdsproxy-n. However, Amazon RDS can't use any of these security
groups for the connection with the Multi-AZ DB cluster.
Amazon RDS can't use a security group if it doesn't have one outbound rule with the VPC security group
of the Multi-AZ DB cluster or proxy as the destination. Amazon RDS also can't use a security group that
has been modified.
Current proxy security group configuration: Either of the following conditions apply:
• There is no security group associated with the proxy with a name that matches the pattern
rdsproxy-lambda-n.
• There are one or more security groups associated with the proxy with a name that matches
rdsproxy-lambda-n. However, Amazon RDS can't use any of these security groups for the connection
with the Multi-AZ DB cluster or Lambda function.
Amazon RDS can't use a security group that doesn't have inbound and outbound rules with the VPC
security group of the Multi-AZ DB cluster and the Lambda function. Amazon RDS also can't use a security
group that has been modified.
RDS action: create new security groups

Current RDS security group configuration: There are one or more security groups associated with the
Multi-AZ DB cluster with a name that matches the pattern rds-lambda-n, or the TargetHealth of an
associated proxy is AVAILABLE. A security group that matches the pattern hasn't been modified. This
security group has only one inbound rule with the VPC security group of the Lambda function or proxy
as the source.
Current Lambda security group configuration: There are one or more security groups associated with
the Lambda function with a name that matches the pattern lambda-rds-n or lambda-rdsproxy-n.
However, Amazon RDS can't use any of these security groups for the connection with the Multi-AZ DB
cluster. Amazon RDS can't use a security group that doesn't have one outbound rule with the VPC
security group of the Multi-AZ DB cluster or proxy as the destination. Amazon RDS also can't use a
security group that has been modified.
Current proxy security group configuration: There are one or more security groups associated with the
proxy with a name that matches the pattern rdsproxy-lambda-n. However, Amazon RDS can't use any of
these security groups for the connection with the Multi-AZ DB cluster or Lambda function. Amazon RDS
can't use a security group that doesn't have inbound and outbound rules with the VPC security group of
the Multi-AZ DB cluster and the Lambda function. Amazon RDS also can't use a security group that has
been modified.
RDS action: create new security groups

Current RDS security group configuration: There are one or more security groups associated with the
Multi-AZ DB cluster with a name that matches the pattern rds-lambda-n, or the TargetHealth of an
associated proxy is AVAILABLE. A security group that matches the pattern hasn't been modified. This
security group has only one inbound rule with the VPC security group of the Lambda function or proxy
as the source.
Current Lambda security group configuration: A valid Lambda security group for the connection exists,
but it is not associated with the Lambda function. This security group has a name that matches the
pattern lambda-rds-n or lambda-rdsproxy-n. It hasn't been modified. It has only one outbound rule with
the VPC security group of the Multi-AZ DB cluster or proxy as the destination.
Current proxy security group configuration: A valid proxy security group for the connection exists, but it
is not associated with the proxy. This security group has a name that matches the pattern
rdsproxy-lambda-n. It hasn't been modified. It has inbound and outbound rules with the VPC security
group of the Multi-AZ DB cluster and the Lambda function.
RDS action: associate Lambda security group

Current RDS security group configuration: Either of the following conditions apply:
• There is no security group associated with the Multi-AZ DB cluster with a name that matches the
pattern rds-lambda-n or if the TargetHealth of an associated proxy is AVAILABLE.
• There are one or more security groups associated with the Multi-AZ DB cluster with a name that
matches the pattern rds-lambda-n or if the TargetHealth of an associated proxy is AVAILABLE.
However, Amazon RDS can't use any of these security groups for the connection with the Lambda
function or proxy.
Amazon RDS can't use a security group that doesn't have one inbound rule with the VPC security group
of the Lambda function or proxy as the source. Amazon RDS also can't use a security group that has been
modified.
Current Lambda security group configuration: There are one or more security groups associated with
the Lambda function with a name that matches the pattern lambda-rds-n or lambda-rdsproxy-n. A
security group that matches the pattern hasn't been modified. This security group has only one
outbound rule with the VPC security group of the Multi-AZ DB cluster or proxy as the destination.
Current proxy security group configuration: There are one or more security groups associated with the
proxy with a name that matches the pattern rdsproxy-lambda-n. A security group that matches the
pattern hasn't been modified. This security group has inbound and outbound rules with the VPC security
group of the Multi-AZ DB cluster and the Lambda function.
RDS action: create new security groups

Current RDS security group configuration: There are one or more security groups associated with the
Multi-AZ DB cluster with a name that matches the pattern rds-rdsproxy-n (where n is a number).
Current Lambda security group configuration: Either of the following conditions apply:
• There is no security group associated with the Lambda function with a name that matches the pattern
lambda-rds-n or lambda-rdsproxy-n.
• There are one or more security groups associated with the Lambda function with a name that matches
the pattern lambda-rds-n or lambda-rdsproxy-n. However, Amazon RDS can't use any of these security
groups for the connection with the Multi-AZ DB cluster.
Amazon RDS can't use a security group that doesn't have one outbound rule with the VPC security group
of the Multi-AZ DB cluster or proxy as the destination. Amazon RDS also can't use a security group that
has been modified.
Current proxy security group configuration: Either of the following conditions apply:
• There is no security group associated with the proxy with a name that matches the pattern
rdsproxy-lambda-n.
• There are one or more security groups associated with the proxy with a name that matches
rdsproxy-lambda-n. However, Amazon RDS can't use any of these security groups for the connection
with the Multi-AZ DB cluster or Lambda function.
Amazon RDS can't use a security group that doesn't have inbound and outbound rules with the VPC
security group of the Multi-AZ DB cluster and the Lambda function. Amazon RDS also can't use a security
group that has been modified.
RDS action: create new security groups

RDS action: create new security groups

Amazon RDS takes the following actions:

• Creates a new security group that matches the pattern rds-lambda-n. This security group has an
inbound rule with the VPC security group of the Lambda function or proxy as the source. This security
group is associated with the Multi-AZ DB cluster and allows the function or proxy to access the Multi-
AZ DB cluster.
• Creates a new security group that matches the pattern lambda-rds-n. This security group has an
outbound rule with the VPC security group of the Multi-AZ DB cluster or proxy as the destination. This
security group is associated with the Lambda function and allows the Lambda function to send traffic
to the Multi-AZ DB cluster or send traffic through a proxy.
• Creates a new security group that matches the pattern rdsproxy-lambda-n. This security group has
inbound and outbound rules with the VPC security group of the Multi-AZ DB cluster and the Lambda
function.
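To inspect the rules of one of these security groups after RDS creates them, you can describe its rules
with the AWS CLI; the security group ID below is a placeholder for the ID shown in the console:

aws ec2 describe-security-group-rules \
--filters Name=group-id,Values=sg-0123456789abcdef0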


RDS action: associate Lambda security group

Amazon RDS associates the valid, existing Lambda security group with the Lambda function. This
security group allows the function to send traffic to the Multi-AZ DB cluster or send traffic through a
proxy.

Automatically connecting a Lambda function and a Multi-AZ DB cluster


You can use the Amazon RDS console to automatically connect a Lambda function to your Multi-AZ DB
cluster. This simplifies the process of setting up a connection between these resources.

You can also use RDS Proxy to include a proxy in your connection. Lambda functions make frequent
short database connections that benefit from the connection pooling that RDS Proxy offers. You can also
use any IAM authentication that you've already set up for your Lambda function, instead of managing
database credentials in your Lambda application code.

You can connect an existing Multi-AZ DB cluster to new and existing Lambda functions using the Set up
Lambda connection page. The setup process automatically sets up the required security groups for you.

Before setting up a connection between a Lambda function and a Multi-AZ DB cluster, make sure that:

• Your Lambda function and Multi-AZ DB cluster are in the same VPC.
• You have the right permissions for your user account. For more information about the requirements,
see Overview of automatic connectivity with a Lambda function (p. 393).

If you change security groups after you configure connectivity, the changes might affect the connection
between the Lambda function and the Multi-AZ DB cluster.
Note
You can automatically set up a connection between a Multi-AZ DB cluster and a Lambda
function only in the AWS Management Console. To connect a Lambda function, all instances in
the Multi-AZ DB cluster must be in the Available state.

To automatically connect a Lambda function and a Multi-AZ DB cluster


1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the Multi-AZ DB cluster that you want to
connect to a Lambda function.
3. For Actions, choose Set up Lambda connection.
4. On the Set up Lambda connection page, under Select Lambda function, do either of the following:
• If you have an existing Lambda function in the same VPC as your Multi-AZ DB cluster, choose
Choose existing function, and then choose the function.
• If you don't have a Lambda function in the same VPC, choose Create new function, and then
enter a Function name. The default runtime is set to Node.js 18. You can modify the settings for
your new Lambda function in the Lambda console after you complete the connection setup.
5. (Optional) Under RDS Proxy, select Connect using RDS Proxy, and then do any of the following:
• If you have an existing proxy that you want to use, choose Choose existing proxy, and then
choose the proxy.
• If you don't have a proxy, and you want Amazon RDS to automatically create one for you,
choose Create new proxy. Then, for Database credentials, do either of the following:

a. Choose Database username and password, and then enter the Username and Password
for your Multi-AZ DB cluster.
b. Choose Secrets Manager secret. Then, for Select secret, choose an AWS Secrets Manager
secret. If you don't have a Secrets Manager secret, choose Create new Secrets Manager
secret to create a new secret. After you create the secret, for Select secret, choose the new
secret.

After you create the new proxy, choose Choose existing proxy, and then choose the proxy. Note
that it might take some time for your proxy to be available for connection.
6. (Optional) Expand Connection summary and verify the highlighted updates for your resources.
7. Choose Set up.

After you confirm the setup, Amazon RDS begins the process of connecting your Lambda function, RDS
Proxy (if you used a proxy), and Multi-AZ DB cluster. The console shows the Connection details dialog
box, which lists the security group changes that allow connections between your resources.

Viewing connected compute resources


You can use the AWS Management Console to view the compute resources that are connected to your
Multi-AZ DB cluster. The resources shown include compute resource connections that Amazon RDS set up
automatically.

The listed compute resources don't include those that are manually connected to the Multi-AZ DB
cluster. For example, you can allow a compute resource to access your Multi-AZ DB cluster manually by
adding a rule to your VPC security group associated with the cluster.

For the console to list a Lambda function, the following conditions must apply:

• The name of the security group associated with the compute resource matches the pattern lambda-
rds-n or lambda-rdsproxy-n (where n is a number).
• The security group associated with the compute resource has an outbound rule with the port range set
to the port of the Multi-AZ DB cluster or an associated proxy. The destination for the outbound rule
must be set to a security group associated with the Multi-AZ DB cluster or an associated proxy.
• The name of the security group attached to the proxy associated with your database matches the
pattern rds-rdsproxy-n (where n is a number).
• The security group associated with the function has an outbound rule with the port set to the port
that the Multi-AZ DB cluster or associated proxy uses. The destination must be set to a security group
associated with the Multi-AZ DB cluster or associated proxy.
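One way to check which VPC security groups are currently attached to your Lambda function is with the
AWS CLI. This is a minimal sketch; my-function is a placeholder for your function name:

aws lambda get-function-configuration \
--function-name my-function \
--query "VpcConfig.SecurityGroupIds"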

To view compute resources automatically connected to a Multi-AZ DB cluster

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the Multi-AZ DB cluster.
3. On the Connectivity & security tab, view the compute resources under Connected compute
resources.


Modifying a Multi-AZ DB cluster


A Multi-AZ DB cluster has a writer DB instance and two reader DB instances in three separate Availability
Zones. Multi-AZ DB clusters provide high availability, increased capacity for read workloads, and lower
latency when compared to Multi-AZ deployments. For more information about Multi-AZ DB clusters, see
Multi-AZ DB cluster deployments (p. 499).

You can modify a Multi-AZ DB cluster to change its settings. You can also perform operations on a Multi-
AZ DB cluster, such as taking a snapshot of it. However, you can't modify the DB instances in a Multi-AZ
DB cluster, and the only supported operation is rebooting a DB instance.
Note
Multi-AZ DB clusters are supported only for the MySQL and PostgreSQL DB engines.

You can modify a Multi-AZ DB cluster using the AWS Management Console, the AWS CLI, or the RDS API.

Console
To modify a Multi-AZ DB cluster

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the Multi-AZ DB cluster that you want to
modify.
3. Choose Modify. The Modify DB cluster page appears.
4. Change any of the settings that you want. For information about each setting, see Settings for
modifying Multi-AZ DB clusters (p. 540).
5. When all the changes are as you want them, choose Continue and check the summary of
modifications.
6. (Optional) Choose Apply immediately to apply the changes immediately. Choosing this option can
cause downtime in some cases. For more information, see Applying changes immediately (p. 540).
7. On the confirmation page, review your changes. If they're correct, choose Modify DB cluster to save
your changes.

Or choose Back to edit your changes or Cancel to cancel your changes.

AWS CLI
To modify a Multi-AZ DB cluster by using the AWS CLI, call the modify-db-cluster command. Specify the
DB cluster identifier and the values for the options that you want to modify. For information about each
option, see Settings for modifying Multi-AZ DB clusters (p. 540).

Example
The following code modifies my-multi-az-dbcluster by setting the backup retention period to
1 week (7 days). The code turns on deletion protection by using --deletion-protection. To turn
off deletion protection, use --no-deletion-protection. The changes are applied during the next
maintenance window by using --no-apply-immediately. Use --apply-immediately to apply the
changes immediately. For more information, see Applying changes immediately (p. 540).

For Linux, macOS, or Unix:

aws rds modify-db-cluster \
--db-cluster-identifier my-multi-az-dbcluster \
--backup-retention-period 7 \
--deletion-protection \
--no-apply-immediately


For Windows:

aws rds modify-db-cluster ^
--db-cluster-identifier my-multi-az-dbcluster ^
--backup-retention-period 7 ^
--deletion-protection ^
--no-apply-immediately

RDS API
To modify a Multi-AZ DB cluster by using the Amazon RDS API, call the ModifyDBCluster operation.
Specify the DB cluster identifier, and the parameters for the settings that you want to modify. For
information about each parameter, see Settings for modifying Multi-AZ DB clusters (p. 540).

Applying changes immediately


When you modify a Multi-AZ DB cluster, you can apply the changes immediately. To apply changes
immediately, you choose the Apply Immediately option in the AWS Management Console. Or you use
the --apply-immediately option when calling the AWS CLI or set the ApplyImmediately parameter
to true when using the Amazon RDS API.

If you don't choose to apply changes immediately, the changes are put into the pending modifications
queue. During the next maintenance window, any pending changes in the queue are applied. If you
choose to apply changes immediately, your new changes and any changes in the pending modifications
queue are applied.
Important
If any of the pending modifications require the DB cluster to be temporarily unavailable
(downtime), choosing the apply immediately option can cause unexpected downtime.
When you choose to apply a change immediately, any pending modifications are also applied
immediately, instead of during the next maintenance window.
If you don't want a pending change to be applied in the next maintenance window, you
can modify the DB cluster to revert the change. You can do this by using the AWS CLI and
specifying the --apply-immediately option.

Changes to some database settings are applied immediately, even if you choose to defer your changes.
To see how the different database settings interact with the apply immediately setting, see Settings for
modifying Multi-AZ DB clusters (p. 540).
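To see which modifications are still queued for a Multi-AZ DB cluster before deciding whether to apply
them immediately, you can inspect the cluster's pending modified values with the AWS CLI. This is a
minimal sketch, assuming your CLI version returns the PendingModifiedValues field for DB clusters:

aws rds describe-db-clusters \
--db-cluster-identifier my-multi-az-dbcluster \
--query "DBClusters[0].PendingModifiedValues"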

Settings for modifying Multi-AZ DB clusters


For details about settings that you can use to modify a Multi-AZ DB cluster, see the following table. For
more information about the AWS CLI options, see modify-db-cluster. For more information about the
RDS API parameters, see ModifyDBCluster.

Allocated storage
Setting description: The amount of storage to allocate for each DB instance in your DB cluster (in
gibibytes). For more information, see Amazon RDS DB instance storage (p. 101).
CLI option: --allocated-storage
RDS API parameter: AllocatedStorage
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you
don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: Downtime doesn't occur during this change.

Auto minor version upgrade
Setting description: Enable auto minor version upgrade to have your DB cluster receive preferred minor
DB engine version upgrades automatically when they become available. Amazon RDS performs
automatic minor version upgrades in the maintenance window.
CLI option: --auto-minor-version-upgrade | --no-auto-minor-version-upgrade
RDS API parameter: AutoMinorVersionUpgrade
When the change occurs: The change occurs immediately. This setting ignores the apply immediately
setting.
Downtime notes: Downtime doesn't occur during this change.

Backup retention period
Setting description: The number of days that you want automatic backups of your DB cluster to be
retained. For any nontrivial DB cluster, set this value to 1 or greater. For more information, see Working
with backups (p. 591).
CLI option: --backup-retention-period
RDS API parameter: BackupRetentionPeriod
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you
don't choose to apply the change immediately, and you change the setting from a nonzero value to
another nonzero value, the change is applied asynchronously, as soon as possible. Otherwise, the change
occurs during the next maintenance window.
Downtime notes: Downtime occurs if you change from 0 to a nonzero value, or from a nonzero value
to 0.

Backup window
Setting description: The time period during which Amazon RDS automatically takes a backup of your DB
cluster. Unless you have a specific time that you want to have your database backed up, use the default
of No preference. For more information, see Working with backups (p. 591).
CLI option: --preferred-backup-window
RDS API parameter: PreferredBackupWindow
When the change occurs: The change is applied asynchronously, as soon as possible.
Downtime notes: Downtime doesn't occur during this change.

Copy tags to snapshots
Setting description: This option copies any DB cluster tags to a DB snapshot when you create a
snapshot. For more information, see Tagging Amazon RDS resources (p. 461).
CLI option: --copy-tags-to-snapshot | --no-copy-tags-to-snapshot
RDS API parameter: CopyTagsToSnapshot
When the change occurs: The change occurs immediately. This setting ignores the apply immediately
setting.
Downtime notes: Downtime doesn't occur during this change.

Database authentication
Setting description: For Multi-AZ DB clusters, only Password authentication is supported.
CLI option and RDS API parameter: None, because password authentication is the default.
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you
don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: Downtime doesn't occur during this change.

DB cluster identifier
Setting description: The DB cluster identifier. This value is stored as a lowercase string. When you
change the DB cluster identifier, the DB cluster endpoint changes. The identifiers and endpoints of the
DB instances in the DB cluster also change. The new DB cluster name must be unique. The maximum
length is 63 characters. The names of the DB instances in the DB cluster are changed to correspond
with the new name of the DB cluster. A new DB instance name can't be the same as the name of an
existing DB instance. For example, if you change the DB cluster name to maz, a DB instance name might
be changed to maz-instance-1. In this case, there can't be an existing DB instance named
maz-instance-1. For more information, see Renaming a Multi-AZ DB cluster (p. 550).
CLI option: --new-db-cluster-identifier
RDS API parameter: NewDBClusterIdentifier
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you
don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: An outage doesn't occur during this change.

DB cluster instance class
Setting description: The compute and memory capacity of each DB instance in the Multi-AZ DB cluster,
for example db.r6gd.xlarge. If possible, choose a DB instance class large enough that a typical query
working set can be held in memory. When working sets are held in memory, the system can avoid
writing to disk, which improves performance. Currently, Multi-AZ DB clusters only support db.m6gd and
db.r6gd DB instance classes. For more information about DB instance classes, see DB instance
classes (p. 11).
CLI option: --db-cluster-instance-class
RDS API parameter: DBClusterInstanceClass
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you
don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: Downtime occurs during this change.

DB cluster parameter group
Setting description: The DB cluster parameter group that you want associated with the DB cluster. For
more information, see Working with parameter groups for Multi-AZ DB clusters (p. 503).
CLI option: --db-cluster-parameter-group-name
RDS API parameter: DBClusterParameterGroupName
When the change occurs: The parameter group change occurs immediately.
Downtime notes: An outage doesn't occur during this change. When you change the parameter group,
changes to some parameters are applied to the DB instances in the Multi-AZ DB cluster immediately
without a reboot. Changes to other parameters are applied only after the DB instances are rebooted.

DB engine version
Setting description: The version of database engine that you want to use.
CLI option: --engine-version
RDS API parameter: EngineVersion
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you
don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: An outage occurs during this change.

Deletion protection
Setting description: Enable deletion protection to prevent your DB cluster from being deleted. For more
information, see Deleting a DB instance (p. 489).
CLI option: --deletion-protection | --no-deletion-protection
RDS API parameter: DeletionProtection
When the change occurs: The change occurs immediately. This setting ignores the apply immediately
setting.
Downtime notes: An outage doesn't occur during this change.

Maintenance window
Setting description: The 30-minute window in which pending modifications to your DB cluster are
applied. If the time period doesn't matter, choose No preference. For more information, see The Amazon
RDS maintenance window (p. 423).
CLI option: --preferred-maintenance-window
RDS API parameter: PreferredMaintenanceWindow
When the change occurs: The change occurs immediately. This setting ignores the apply immediately
setting.
Downtime notes: If there are one or more pending actions that cause downtime, and the maintenance
window is changed to include the current time, those pending actions are applied immediately and
downtime occurs.

Manage master credentials in AWS Secrets Manager
Setting description: Select Manage master credentials in AWS Secrets Manager to manage the master
user password in a secret in Secrets Manager. Optionally, choose a KMS key to use to protect the secret.
Choose from the KMS keys in your account, or enter the key from a different account. If RDS is already
managing the master user password for the DB cluster, you can rotate the master user password by
choosing Rotate secret immediately. For more information, see Password management with Amazon
RDS and AWS Secrets Manager (p. 2568).
CLI options: --manage-master-user-password | --no-manage-master-user-password,
--master-user-secret-kms-key-id, --rotate-master-user-password | --no-rotate-master-user-password
RDS API parameters: ManageMasterUserPassword, MasterUserSecretKmsKeyId,
RotateMasterUserPassword
When the change occurs: If you are turning on or turning off automatic master user password
management, the change occurs immediately. This change ignores the apply immediately setting. If you
are rotating the master user password, you must specify that the change is applied immediately.
Downtime notes: Downtime doesn't occur during this change.

New master password
Setting description: The password for your master user account.
CLI option: --master-user-password
RDS API parameter: MasterUserPassword
When the change occurs: The change is applied asynchronously, as soon as possible. This setting ignores
the apply immediately setting.
Downtime notes: Downtime doesn't occur during this change.

Provisioned IOPS
Setting description: The amount of Provisioned IOPS (input/output operations per second) to be initially
allocated for the DB cluster. This setting is available only if Provisioned IOPS (io1) is selected as the
storage type. For more information, see Provisioned IOPS SSD storage (p. 104).
CLI option: --iops
RDS API parameter: Iops
When the change occurs: If you choose to apply the change immediately, it occurs immediately. If you
don't choose to apply the change immediately, it occurs during the next maintenance window.
Downtime notes: Downtime doesn't occur during this change.

Public access
Setting description: Publicly accessible to give the DB cluster a public IP address, meaning that it's
accessible outside its virtual private cloud (VPC). To be publicly accessible, the DB cluster also has to be
in a public subnet in the VPC. Not publicly accessible to make the DB cluster accessible only from inside
the VPC. For more information, see Hiding a DB instance in a VPC from the internet (p. 2695). To connect
to a DB cluster from outside of its VPC, the DB cluster must be publicly accessible. Also, access must be
granted using the inbound rules of the DB cluster's security group, and other requirements must be met.
For more information, see Can't connect to Amazon RDS DB instance (p. 2727). If your DB cluster isn't
publicly accessible, you can use an AWS Site-to-Site VPN connection or an AWS Direct Connect
connection to access it from a private network. For more information, see Internetwork traffic
privacy (p. 2605).
CLI option: --publicly-accessible | --no-publicly-accessible
RDS API parameter: PubliclyAccessible
When the change occurs: The change occurs immediately. This setting ignores the apply immediately
setting.
Downtime notes: An outage doesn't occur during this change.

VPC security group
Setting description: The security groups to associate with the DB cluster. For more information, see
Overview of VPC security groups (p. 2680).
CLI option: --vpc-security-group-ids
RDS API parameter: VpcSecurityGroupIds
When the change occurs: The change is applied asynchronously, as soon as possible. This setting ignores
the apply immediately setting.
Downtime notes: An outage doesn't occur during this change.

Settings that don't apply when modifying Multi-AZ DB clusters


The following settings in the AWS CLI command modify-db-cluster and the RDS API operation
ModifyDBCluster don't apply to Multi-AZ DB clusters.

You also can't modify these settings for Multi-AZ DB clusters in the console.

• --backtrack-window (RDS API: BacktrackWindow)
• --cloudwatch-logs-export-configuration (RDS API: CloudwatchLogsExportConfiguration)
• --copy-tags-to-snapshot | --no-copy-tags-to-snapshot (RDS API: CopyTagsToSnapshot)
• --db-instance-parameter-group-name (RDS API: DBInstanceParameterGroupName)
• --domain (RDS API: Domain)
• --domain-iam-role-name (RDS API: DomainIAMRoleName)
• --enable-global-write-forwarding | --no-enable-global-write-forwarding (RDS API: EnableGlobalWriteForwarding)
• --enable-http-endpoint | --no-enable-http-endpoint (RDS API: EnableHttpEndpoint)
• --enable-iam-database-authentication | --no-enable-iam-database-authentication (RDS API: EnableIAMDatabaseAuthentication)
• --option-group-name (RDS API: OptionGroupName)
• --port (RDS API: Port)
• --scaling-configuration (RDS API: ScalingConfiguration)
• --storage-type (RDS API: StorageType)


Renaming a Multi-AZ DB cluster


You can rename a Multi-AZ DB cluster by using the AWS Management Console, the AWS CLI modify-
db-cluster command, or the Amazon RDS API ModifyDBCluster operation. Renaming a Multi-AZ DB
cluster can have significant effects. The following is a list of considerations before you rename a Multi-AZ
DB cluster.

• When you rename a Multi-AZ DB cluster, the cluster endpoints for the Multi-AZ DB cluster change.
These endpoints change because they include the name you assigned to the Multi-AZ DB cluster. You
can redirect traffic from an old endpoint to a new one. For more information about Multi-AZ DB cluster
endpoints, see Connecting to a Multi-AZ DB cluster (p. 522).
• When you rename a Multi-AZ DB cluster, the old DNS name that was used by the Multi-AZ DB cluster
is deleted, although it could remain cached for a few minutes. The new DNS name for the renamed
Multi-AZ DB cluster becomes effective in about two minutes. The renamed Multi-AZ DB cluster isn't
available until the new name becomes effective.
• You can't use an existing Multi-AZ DB cluster name when renaming a cluster.
• Metrics and events associated with the name of a Multi-AZ DB cluster are maintained if you reuse a DB
cluster name.
• Multi-AZ DB cluster tags remain with the Multi-AZ DB cluster, regardless of renaming.
• DB cluster snapshots are retained for a renamed Multi-AZ DB cluster.

Note
A Multi-AZ DB cluster is an isolated database environment running in the cloud. A Multi-AZ DB
cluster can host multiple databases. For information about changing a database name, see the
documentation for your DB engine.

Renaming to replace an existing Multi-AZ DB cluster


The most common scenarios for renaming a Multi-AZ DB cluster include restoring data from a DB cluster
snapshot or performing point-in-time recovery (PITR). By renaming the Multi-AZ DB cluster, you can
replace the Multi-AZ DB cluster without changing any application code that references the Multi-AZ DB
cluster. In these cases, complete the following steps:

1. Stop all traffic going to the Multi-AZ DB cluster. You can redirect traffic from accessing the databases
on the Multi-AZ DB cluster, or choose another way to prevent traffic from accessing your databases on
the Multi-AZ DB cluster.
2. Rename the existing Multi-AZ DB cluster.
3. Create a new Multi-AZ DB cluster by restoring from a DB cluster snapshot or recovering to a point in
time. Then, give the new Multi-AZ DB cluster the name of the previous Multi-AZ DB cluster.

If you delete the old Multi-AZ DB cluster, you are responsible for deleting any unwanted DB cluster
snapshots of the old Multi-AZ DB cluster.
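For example, a hedged sketch of steps 2 and 3 with the AWS CLI, assuming your applications expect the
name mymultiazdbcluster and that you restore from a snapshot named mydbclustersnapshot; adjust the
engine, instance class, and any network settings to match your environment:

aws rds modify-db-cluster \
--db-cluster-identifier mymultiazdbcluster \
--new-db-cluster-identifier mymultiazdbcluster-old \
--apply-immediately

aws rds restore-db-cluster-from-snapshot \
--db-cluster-identifier mymultiazdbcluster \
--snapshot-identifier mydbclustersnapshot \
--engine mysql \
--db-cluster-instance-class db.r6gd.xlarge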

Console
To rename a Multi-AZ DB cluster

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the Multi-AZ DB cluster that you want to rename.
4. Choose Modify.


5. In Settings, enter a new name for DB cluster identifier.


6. Choose Continue.
7. To apply the changes immediately, choose Apply immediately. Choosing this option can cause an
outage in some cases. For more information, see Applying changes immediately (p. 540).
8. On the confirmation page, review your changes. If they are correct, choose Modify cluster to save
your changes.

Alternatively, choose Back to edit your changes, or choose Cancel to discard your changes.

AWS CLI
To rename a Multi-AZ DB cluster, use the AWS CLI command modify-db-cluster. Provide the current
--db-cluster-identifier value and --new-db-cluster-identifier parameter with the new
name of the Multi-AZ DB cluster.

Example

For Linux, macOS, or Unix:

aws rds modify-db-cluster \
--db-cluster-identifier DBClusterIdentifier \
--new-db-cluster-identifier NewDBClusterIdentifier

For Windows:

aws rds modify-db-cluster ^
--db-cluster-identifier DBClusterIdentifier ^
--new-db-cluster-identifier NewDBClusterIdentifier

RDS API
To rename a Multi-AZ DB cluster, call the Amazon RDS API operation ModifyDBCluster with the
following parameters:

• DBClusterIdentifier – The existing name of the DB cluster.


• NewDBClusterIdentifier – The new name of the DB cluster.


Rebooting a Multi-AZ DB cluster and reader DB instances
You might need to reboot your Multi-AZ DB cluster, usually for maintenance reasons. For example, if you
make certain modifications or change the DB cluster parameter group associated with a DB cluster, you
reboot the DB cluster. Doing so causes the changes to take effect.

If a DB cluster isn't using the latest changes to its associated DB cluster parameter group, the AWS
Management Console shows the DB cluster parameter group with a status of pending-reboot. The
pending-reboot parameter group status doesn't result in an automatic reboot during the next
maintenance window. To apply the latest parameter changes to that DB cluster, manually reboot the DB
cluster. For more information about parameter groups, see Working with parameter groups for Multi-AZ
DB clusters (p. 503).

Rebooting a DB cluster restarts the database engine service. Rebooting a DB cluster results in a
momentary outage, during which the DB cluster status is set to rebooting.

You can't reboot your DB cluster if it isn't in the Available state. Your database can be unavailable for
several reasons, such as an in-progress backup, a previously requested modification, or a maintenance-
window action.

The time required to reboot your DB cluster depends on the crash recovery process, the database activity
at the time of reboot, and the behavior of your specific DB cluster. To improve the reboot time, we
recommend that you reduce database activity as much as possible during the reboot process. Reducing
database activity reduces rollback activity for in-transit transactions.
Important
Multi-AZ DB clusters don't support reboot with a failover. When you reboot the writer instance
of a Multi-AZ DB cluster, it doesn't affect the reader DB instances in that DB cluster and no
failover occurs. When you reboot a reader DB instance, no failover occurs. To fail over a Multi-AZ
DB cluster, choose Failover in the console, call the AWS CLI command failover-db-cluster,
or call the API operation FailoverDBCluster.
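For example, to manually fail over the Multi-AZ DB cluster named mymultiazdbcluster by using the AWS
CLI command mentioned in the note:

aws rds failover-db-cluster --db-cluster-identifier mymultiazdbcluster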

Console

To reboot a DB cluster

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the Multi-AZ DB cluster that you want to
reboot.
3. For Actions, choose Reboot.

The Reboot DB cluster page appears.


4. Choose Reboot to reboot your DB cluster.

Or choose Cancel.

AWS CLI
To reboot a Multi-AZ DB cluster by using the AWS CLI, call the reboot-db-cluster command.

aws rds reboot-db-cluster --db-cluster-identifier mymultiazdbcluster


RDS API
To reboot a Multi-AZ DB cluster by using the Amazon RDS API, call the RebootDBCluster operation.


Working with Multi-AZ DB cluster read replicas


A DB cluster read replica is a special type of cluster that you create from a source DB instance. After
you create a read replica, any updates made to the primary DB instance are asynchronously copied to
the Multi-AZ DB cluster read replica. You can reduce the load on your primary DB instance by routing
read queries from your applications to the read replica. Using read replicas, you can elastically scale out
beyond the capacity constraints of a single DB instance for read-heavy database workloads.

You can also create one or more DB instance read replicas from a Multi-AZ DB cluster. DB instance read
replicas let you scale beyond the compute or I/O capacity of the source Multi-AZ DB cluster by directing
excess read traffic to the read replicas. Currently, you can't create a Multi-AZ DB cluster read replica from
an existing Multi-AZ DB cluster.

Topics
• Migrating to a Multi-AZ DB cluster using a read replica (p. 554)
• Creating a DB instance read replica from a Multi-AZ DB cluster (p. 557)

Migrating to a Multi-AZ DB cluster using a read replica


To migrate a Single-AZ deployment or Multi-AZ DB instance deployment to a Multi-AZ DB cluster
deployment with reduced downtime, you can create a Multi-AZ DB cluster read replica. For the source,
you specify the DB instance in the Single-AZ deployment or the primary DB instance in the Multi-AZ DB
instance deployment. The DB instance can process write transactions during the migration to a Multi-AZ
DB cluster.

Consider the following before you create a Multi-AZ DB cluster read replica:

• The source DB instance must be on a version that supports Multi-AZ DB clusters. For more information,
see Multi-AZ DB clusters (p. 147).
• The Multi-AZ DB cluster read replica must be on the same major version as its source, and the same or
higher minor version.
• You must turn on automatic backups on the source DB instance by setting the backup retention period
to a value other than 0.
• The allocated storage of the source DB instance must be 100 GiB or higher.
• For RDS for MySQL, both the gtid-mode and enforce_gtid_consistency parameters must be set
to ON for the source DB instance. You must use a custom parameter group, not the default parameter
group. For more information, see the section called “Working with DB parameter groups” (p. 349).
• An active, long-running transaction can slow the process of creating the read replica. We recommend
that you wait for long-running transactions to complete before creating a read replica.
• If you delete the source DB instance for a Multi-AZ DB cluster read replica, the read replica is promoted
to a standalone Multi-AZ DB cluster.
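Before you create the read replica, you can verify the GTID-related prerequisite above on the source DB
instance's custom parameter group with the AWS CLI. This is a minimal sketch; mycustomparametergroup
is a placeholder for your parameter group name:

aws rds describe-db-parameters \
--db-parameter-group-name mycustomparametergroup \
--query "Parameters[?ParameterName=='gtid-mode' || ParameterName=='enforce_gtid_consistency'].[ParameterName,ParameterValue]"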

Creating and promoting the Multi-AZ DB cluster read replica


You can create and promote a Multi-AZ DB cluster read replica using the AWS Management Console,
AWS CLI, or RDS API.
Note
We strongly recommend that you create all read replicas in the same virtual private cloud (VPC)
based on Amazon VPC of the source DB instance.
If you create a read replica in a different VPC from the source DB instance, classless inter-domain
routing (CIDR) ranges can overlap between the replica and the RDS system. CIDR overlap makes
the replica unstable, which can negatively impact applications connecting to it. If you receive an
error when creating the read replica, choose a different destination DB subnet group. For more
information, see Working with a DB instance in a VPC (p. 2688).

Console

To migrate a Single-AZ deployment or Multi-AZ DB instance deployment to a Multi-AZ DB cluster using a
read replica, complete the following steps using the AWS Management Console.

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Create the Multi-AZ DB cluster read replica.

a. In the navigation pane, choose Databases.


b. Choose the DB instance that you want to use as the source for a read replica.
c. For Actions, choose Create read replica.
d. For Availability and durability, choose Multi-AZ DB cluster.
e. For DB instance identifier, enter a name for the read replica.
f. For the remaining sections, specify your DB cluster settings. For information about a setting, see
Settings for creating Multi-AZ DB clusters (p. 514).
g. Choose Create read replica.
3. When you are ready, promote the read replica to be a standalone Multi-AZ DB cluster:

a. Stop any transactions from being written to the source DB instance, and then wait for all
updates to be made to the read replica.

Database updates occur on the read replica after they have occurred on the primary DB
instance. This replication lag can vary significantly. Use the ReplicaLag metric to determine
when all updates have been made to the read replica. For more information about replica lag,
see Monitoring read replication (p. 449).
b. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
c. In the Amazon RDS console, choose Databases.

The Databases pane appears. Each read replica shows Replica in the Role column.
d. Choose the Multi-AZ DB cluster read replica that you want to promote.
e. For Actions, choose Promote.
f. On the Promote read replica page, enter the backup retention period and the backup window
for the newly promoted Multi-AZ DB cluster.
g. When the settings are as you want them, choose Promote read replica.
h. Wait for the status of the promoted Multi-AZ DB cluster to be Available.
i. Direct your applications to use the promoted Multi-AZ DB cluster.

Optionally, delete the Single-AZ deployment or Multi-AZ DB instance deployment if it is no longer
needed. For instructions, see Deleting a DB instance (p. 489).

AWS CLI

To migrate a Single-AZ deployment or Multi-AZ DB instance deployment to a Multi-AZ DB cluster using a
read replica, complete the following steps using the AWS CLI.

1. Create the Multi-AZ DB cluster read replica.


To create a read replica from the source DB instance, use the AWS CLI command create-db-
cluster. For --replication-source-identifier, specify the Amazon Resource Name (ARN)
of the source DB instance.

For Linux, macOS, or Unix:

aws rds create-db-cluster \
--db-cluster-identifier mymultiazdbcluster \
--replication-source-identifier arn:aws:rds:us-east-2:123456789012:db:mydbinstance

For Windows:

aws rds create-db-cluster ^
--db-cluster-identifier mymultiazdbcluster ^
--replication-source-identifier arn:aws:rds:us-east-2:123456789012:db:mydbinstance

2. Stop any transactions from being written to the source DB instance, and then wait for all updates to
be made to the read replica.

Database updates occur on the read replica after they have occurred on the primary DB instance.
This replication lag can vary significantly. Use the Replica Lag metric to determine when all
updates have been made to the read replica. For more information about replica lag, see Monitoring
read replication (p. 449).
3. When you are ready, promote the read replica to be a standalone Multi-AZ DB cluster.

To promote a Multi-AZ DB cluster read replica, use the AWS CLI command promote-read-
replica-db-cluster. For --db-cluster-identifier, specify the identifier of the Multi-AZ DB
cluster read replica.

aws rds promote-read-replica-db-cluster --db-cluster-identifier mymultiazdbcluster

4. Wait for the status of the promoted Multi-AZ DB cluster to be Available.


5. Direct your applications to use the promoted Multi-AZ DB cluster.

Optionally, delete the Single-AZ deployment or Multi-AZ DB instance deployment if it is no longer
needed. For instructions, see Deleting a DB instance (p. 489).
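To check replication lag from the command line (step 2 in the preceding procedures), you can query the
CloudWatch ReplicaLag metric. This is a minimal sketch; the DB instance identifier and time range are
placeholders for one of the read replica cluster's instances and your own monitoring window:

aws cloudwatch get-metric-statistics \
--namespace AWS/RDS \
--metric-name ReplicaLag \
--dimensions Name=DBInstanceIdentifier,Value=mymultiazdbcluster-instance-1 \
--start-time 2023-06-01T00:00:00Z \
--end-time 2023-06-01T01:00:00Z \
--period 60 \
--statistics Maximum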

RDS API

To migrate a Single-AZ deployment or Multi-AZ DB instance deployment to a Multi-AZ DB cluster using a
read replica, complete the following steps using the RDS API.

1. Create the Multi-AZ DB cluster read replica.

To create a Multi-AZ DB cluster read replica, use the CreateDBCluster operation with the required
parameter DBClusterIdentifier. For ReplicationSourceIdentifier, specify the Amazon
Resource Name (ARN) of the source DB instance.
2. Stop any transactions from being written to the source DB instance, and then wait for all updates to
be made to the read replica.

Database updates occur on the read replica after they have occurred on the primary DB instance.
This replication lag can vary significantly. Use the Replica Lag metric to determine when all
updates have been made to the read replica. For more information about replica lag, see Monitoring
read replication (p. 449).
3. When you are ready, promote the read replica to be a standalone Multi-AZ DB cluster.


To promote a Multi-AZ DB cluster read replica, use the PromoteReadReplicaDBCluster operation with the required parameter DBClusterIdentifier. Specify the identifier of the Multi-AZ DB cluster read replica.
4. Wait for the status of the promoted Multi-AZ DB cluster to be Available.
5. Direct your applications to use the promoted Multi-AZ DB cluster.

Optionally, delete the Single-AZ deployment or Multi-AZ DB instance deployment if it is no longer needed. For instructions, see Deleting a DB instance (p. 489).

Limitations for creating a Multi-AZ DB cluster read replica


The following limitations apply to creating a Multi-AZ DB cluster read replica from a Single-AZ
deployment or Multi-AZ DB instance deployment.

• You can't create a Multi-AZ DB cluster read replica in an AWS account that is different from the AWS
account that owns the source DB instance.
• You can't create a Multi-AZ DB cluster read replica in a different AWS Region from the source DB
instance.
• You can't recover a Multi-AZ DB cluster read replica to a point in time.
• Storage encryption must have the same settings on the source DB instance and Multi-AZ DB cluster.
• If the source DB instance is encrypted, the Multi-AZ DB cluster read replica must be encrypted using
the same KMS key.
• To perform a minor version upgrade on the source DB instance, you must first perform the minor
version upgrade on the Multi-AZ DB cluster read replica.
• You can't perform a major version upgrade on a Multi-AZ DB cluster.
• You can perform a major version upgrade on the source DB instance of a Multi-AZ DB cluster read
replica, but replication to the read replica stops and can't be restarted.
• The Multi-AZ DB cluster read replica doesn't support cascading read replicas.
• For RDS for PostgreSQL, Multi-AZ DB cluster read replicas can't fail over.

Creating a DB instance read replica from a Multi-AZ DB cluster


You can create a DB instance read replica from a Multi-AZ DB cluster in order to scale beyond the
compute or I/O capacity of the cluster for read-heavy database workloads. You can direct this excess
read traffic to one or more DB instance read replicas. You can also use read replicas to migrate from a
Multi-AZ DB cluster to a DB instance.

To create a read replica, specify a Multi-AZ DB cluster as the replication source. One of the reader
instances of the Multi-AZ DB cluster is always the source of replication, not the writer instance. This
condition ensures that the replica is always in sync with the source cluster, even in cases of failover.

Topics
• Comparing reader DB instances and DB instance read replicas (p. 558)
• Considerations (p. 558)
• Creating a DB instance read replica (p. 558)
• Promoting the DB instance read replica (p. 559)
• Limitations for creating a DB instance read replica from a Multi-AZ DB cluster (p. 560)


Comparing reader DB instances and DB instance read replicas


A DB instance read replica of a Multi-AZ DB cluster is different from the reader DB instances of the Multi-AZ DB cluster in the following ways:

• The reader DB instances act as automatic failover targets, while DB instance read replicas do not.
• Reader DB instances must acknowledge a change from the writer DB instance before the change can
be committed. For DB instance read replicas, however, updates are asynchronously copied to the read
replica without requiring acknowledgement.
• Reader DB instances always share the same instance class, storage type, and engine version as the
writer DB instance of the Multi-AZ DB cluster. DB instance read replicas, however, don’t necessarily
have to share the same configurations as the source cluster.
• You can promote a DB instance read replica to a standalone DB instance. You can’t promote a reader
DB instance of a Multi-AZ DB cluster to a standalone instance.
• The reader endpoint only routes requests to the reader DB instances of the Multi-AZ DB cluster. It
never routes requests to a DB instance read replica.

For more information about reader and writer DB instances, see the section called “Overview of Multi-AZ
DB clusters” (p. 500).

Considerations
Consider the following before you create a DB instance read replica from a Multi-AZ DB cluster:

• When you create the DB instance read replica, it must be on the same major version as its source
cluster, and the same or higher minor version. After you create it, you can optionally upgrade the read
replica to a higher minor version than the source cluster.
• When you create the DB instance read replica, the allocated storage must be the same as the allocated
storage of the source Multi-AZ DB cluster. You can change the allocated storage after the read replica
is created.
• For RDS for MySQL, the gtid-mode parameter must be set to ON for the source Multi-AZ DB cluster.
For more information, see the section called “Working with DB cluster parameter groups” (p. 360).
• An active, long-running transaction can slow the process of creating the read replica. We recommend
that you wait for long-running transactions to complete before creating a read replica.
• If you delete the source Multi-AZ DB cluster for a DB instance read replica, any read replicas that it's replicating to are promoted to standalone DB instances.

Creating a DB instance read replica


You can create a DB instance read replica from a Multi-AZ DB cluster using the AWS Management
Console, AWS CLI, or RDS API.
Note
We strongly recommend that you create all read replicas in the same virtual private cloud (VPC) based on Amazon VPC as the source Multi-AZ DB cluster.
If you create a read replica in a different VPC from the source Multi-AZ DB cluster, Classless
Inter-Domain Routing (CIDR) ranges can overlap between the replica and the RDS system. CIDR
overlap makes the replica unstable, which can negatively impact applications connecting to
it. If you receive an error when creating the read replica, choose a different destination DB
subnet group. For more information, see the section called “Working with a DB instance in a
VPC” (p. 2688).

Console
To create a DB instance read replica from a Multi-AZ DB cluster, complete the following steps using the
AWS Management Console.


1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the Multi-AZ DB cluster that you want to use as the source for a read replica.
4. For Actions, choose Create read replica.
5. For Replica source, make sure that the correct Multi-AZ DB cluster is selected.
6. For DB identifier, enter a name for the read replica.
7. For the remaining sections, specify your DB instance settings. For information about a setting, see
the section called “Available settings” (p. 308).
Note
The allocated storage for the DB instance read replica must be the same as the allocated
storage for the source Multi-AZ DB cluster.
8. Choose Create read replica.

AWS CLI

To create a DB instance read replica from a Multi-AZ DB cluster, use the AWS CLI command create-db-
instance-read-replica. For --source-db-cluster-identifier, specify the identifier of the
Multi-AZ DB cluster.

For Linux, macOS, or Unix:

aws rds create-db-instance-read-replica \
    --db-instance-identifier myreadreplica \
    --source-db-cluster-identifier mymultiazdbcluster

For Windows:

aws rds create-db-instance-read-replica ^
    --db-instance-identifier myreadreplica ^
    --source-db-cluster-identifier mymultiazdbcluster

RDS API
To create a DB instance read replica from a Multi-AZ DB cluster, use the
CreateDBInstanceReadReplica operation.

Promoting the DB instance read replica


If you no longer need the DB instance read replica, you can promote it into a standalone DB instance.
When you promote a read replica, the DB instance is rebooted before it becomes available. For
instructions, see the section called “Promoting a read replica” (p. 447).

If you're using the read replica to migrate a Multi-AZ DB cluster deployment to a Single-AZ or Multi-AZ
DB instance deployment, make sure to stop any transactions that are being written to the source DB
cluster. Then, wait for all updates to be made to the read replica. Database updates occur on the read
replica after they occur on one of the reader DB instances of the Multi-AZ DB cluster. This replication
lag can vary significantly. Use the ReplicaLag metric to determine when all updates have been made
to the read replica. For more information about replica lag, see the section called “Monitoring read
replication” (p. 449).

After you promote the read replica, wait for the status of the promoted DB instance to be Available
before you direct your applications to use the promoted DB instance. Optionally, delete the Multi-AZ DB
cluster deployment if you no longer need it. For instructions, see the section called “Deleting a Multi-AZ
DB cluster” (p. 563).
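For reference, promoting the DB instance read replica from the AWS CLI is a single call to the promote-read-replica command. The following is a minimal sketch that assumes the myreadreplica identifier used earlier in this section; the backup retention period shown is illustrative.

aws rds promote-read-replica \
    --db-instance-identifier myreadreplica \
    --backup-retention-period 7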


Limitations for creating a DB instance read replica from a Multi-AZ DB cluster


The following limitations apply to creating a DB instance read replica from a Multi-AZ DB cluster
deployment.

• You can't create a DB instance read replica in an AWS account that's different from the AWS account
that owns the source Multi-AZ DB cluster.
• You can't create a DB instance read replica in a different AWS Region from the source Multi-AZ DB
cluster.
• You can't recover a DB instance read replica to a point in time.
• Storage encryption must have the same settings on the source Multi-AZ DB cluster and DB instance
read replica.
• If the source Multi-AZ DB cluster is encrypted, the DB instance read replica must be encrypted using
the same KMS key.
• To perform a minor version upgrade on the source Multi-AZ DB cluster, you must first perform the
minor version upgrade on the DB instance read replica.
• The DB instance read replica doesn't support cascading read replicas.
• For RDS for PostgreSQL, the source Multi-AZ DB cluster must be running PostgreSQL version 13.11, 14.8, or 15.2-R2 or higher in order to create a DB instance read replica.
• You can perform a major version upgrade on the source Multi-AZ DB cluster of a DB instance read
replica, but replication to the read replica stops and can't be restarted.


Using PostgreSQL logical replication with Multi-AZ DB clusters

By using PostgreSQL logical replication with your Multi-AZ DB cluster, you can replicate and synchronize
individual tables rather than the entire database instance. Logical replication uses a publish and
subscribe model to replicate changes from a source to one or more recipients. It works by using change
records from the PostgreSQL write-ahead log (WAL). For more information, see the section called
“Logical replication” (p. 2160).

When you create a new logical replication slot on the writer DB instance of a Multi-AZ DB cluster, the slot
is asynchronously copied to each reader DB instance in the cluster. The slots on the reader DB instances
are continuously synchronized with those on the writer DB instance.

Logical replication is supported for Multi-AZ DB clusters running RDS for PostgreSQL version 14.8-R2
and higher, and 15.3-R2 and higher.
Note
In addition to the native PostgreSQL logical replication feature, Multi-AZ DB clusters running
RDS for PostgreSQL also support the pglogical extension.

For more information about PostgreSQL logical replication, see Logical replication in the PostgreSQL
documentation.

Topics
• Prerequisites (p. 561)
• Setting up logical replication (p. 561)

Prerequisites
To configure PostgreSQL logical replication for Multi-AZ DB clusters, you must meet the following
prerequisites.

• Your user account must be a member of the rds_superuser group and have rds_superuser
privileges. For more information, see the section called “Understanding PostgreSQL roles and
permissions” (p. 2271).
• Your Multi-AZ DB cluster must be associated with a custom DB cluster parameter group so that you
can configure the parameter values described in the following procedure. For more information, see
the section called “Working with DB cluster parameter groups” (p. 360).
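If your cluster isn't already associated with a custom DB cluster parameter group, the following is a minimal sketch (in the Linux, macOS, or Unix form) of creating one and associating it with the cluster. The group name and cluster identifier are illustrative, and you should match the parameter group family to your engine version.

aws rds create-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-logical-repl-params \
    --db-parameter-group-family postgres15 \
    --description "Logical replication settings"

aws rds modify-db-cluster \
    --db-cluster-identifier mymultiazdbcluster \
    --db-cluster-parameter-group-name my-logical-repl-params \
    --apply-immediately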

Setting up logical replication


To set up logical replication for a Multi-AZ DB cluster, you enable specific parameters within the
associated DB cluster parameter group, then create logical replication slots.

To set up logical replication for an RDS for PostgreSQL Multi-AZ DB cluster

1. Open the custom DB cluster parameter group associated with your RDS for PostgreSQL Multi-AZ DB
cluster.
2. In the Parameters search field, locate the rds.logical_replication static parameter and set its
value to 1. This parameter change can increase WAL generation, so enable it only when you’re using
logical slots.
3. As part of this change, configure the following DB cluster parameters.

• max_wal_senders


• max_replication_slots
• max_connections

Depending on your expected usage, you might also need to change the values of the following
parameters. However, in many cases, the default values are sufficient.

• max_logical_replication_workers
• max_sync_workers_per_subscription
4. Reboot the Multi-AZ DB cluster for the parameter values to take effect. For instructions, see the
section called “Rebooting a Multi-AZ DB cluster” (p. 552).
5. Create a logical replication slot on the writer DB instance of the Multi-AZ DB cluster as explained in
the section called “Working with logical replication slots” (p. 2161). This process requires that you
specify a decoding plugin. Currently, RDS for PostgreSQL supports the test_decoding, wal2json,
and pgoutput plugins that ship with PostgreSQL.

The slot is asynchronously copied to each reader DB instance in the cluster.


6. Verify the state of the slot on all reader DB instances of the Multi-AZ DB cluster. To do so,
inspect the pg_replication_slots view on all reader DB instances and make sure that the
confirmed_flush_lsn state is making progress while the application is actively consuming logical
changes.

The following commands demonstrate how to inspect the replication state on the reader DB
instances.

% psql -h test-postgres-instance-2.abcdefabcdef.us-west-2.rds.amazonaws.com

postgres=> select slot_name, slot_type, confirmed_flush_lsn from pg_replication_slots;


slot_name | slot_type | confirmed_flush_lsn
--------------+-----------+---------------------
logical_slot | logical | 32/D0001700
(1 row)

postgres=> select slot_name, slot_type, confirmed_flush_lsn from pg_replication_slots;


slot_name | slot_type | confirmed_flush_lsn
--------------+-----------+---------------------
logical_slot | logical | 32/D8003628
(1 row)

% psql -h test-postgres-instance-3.abcdefabcdef.us-west-2.rds.amazonaws.com

postgres=> select slot_name, slot_type, confirmed_flush_lsn from pg_replication_slots;


slot_name | slot_type | confirmed_flush_lsn
--------------+-----------+---------------------
logical_slot | logical | 32/D0001700
(1 row)

postgres=> select slot_name, slot_type, confirmed_flush_lsn from pg_replication_slots;


slot_name | slot_type | confirmed_flush_lsn
--------------+-----------+---------------------
logical_slot | logical | 32/D8003628
(1 row)

After you complete your replication tasks, stop the replication process, drop replication slots, and turn
off logical replication. To turn off logical replication, modify your DB cluster parameter group and set the
value of rds.logical_replication back to 0. Reboot the cluster for the parameter change to take
effect.
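As a minimal sketch of that cleanup (in the Linux, macOS, or Unix form, with an illustrative parameter group name and cluster identifier), you might set the parameter back to 0 and then reboot the cluster:

aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-logical-repl-params \
    --parameters "ParameterName=rds.logical_replication,ParameterValue=0,ApplyMethod=pending-reboot"

aws rds reboot-db-cluster --db-cluster-identifier mymultiazdbcluster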


Deleting a Multi-AZ DB cluster


You can delete a Multi-AZ DB cluster using the AWS Management Console, the AWS CLI, or the RDS API.

The time required to delete a Multi-AZ DB cluster depends on several factors: the backup retention period (that is, how many backups to delete), how much data is deleted, and whether a final snapshot is taken.

You can't delete a Multi-AZ DB cluster when deletion protection is turned on for it. For more information,
see Prerequisites for deleting a DB instance (p. 489). You can turn off deletion protection by modifying
the Multi-AZ DB cluster. For more information, see Modifying a Multi-AZ DB cluster (p. 539).
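For example, the following is a minimal sketch (in the Linux, macOS, or Unix form) of turning off deletion protection from the AWS CLI before deleting the cluster; the cluster identifier is illustrative.

aws rds modify-db-cluster \
    --db-cluster-identifier mymultiazdbcluster \
    --no-deletion-protection \
    --apply-immediately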

Console
To delete a Multi-AZ DB cluster

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the Multi-AZ DB cluster that you want to
delete.
3. For Actions, choose Delete.
4. Choose Create final snapshot? to create a final DB snapshot for the Multi-AZ DB cluster.

If you create a final snapshot, enter a name for Final snapshot name.
5. Choose Retain automated backups to retain automated backups.
6. Enter delete me in the box.
7. Choose Delete.

AWS CLI
To delete a Multi-AZ DB cluster by using the AWS CLI, call the delete-db-cluster command with the
following options:

• --db-cluster-identifier
• --final-db-snapshot-identifier or --skip-final-snapshot

Example With a final snapshot


For Linux, macOS, or Unix:

aws rds delete-db-cluster \
    --db-cluster-identifier mymultiazdbcluster \
    --final-db-snapshot-identifier mymultiazdbclusterfinalsnapshot

For Windows:

aws rds delete-db-cluster ^
    --db-cluster-identifier mymultiazdbcluster ^
    --final-db-snapshot-identifier mymultiazdbclusterfinalsnapshot

Example With no final snapshot


For Linux, macOS, or Unix:


aws rds delete-db-cluster \
    --db-cluster-identifier mymultiazdbcluster \
    --skip-final-snapshot

For Windows:

aws rds delete-db-cluster ^
    --db-cluster-identifier mymultiazdbcluster ^
    --skip-final-snapshot

RDS API
To delete a Multi-AZ DB cluster by using the Amazon RDS API, call the DeleteDBCluster operation with
the following parameters:

• DBClusterIdentifier
• FinalDBSnapshotIdentifier or SkipFinalSnapshot

Using Amazon RDS Extended Support

After a major engine version of your database reaches the RDS end of standard support date, Amazon
RDS automatically upgrades your databases to the next supported major engine version. With Amazon
RDS Extended Support, you can continue running your database on a major engine version past the RDS
end of standard support date for an additional cost.

This paid feature gives you more time to upgrade to a supported major engine version. During Extended
Support, Amazon RDS will supply patches for Critical and High CVEs as defined by the National
Vulnerability Database (NVD) CVSS severity ratings. For more information, see Vulnerability Metrics. You
can also create new databases with major engine versions that have reached the RDS end of standard
support date. When you create these databases, Amazon RDS automatically enables Amazon RDS
Extended Support.

For example, the RDS end of standard support date for RDS for MySQL version 5.7 is February 29, 2024. Suppose that you aren't ready to manually upgrade to RDS for MySQL version 8.0 before that date, and you aren't ready for Amazon RDS to automatically upgrade your DB instances to RDS for MySQL version 8.0 after that date. If you enable Extended Support for those DB instances before February 29, 2024, you can continue to run RDS for MySQL version 5.7, and starting March 1, 2024, Amazon RDS automatically charges you for Extended Support.

This additional charge for Extended Support ends as soon as you upgrade to a supported major
engine version or you delete the database that was running a major version past the RDS end of
standard support date. For more information, see Amazon RDS for MySQL pricing and Amazon RDS for
PostgreSQL pricing.
Note
Extended Support is only available on the last minor version released before the RDS end of
standard support date. If you enable Extended Support, Amazon RDS automatically upgrades
your DB instance to a minor version that supports Extended Support. Amazon RDS won't
upgrade your minor version until after the RDS end of standard support date for your major
engine version. For more information, see Supported MySQL minor versions on Amazon
RDS (p. 1627).

Extended Support is available for up to 3 years past the RDS end of standard support date for a major
engine version. After 3 years, if you haven't upgraded your major engine version to a supported version,
then Amazon RDS will automatically upgrade your major engine version. We recommend that you
upgrade to a supported major engine version as soon as possible.

Extended Support is available for RDS for MySQL 5.7 and 8.0, and for RDS for PostgreSQL 11 and higher.
For more information, see Supported MySQL major versions on Amazon RDS (p. 1629) and Release
calendar for Amazon RDS for PostgreSQL.
Note
You must enable Extended Support for a particular version before the RDS end of standard
support date for that version.
Extended Support will be available through the AWS Management Console or the Amazon RDS
API in December 2023.

Using Amazon RDS Blue/Green Deployments for database updates

A blue/green deployment copies a production database environment to a separate, synchronized staging
environment. By using Amazon RDS Blue/Green Deployments, you can make changes to the database in
the staging environment without affecting the production environment. For example, you can upgrade
the major or minor DB engine version, change database parameters, or make schema changes in the
staging environment. When you are ready, you can promote the staging environment to be the new
production database environment, with downtime typically under one minute.
Note
Currently, blue/green deployments are supported only for RDS for MariaDB and RDS for MySQL.
For Amazon Aurora availability, see Using Amazon RDS Blue/Green Deployments for database
updates in the Amazon Aurora User Guide.

Topics
• Overview of Amazon RDS Blue/Green Deployments (p. 567)
• Creating a blue/green deployment (p. 575)
• Viewing a blue/green deployment (p. 579)
• Switching a blue/green deployment (p. 582)
• Deleting a blue/green deployment (p. 587)


Overview of Amazon RDS Blue/Green Deployments

By using Amazon RDS Blue/Green Deployments, you can create a blue/green deployment for managed
database changes. A blue/green deployment creates a staging environment that copies the production
environment. In a blue/green deployment, the blue environment is the current production environment.
The green environment is the staging environment. The staging environment stays in sync with the
current production environment using logical replication.

You can make changes to the RDS DB instances in the green environment without affecting production
workloads. For example, you can upgrade the major or minor DB engine version, change database
parameters, or make schema changes in the staging environment. You can thoroughly test changes
in the green environment. When ready, you can switch over the environments to promote the green
environment to be the new production environment. The switchover typically takes under a minute with
no data loss and no need for application changes.

Because the green environment is a copy of the topology of the production environment, the green
environment includes the features used by the DB instance. These features include the read replicas,
the storage configuration, DB snapshots, automated backups, Performance Insights, and Enhanced
Monitoring. If the blue DB instance is a Multi-AZ DB instance deployment, then the green DB instance is
also a Multi-AZ DB instance deployment.
Note
Currently, blue/green deployments are supported only for RDS for MariaDB and RDS for MySQL.
For Amazon Aurora availability, see Using Amazon RDS Blue/Green Deployments for database
updates in the Amazon Aurora User Guide.

Topics
• Benefits of using Amazon RDS Blue/Green Deployments (p. 567)
• Workflow of a blue/green deployment (p. 568)
• Authorizing access to blue/green deployment operations (p. 572)
• Considerations for blue/green deployments (p. 572)
• Best practices for blue/green deployments (p. 574)
• Region and version availability (p. 575)
• Limitations for blue/green deployments (p. 575)

Benefits of using Amazon RDS Blue/Green Deployments

By using Amazon RDS Blue/Green Deployments, you can stay current on security patches, improve
database performance, and adopt newer database features with short, predictable downtime. Blue/
green deployments reduce the risks and downtime for database updates, such as major or minor engine
version upgrades.

Blue/green deployments provide the following benefits:

• Easily create a production-ready staging environment.


• Automatically replicate database changes from the production environment to the staging
environment.
• Test database changes in a safe staging environment without affecting the production environment.


• Stay current with database patches and system updates.


• Implement and test newer database features.
• Switch over your staging environment to be the new production environment without changes to your
application.
• Safely switch over through the use of built-in switchover guardrails.
• Eliminate data loss during switchover.
• Switch over quickly, typically under a minute depending on your workload.

Workflow of a blue/green deployment


Complete the following major steps when you use a blue/green deployment for database updates.

1. Identify a production environment that requires updates.

For example, the production environment in this image has a Multi-AZ DB instance deployment
(mydb1) and a read replica (mydb2).

2. Create the blue/green deployment. For instructions, see Creating a blue/green deployment (p. 575).

The following image shows an example of a blue/green deployment of the production environment
from step 1. While creating the blue/green deployment, RDS copies the complete topology and
configuration of the primary DB instance to create the green environment. The copied DB instance
names are appended with -green-random-characters. The staging environment in the image
contains a Multi-AZ DB instance deployment (mydb1-green-abc123) and a read replica (mydb2-
green-abc123).


When you create the blue/green deployment, you can upgrade your DB engine version and specify
a different DB parameter group for the DB instances in the green environment. RDS also configures
logical replication from the primary DB instance in the blue environment to the primary DB instance in
the green environment.

After you create the blue/green deployment, the DB instance in the green environment is read-only by
default.
3. Make additional changes to the staging environment, if required.

For example, you might make schema changes to your database or change the DB instance class used
by one or more DB instances in the green environment.


For information about modifying a DB instance, see Modifying an Amazon RDS DB instance (p. 401).
4. Test your staging environment.

During testing, we recommend that you keep your databases in the green environment read only. We
recommend that you enable write operations on the green environment with caution because they
can result in replication conflicts. They can also result in unintended data in the production databases
after switchover.
5. When ready, switch over to promote the staging environment to be the new production environment.
For instructions, see Switching a blue/green deployment (p. 582).

The switchover results in downtime. The downtime is usually under one minute, but it can be longer
depending on your workload.

The following image shows the DB instances after the switchover.


After the switchover, the DB instances that were in the green environment become the new
production DB instances. The names and endpoints in the current production environment are
assigned to the newly promoted production environment, requiring no changes to your application.
As a result, your production traffic now flows to the new production environment. The DB instances
in the previous blue environment are renamed by appending -oldn to the current name, where n is
a number. For example, assume the name of the DB instance in the blue environment is mydb1. After
switchover, the DB instance name might be mydb1-old1.

In the example in the image, the following changes occur during switchover:
• The green environment Multi-AZ DB instance deployment named mydb1-green-abc123 becomes
the production Multi-AZ DB instance deployment named mydb1.
• The green environment read replica named mydb2-green-abc123 becomes the production read
replica mydb2.


• The blue environment Multi-AZ DB instance deployment named mydb1 becomes mydb1-old1.
• The blue environment read replica named mydb2 becomes mydb2-old1.
6. If you no longer need a blue/green deployment, you can delete it. For instructions, see Deleting a
blue/green deployment (p. 587).

After switchover, the previous production environment isn't deleted so that you can use it for
regression testing, if necessary.

Authorizing access to blue/green deployment operations

Users must have the required permissions to perform operations related to blue/green deployments.
You can create IAM policies that grant users and roles permission to perform specific API operations on
the specified resources they need. You can then attach those policies to the IAM permission sets or roles
that require those permissions. For more information, see Identity and access management for Amazon
RDS (p. 2606).

The user who creates a blue/green deployment must have permissions to perform the following RDS
operations:

• rds:AddTagsToResource
• rds:CreateDBInstanceReadReplica

The user who switches over a blue/green deployment must have permissions to perform the following
RDS operations:

• rds:ModifyDBInstance
• rds:PromoteDBInstance

The user who deletes a blue/green deployment must have permissions to perform the following RDS
operation:

• rds:DeleteDBInstance
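As an illustration only, the following sketch creates an identity-based policy that grants the creation permissions listed above. The policy name is made up, the Resource element is left broad for brevity (you would typically scope it to specific ARNs), and additional permissions can be required depending on your configuration.

aws iam create-policy \
    --policy-name rds-bluegreen-create-example \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["rds:AddTagsToResource", "rds:CreateDBInstanceReadReplica"],
        "Resource": "*"
      }]
    }'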

Considerations for blue/green deployments


Amazon RDS tracks resources in blue/green deployments with the DbiResourceId of each resource.
This resource ID is an AWS Region-unique, immutable identifier for the resource.

The resource ID is separate from the DB instance ID:


The name (instance ID) of a resource changes when you switch over a blue/green deployment, but each
resource keeps the same resource ID. For example, a DB instance identifier might be mydb in the blue
environment. After switchover, the same DB instance might be renamed to mydb-old1. However, the
resource ID of the DB instance doesn't change during switchover. So, when the green resources are
promoted to be the new production resources, their resource IDs don't match the blue resource IDs that
were previously in production.

After switching over a blue/green deployment, consider updating the resource IDs to those of the newly
promoted production resources for integrated features and services that you used with the production
resources. Specifically, consider the following updates:

• If you perform filtering using the RDS API and resource IDs, adjust the resource IDs used in filtering
after switchover.
• If you use CloudTrail for auditing resources, adjust the consumers of the CloudTrail to track the new
resource IDs after switchover. For more information, see Monitoring Amazon RDS API calls in AWS
CloudTrail (p. 940).
• If you use the Performance Insights API, adjust the resource IDs in calls to the API after switchover. For
more information, see Monitoring DB load with Performance Insights on Amazon RDS (p. 720).

You can monitor a database with the same name after switchover, but it doesn't contain the data from
before the switchover.
• If you use resource IDs in IAM policies, make sure you add the resource IDs of the newly promoted
resources when necessary. For more information, see Identity and access management for Amazon
RDS (p. 2606).
• If you authenticate to your DB instance using IAM database authentication (p. 2642), make sure that
the IAM policy used for database access has both the blue and the green databases listed under the
Resource element of the policy. This is required in order to connect to the green database after
switchover. For more information, see the section called “Creating and using an IAM policy for IAM
database access” (p. 2646).
• If you use AWS Backup to manage automated backups of resources in a blue/green deployment, adjust
the resource IDs used by AWS Backup after switchover. For more information, see Using AWS Backup
to manage automated backups (p. 599).


• If you want to restore a manual or automated DB snapshot for a DB instance that was part of a blue/
green deployment, make sure you restore the correct DB snapshot by examining the time when the
snapshot was taken. For more information, see Restoring from a DB snapshot (p. 615).
• If you want to describe a previous blue environment DB instance automated backup or restore it to a point in time, use the resource ID for the operation, as in the sketch after this list.

Because the name of the DB instance changes during switchover, you can't use its previous name for
DescribeDBInstanceAutomatedBackups or RestoreDBInstanceToPointInTime operations.

For more information, see Restoring a DB instance to a specified time (p. 660).
• When you add a read replica to a DB instance in the green environment of a blue/green deployment,
the new read replica won't replace a read replica in the blue environment when you switch over.
However, the new read replica is retained in the new production environment after switchover.
• When you delete a DB instance in the green environment of a blue/green deployment, you can't create
a new DB instance to replace it in the blue/green deployment.

If you create a new DB instance with the same name and Amazon Resource Name (ARN) as the deleted
DB instance, it has a different DbiResourceId, so it isn't part of the green environment.

The following behavior results if you delete a DB instance in the green environment:
• If the DB instance in the blue environment with the same name exists, it won't be switched over to
the DB instance in the green environment. This DB instance won't be renamed by adding -oldn to
the DB instance name.
• Any application that points to the DB instance in the blue environment continues to use the same
DB instance after switchover.

The same behavior applies to DB instances and read replicas.
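As a sketch of the resource-ID-based call mentioned in the list above, the following describes the automated backups of a renamed blue DB instance. The resource ID shown is a placeholder.

aws rds describe-db-instance-automated-backups --dbi-resource-id db-EXAMPLERESOURCEID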

Best practices for blue/green deployments


The following are best practices for blue/green deployments:

• Avoid using non-transactional storage engines, such as MyISAM, that aren't optimized for replication.
• Optimize read replicas for binary log replication.

For example, if your DB engine version supports it, consider using GTID replication, parallel replication,
and crash-safe replication in your production environment before deploying your blue/green
deployment. These options promote consistency and durability of your data before you switch over
your blue/green deployment. For more information about GTID replication for read replicas, see Using
GTID-based replication for Amazon RDS for MySQL (p. 1719).
• Thoroughly test the DB instances in the green environment before switching over.
• Keep your databases in the green environment read only. We recommend that you enable write
operations on the green environment with caution because they can result in replication conflicts.
They can also result in unintended data in the production databases after switchover.
• When using a blue/green deployment to implement schema changes, make only replication-
compatible changes.

For example, you can add new columns at the end of a table, create indexes, or drop indexes without
disrupting replication from the blue deployment to the green deployment. However, schema changes,
such as renaming columns or renaming tables, break binary log replication to the green deployment.

For more information about replication-compatible changes, see Replication with Differing Table
Definitions on Source and Replica in the MySQL documentation.


• After you create the blue/green deployment, handle lazy loading if necessary. Make sure data loading
is complete before switching over. For more information, see Handling lazy loading when you create a
blue/green deployment (p. 576).
• When you switch over a blue/green deployment, follow the switchover best practices. For more
information, see the section called “Switchover best practices” (p. 584).

Region and version availability


Feature availability and support varies across specific versions of each database engine, and across
AWS Regions. For more information on version and Region availability with Amazon RDS Blue/Green
Deployments, see Blue/Green Deployments (p. 118).

Limitations for blue/green deployments


The following limitations apply to blue/green deployments:

• MySQL versions 8.0.11 through 8.0.13 have a community bug that prevents RDS from supporting
them for blue/green deployments.
• The Event Scheduler (event_scheduler parameter) must be disabled on the green environment
when you create a blue/green deployment. This prevents events from being generated in the green
environment and causing inconsistencies.
• Blue/green deployments aren't supported for the following features:
• Amazon RDS Proxy
• Cascading read replicas
• Cross-Region read replicas
• AWS CloudFormation
• Multi-AZ DB cluster deployments

Blue/green deployments are supported for Multi-AZ DB instance deployments. For more
information about Multi-AZ deployments, see Configuring and managing a Multi-AZ
deployment (p. 492).
• The following are limitations for changes in a blue/green deployment:
• You can't change an unencrypted DB instance into an encrypted DB instance.
• You can't change an encrypted DB instance into an unencrypted DB instance.
• You can't change a blue environment DB instance to a higher engine version than its corresponding
green environment DB instance.
• The resources in the blue environment and green environment must be in the same AWS account.
• During switchover, the blue primary DB instance can't be the target of external replication.
• If the source database is associated with a custom option group, you can't specify a major version
upgrade when you create the blue/green deployment.

In this case, you can create a blue/green deployment without specifying a major version upgrade.
Then, you can upgrade the database in the green environment. For more information, see Upgrading
a DB instance engine version (p. 429).

Creating a blue/green deployment


When you create a blue/green deployment, you specify the DB instance to copy in the deployment. The
DB instance you choose is the production DB instance, and it becomes the primary DB instance in the
blue environment. This DB instance is copied to the green environment, and RDS configures replication
from the DB instance in the blue environment to the DB instance in the green environment.


RDS copies the blue environment's topology to a staging area, along with its configured features. When
the blue DB instance has read replicas, the read replicas are copied as read replicas of the green DB
instance in the deployment. If the blue DB instance is a Multi-AZ DB instance deployment, then the green
DB instance is created as a Multi-AZ DB instance deployment.

Topics
• Making changes in the green environment (p. 576)
• Handling lazy loading when you create a blue/green deployment (p. 576)
• Creating the blue/green deployment (p. 577)

Making changes in the green environment


You can make the following changes to the DB instance in the green environment when you create the
blue/green deployment:

• You can specify a higher engine version if you want to test a DB engine upgrade.
• You can specify a DB parameter group that is different from the one used by the DB instance in
the blue environment. You can test how parameter changes affect the DB instances in the green
environment or specify a parameter group for a new major DB engine version in the case of an
upgrade.

If you specify a different DB parameter group, the specified DB parameter group is associated with all
of the DB instances in the green environment. If you don't specify a different parameter group, each
DB instance in the green environment is associated with the parameter group of its corresponding blue
DB instance.

You can make other modifications to the DB instance in the green environment after it is deployed. For
example, you might make schema changes to your database or change the DB instance class used by one
or more DB instances in the green environment.

For information about modifying a DB instance, see Modifying an Amazon RDS DB instance (p. 401).

Handling lazy loading when you create a blue/green deployment

When you create a blue/green deployment, Amazon RDS creates the primary DB instance in the green
environment by restoring from a DB snapshot. After it is created, the green DB instance continues to load
data in the background, which is known as lazy loading. If the DB instance has read replicas, these are
also created from DB snapshots and are subject to lazy loading.

If you access data that hasn't been loaded yet, the DB instance immediately downloads the requested
data from Amazon S3, and then continues loading the rest of the data in the background. For more
information, see Amazon EBS snapshots.

To help mitigate the effects of lazy loading on tables to which you require quick access, you can perform
operations that involve full-table scans, such as SELECT *. This operation allows Amazon RDS to
download all of the backed-up table data from S3.
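As a rough sketch, you might run a statement like the following against the green DB instance endpoint with the MySQL client. The host, user, database, and table names are illustrative, and the output is discarded because only the scan itself matters.

mysql -h mydb1-green-abc123.abcdefabcdef.us-east-2.rds.amazonaws.com -u admin -p \
    -e "SELECT * FROM mydatabase.mytable" > /dev/null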

If an application attempts to access data that isn't loaded, the application can encounter higher latency
than normal while the data is loaded. This higher latency due to lazy loading could lead to poor
performance for latency-sensitive workloads.
Important
If you switch over a blue/green deployment before data loading is complete, your application
could experience performance issues due to high latency.


Creating the blue/green deployment


You can create the blue/green deployment using the AWS Management Console, the AWS CLI, or the
RDS API.

Console
To create a blue/green deployment

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance that you want to copy
to a green environment.
3. For Actions, choose Create Blue/Green Deployment.

The Create Blue/Green Deployment page appears.

4. On the Create Blue/Green Deployment page, review the blue database identifiers. Make sure they
match the DB instances that you expect in the blue environment. If they don't, choose Cancel.
5. For Blue/Green Deployment identifier, enter a name for your blue/green deployment.


6. (Optional) For Blue/Green Deployment settings, specify the settings for the green environment:

• Choose a DB engine version if you want to test a DB engine version upgrade.


• Choose a DB parameter group to associate with the DB instances in the green environment.

You can make other modifications to the databases in the green environment after it is deployed.
7. Choose Create Blue/Green Deployment.

AWS CLI
To create a blue/green deployment by using the AWS CLI, use the create-blue-green-deployment
command with the following options:

• --blue-green-deployment-name – Specify the name of the blue/green deployment.


• --source – Specify the ARN of the DB instance that you want to copy.
• --target-engine-version – Specify an engine version if you want to test a DB engine version
upgrade in the green environment. This option upgrades the DB instances in the green environment to
the specified DB engine version.

If not specified, each DB instance in the green environment is created with the same engine version as
the corresponding DB instance in the blue environment.
• --target-db-parameter-group-name – Specify a DB parameter group to associate with the DB
instances in the green environment.

Example Create a blue/green deployment

For Linux, macOS, or Unix:

aws rds create-blue-green-deployment \
    --blue-green-deployment-name my-blue-green-deployment \
    --source arn:aws:rds:us-east-2:123456789012:db:mydb1 \
    --target-engine-version 8.0.31 \
    --target-db-parameter-group-name mydbparametergroup

For Windows:

aws rds create-blue-green-deployment ^
    --blue-green-deployment-name my-blue-green-deployment ^
    --source arn:aws:rds:us-east-2:123456789012:db:mydb1 ^
    --target-engine-version 8.0.31 ^
    --target-db-parameter-group-name mydbparametergroup

RDS API
To create a blue/green deployment by using the Amazon RDS API, use the
CreateBlueGreenDeployment operation with the following parameters:

• BlueGreenDeploymentName – Specify the name of the blue/green deployment.


• Source – Specify the ARN of the DB instance that you want to copy to the green environment.
• TargetEngineVersion – Specify an engine version if you want to test a DB engine version upgrade
in the green environment. This option upgrades the DB instances in the green environment to the
specified DB engine version.


If not specified, each DB instance in the green environment is created with the same engine version as
the corresponding DB instance in the blue environment.
• TargetDBParameterGroupName – Specify a DB parameter group to associate with the DB instances
in the green environment.

Viewing a blue/green deployment


You can view the details about a blue/green deployment using the AWS Management Console, the AWS
CLI, or the RDS API.

You can also view and subscribe to events for information about a blue/green deployment. For more
information, see Blue/green deployment events (p. 892).

Console
To view the details about a blue/green deployment

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then find the blue/green deployment in the list.

The Role value for the blue/green deployment is Blue/Green Deployment.


3. Choose the name of the blue/green deployment that you want to view to display its details.

Each tab has a section for the blue deployment and a section for the green deployment. The section
for the blue deployment shows the details about DB instances in the blue environment. The section
for the green deployment shows the details about DB instances in the green environment. You can
examine the details in both environments to see differences between them. For example, on the
Configuration tab, the DB engine version might be different in the blue environment and in the
green environment if you are upgrading the DB engine version in the green environment. Make sure
the values for the DB instances in both environments are the expected values.

The following image shows an example of the Connectivity & security tab.


The following image shows an example of the Configuration tab.


The following image shows an example of the Status tab.


AWS CLI
To view the details about a blue/green deployment by using the AWS CLI, use the describe-blue-green-
deployments command.

Example View the details about a blue/green deployment by filtering on its name

When you use the describe-blue-green-deployments command, you can filter on the --blue-green-
deployment-name. The following example shows the details for a blue/green deployment named my-
blue-green-deployment.

aws rds describe-blue-green-deployments --filters Name=blue-green-deployment-name,Values=my-blue-green-deployment

Example View the details about a blue/green deployment by specifying its identifier

When you use the describe-blue-green-deployments command, you can specify the --blue-green-
deployment-identifier. The following example shows the details for a blue/green deployment with
the identifier bgd-1234567890abcdef.

aws rds describe-blue-green-deployments --blue-green-deployment-identifier bgd-1234567890abcdef

RDS API
To view the details about a blue/green deployment by using the Amazon RDS API, use the
DescribeBlueGreenDeployments operation and specify the BlueGreenDeploymentIdentifier.

Switching a blue/green deployment


A switchover promotes the green environment to be the new production environment. When the green
DB instance has read replicas, they are also promoted. Before you switch over, production traffic is routed
to the DB instance and read replicas in the blue environment. After you switch over, production traffic is
routed to the DB instance and read replicas in the green environment.

Topics
• Switchover timeout (p. 582)
• Switchover guardrails (p. 583)
• Switchover actions (p. 583)
• Switchover best practices (p. 584)
• Verifying CloudWatch metrics before switchover (p. 584)
• Switching over a blue/green deployment (p. 585)
• After switchover (p. 587)

Switchover timeout
You can specify a switchover timeout period between 30 seconds and 3,600 seconds (one hour). If the
switchover takes longer than the specified duration, then any changes are rolled back and no changes
are made to either environment. The default timeout period is 300 seconds (five minutes).


Switchover guardrails
When you start a switchover, Amazon RDS runs some basic checks to test the readiness of the blue and
green environments for switchover. These checks are known as switchover guardrails. These switchover
guardrails prevent a switchover if the environments aren't ready for it. Therefore, they avoid longer-than-expected downtime and prevent the loss of data between the blue and green environments that could result if the switchover proceeded anyway.

Amazon RDS runs the following guardrail checks on the green environment:

• Replication health – Check if green primary DB instance replication status is healthy. The green primary
DB instance is a replica of the blue primary DB instance.
• Replication lag – Check if the replica lag of the green primary DB instance is within allowable limits for switchover. The allowable limits are based on the specified timeout period. Replica lag indicates how far the green primary DB instance is lagging behind its blue primary DB instance, and therefore how much time the green replica might require before it catches up with its blue source. For more information, see Diagnosing and resolving lag between read replicas (p. 2736).
• Active writes – Make sure there are no active writes on the green primary DB instance.

Amazon RDS runs the following guardrail checks on the blue environment:

• External replication – Make sure the blue primary DB instance isn't the target of external replication to
prevent writes on the blue primary DB instance during switchover.
• Long-running active writes – Make sure there are no long-running active writes on the blue primary DB
instance because they can increase replica lag.
• Long-running DDL statements – Make sure there are no long-running DDL statements on the blue
primary DB instance because they can increase replica lag.

Switchover actions
When you switch over a blue/green deployment, RDS performs the following actions:

1. Runs guardrail checks to verify if the blue and green environments are ready for switchover.
2. Stops new write operations on the primary DB instance in both environments.
3. Drops connections to the DB instances in both environments and doesn't allow new connections.
4. Waits for replication to catch up in the green environment so that the green environment is in sync
with the blue environment.
5. Renames the DB instances in both environments.

RDS renames the DB instances in the green environment to match the corresponding DB instances
in the blue environment. For example, assume the name of a DB instance in the blue environment is
mydb. Also assume the name of the corresponding DB instance in the green environment is mydb-
green-abc123. During switchover, the name of the DB instance in the green environment is changed
to mydb.

RDS renames the DB instances in the blue environment by appending -oldn to the current name,
where n is a number. For example, assume the name of a DB instance in the blue environment is mydb.
After switchover, the DB instance name might be mydb-old1.

RDS also renames the endpoints in the green environment to match the corresponding endpoints in
the blue environment so that application changes aren't required.
6. Allows connections to databases in both environments.


7. Allows write operations on the primary DB instance in the new production environment.

After switchover, the previous production primary DB instance only allows read operations until it is
rebooted.

You can monitor the status of a switchover using Amazon EventBridge. For more information, see the
section called “Blue/green deployment events” (p. 892).

If you have tags configured in the blue environment, these tags are moved to the new production
environment during switchover. The previous production environment also retains these tags. For more
information about tags, see Tagging Amazon RDS resources (p. 461).

If the switchover starts and then stops before finishing for any reason, then any changes are rolled back,
and no changes are made to either environment.

Switchover best practices


Before you switch over, we strongly recommend that you adhere to best practices by completing the
following tasks:

• Thoroughly test the resources in the green environment. Make sure they function properly and
efficiently.
• Monitor relevant Amazon CloudWatch metrics. For more information, see the section called “Verifying
CloudWatch metrics before switchover” (p. 584).
• Identify the best time for the switchover.

During the switchover, writes are cut off from databases in both environments. Identify a time when
traffic is lowest on your production environment. Long-running transactions, such as active DDLs, can
increase your switchover time, resulting in longer downtime for your production workloads.

If there's a large number of connections on your DB instances, consider manually reducing them to the minimum number necessary for your application before you switch over the blue/green deployment. One way to achieve this is to create a script that monitors the status of the blue/green deployment and starts cleaning up connections when it detects that the status has changed to SWITCHOVER_IN_PROGRESS, as in the sketch after this list.
• Make sure the DB instances in both environments are in Available state.
• Make sure the primary DB instance in the green environment is healthy and replicating.
• Make sure that your network and client configurations don’t increase the DNS cache Time-To-Live
(TTL) beyond five seconds, which is the default for RDS DNS zones.
Otherwise, applications will continue to send write traffic to the blue environment after
switchover.
• Make sure data loading is complete before switching over. For more information, see Handling lazy
loading when you create a blue/green deployment (p. 576).

Note
During a switchover, you can't modify any DB instances included in the switchover.
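The following is a minimal sketch of the kind of monitoring script mentioned in the best practices above. It assumes the blue/green deployment identifier bgd-1234567890abcdef, a hypothetical cleanup_connections shell function that you supply, and that the describe-blue-green-deployments output reports the deployment state in the Status field.

#!/bin/bash
# Poll the blue/green deployment status and clean up application connections
# as soon as the switchover begins. Replace the identifier and the
# cleanup_connections function with your own values and logic.
BGD_ID="bgd-1234567890abcdef"

while true; do
  status=$(aws rds describe-blue-green-deployments \
    --blue-green-deployment-identifier "$BGD_ID" \
    --query 'BlueGreenDeployments[0].Status' \
    --output text)

  if [ "$status" = "SWITCHOVER_IN_PROGRESS" ]; then
    cleanup_connections   # your own logic for draining connections
    break
  fi

  sleep 5
done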

Verifying CloudWatch metrics before switchover


Before you switch over a blue/green deployment, we recommend that you check the values of the
following metrics within Amazon CloudWatch.

• ReplicaLag – Use this metric to identify the current replication lag on the green environment. To
reduce downtime, make sure that this value is close to zero before you switch over.


• DatabaseConnections – Use this metric to estimate the level of activity on the blue/green
deployment, and make sure that the value is at an acceptable level for your deployment before you
switch over. If Performance Insights is turned on, DBLoad is a more accurate metric.

For more information about these metrics, see the section called “CloudWatch metrics for
RDS” (p. 806).
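For example, you can spot-check the most recent ReplicaLag datapoints for the green primary DB instance from the AWS CLI before you switch over. This is a sketch only; it assumes a green DB instance named mydb-green-abc123 and a Linux shell with GNU date, so adjust the instance name and time window for your environment.

aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name ReplicaLag \
  --dimensions Name=DBInstanceIdentifier,Value=mydb-green-abc123 \
  --statistics Average \
  --period 60 \
  --start-time "$(date -u -d '15 minutes ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)"

Values close to zero for the most recent datapoints indicate that the green environment has caught up with the blue environment.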

Switching over a blue/green deployment


You can switch over a blue/green deployment using the AWS Management Console, the AWS CLI, or the
RDS API.

Console

To switch over a blue/green deployment

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the blue/green deployment that you
want to switch over.
3. For Actions, choose Switch over.

The Switch over page appears.


4. On the Switch over page, review the switchover summary. Make sure the resources in both
environments match what you expect. If they don't, choose Cancel.
5. For Timeout, enter the time limit for switchover.
6. Choose Switch over.

AWS CLI
To switch over a blue/green deployment by using the AWS CLI, use the switchover-blue-green-
deployment command with the following options:

• --blue-green-deployment-identifier – Specify the identifier of the blue/green deployment.


• --switchover-timeout – Specify the time limit for the switchover, in seconds. The default is 300.


Example Switch over a blue/green deployment

For Linux, macOS, or Unix:

aws rds switchover-blue-green-deployment \


--blue-green-deployment-identifier bgd-1234567890abcdef \
--switchover-timeout 600

For Windows:

aws rds switchover-blue-green-deployment ^


--blue-green-deployment-identifier bgd-1234567890abcdef ^
--switchover-timeout 600

RDS API
To switch over a blue/green deployment by using the Amazon RDS API, use the
SwitchoverBlueGreenDeployment operation with the following parameters:

• BlueGreenDeploymentIdentifier – Specify the identifier of the blue/green deployment.


• SwitchoverTimeout – Specify the time limit for the switchover, in seconds. The default is 300.

After switchover
After a switchover, the DB instances in the previous blue environment are retained. Standard costs apply
to these resources. Replication between the blue and green environments stops.

RDS renames the DB instances in the blue environment by appending -oldn to the current resource
name, where n is a number. The DB instances are read-only until you set the read_only parameter to 0.
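For the MySQL and MariaDB engines, for example, you can re-enable writes on an old blue DB instance by setting read_only to 0 in the custom DB parameter group that is associated with it. The following is a sketch only; the parameter group name old-blue-params is a placeholder, and you can't modify a default parameter group.

aws rds modify-db-parameter-group \
  --db-parameter-group-name old-blue-params \
  --parameters "ParameterName=read_only,ParameterValue=0,ApplyMethod=immediate"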

Deleting a blue/green deployment


You can delete a blue/green deployment before or after you switch it over.

When you delete a blue/green deployment before switching it over, Amazon RDS optionally deletes the
DB instances in the green environment:

• If you choose to delete the DB instances in the green environment (--delete-target), they must
have deletion protection turned off.
• If you don't delete the DB instances in the green environment (--no-delete-target), the instances
are retained, but they're no longer part of a blue/green deployment. Replication continues between
the environments.

The option to delete the green databases isn't available in the console after switchover (p. 582). When
you delete blue/green deployments using the AWS CLI, you can't specify the --delete-target option
if the deployment status is SWITCHOVER_COMPLETED.
Important
Deleting a blue/green deployment doesn't affect the blue environment.

You can delete a blue/green deployment using the AWS Management Console, the AWS CLI, or the RDS
API.


Console
To delete a blue/green deployment

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the blue/green deployment that you
want to delete.
3. For Actions, choose Delete.

The Delete Blue/Green Deployment? window appears.

To delete the green databases, select Delete the green databases in this Blue/Green Deployment.
4. Enter delete me in the box.
5. Choose Delete.

AWS CLI
To delete a blue/green deployment by using the AWS CLI, use the delete-blue-green-deployment
command with the following options:

• --blue-green-deployment-identifier – The identifier of the blue/green deployment to be


deleted.
• --delete-target – Specifies that the DB instances in the green environment are deleted. You can't
specify this option if the blue/green deployment has a status of SWITCHOVER_COMPLETED.
• --no-delete-target – Specifies that the DB instances in the green environment are retained.

Example Delete a blue/green deployment and the DB instances in the green environment

For Linux, macOS, or Unix:


aws rds delete-blue-green-deployment \


--blue-green-deployment-identifier bgd-1234567890abcdef \
--delete-target

For Windows:

aws rds delete-blue-green-deployment ^


--blue-green-deployment-identifier bgd-1234567890abcdef ^
--delete-target

Example Delete a blue/green deployment but retain the DB instances in the green
environment

For Linux, macOS, or Unix:

aws rds delete-blue-green-deployment \


--blue-green-deployment-identifier bgd-1234567890abcdef \
--no-delete-target

For Windows:

aws rds delete-blue-green-deployment ^


--blue-green-deployment-identifier bgd-1234567890abcdef ^
--no-delete-target

RDS API
To delete a blue/green deployment by using the Amazon RDS API, use the
DeleteBlueGreenDeployment operation with the following parameters:

• BlueGreenDeploymentIdentifier – The identifier of the blue/green deployment to be deleted.


• DeleteTarget – Specify TRUE to delete the DB instances in the green environment or FALSE to
retain them. Cannot be TRUE if the blue/green deployment has a status of SWITCHOVER_COMPLETED.


Backing up and restoring


This section shows how to back up and restore an Amazon RDS DB instance or Multi-AZ DB cluster.

Topics
• Working with backups (p. 591)
• Backing up and restoring a DB instance (p. 600)
• Backing up and restoring a Multi-AZ DB cluster (p. 668)


Working with backups


Amazon RDS creates and saves automated backups of your DB instance or Multi-AZ DB cluster during the
backup window of your database. RDS creates a storage volume snapshot of your DB instance, backing
up the entire DB instance and not just individual databases. RDS saves the automated backups of your
DB instance according to the backup retention period that you specify. If necessary, you can recover your
DB instance to any point in time during the backup retention period.

Automated backups follow these rules:

• Your DB instance must be in the available state for automated backups to occur. Automated
backups don't occur while your DB instance is in a state other than available, for example,
storage_full.
• Automated backups don't occur while a DB snapshot copy is running in the same AWS Region for the
same database.

You can also back up your DB instance manually by creating a DB snapshot. For more information about
manually creating a DB snapshot, see Creating a DB snapshot (p. 613).

The first snapshot of a DB instance contains the data for the full database. Subsequent snapshots of the
same database are incremental, which means that only the data that has changed after your most recent
snapshot is saved.

You can copy both automatic and manual DB snapshots, and share manual DB snapshots. For more
information about copying a DB snapshot, see Copying a DB snapshot (p. 619). For more information
about sharing a DB snapshot, see Sharing a DB snapshot (p. 633).

Backup storage
Your Amazon RDS backup storage for each AWS Region is composed of the automated backups and
manual DB snapshots for that Region. Total backup storage space equals the sum of the storage for all
backups in that Region. Moving a DB snapshot to another Region increases the backup storage in the
destination Region. Backups are stored in Amazon S3.

For more information about backup storage costs, see Amazon RDS pricing.

If you choose to retain automated backups when you delete a DB instance, the automated backups are
saved for the full retention period. If you don't choose Retain automated backups when you delete
a DB instance, all automated backups are deleted with the DB instance. After they are deleted, the
automated backups can't be recovered. If you choose to have Amazon RDS create a final DB snapshot
before it deletes your DB instance, you can use that to recover your DB instance. Optionally, you can
use a previously created manual snapshot. Manual snapshots are not deleted. You can have up to 100
manual snapshots per Region.
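For example, the following AWS CLI sketch deletes a DB instance while creating a final snapshot and retaining its automated backups. The instance and snapshot identifiers are placeholders.

aws rds delete-db-instance \
  --db-instance-identifier mydbinstance \
  --final-db-snapshot-identifier mydbinstance-final-snapshot \
  --no-delete-automated-backups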

Backup window
Automated backups occur daily during the preferred backup window. If the backup requires more time
than allotted to the backup window, the backup continues after the window ends until it finishes. The
backup window can't overlap with the weekly maintenance window for the DB instance or Multi-AZ DB
cluster.

During the automatic backup window, storage I/O might be suspended briefly while the backup process
initializes (typically under a few seconds). You might experience elevated latencies for a few minutes
during backups for Multi-AZ deployments. For MariaDB, MySQL, Oracle, and PostgreSQL, I/O activity
isn't suspended on your primary during backup for Multi-AZ deployments because the backup is taken
from the standby. For SQL Server, I/O activity is suspended briefly during backup for both Single-AZ and
Multi-AZ deployments because the backup is taken from the primary.


Automated backups might occasionally be skipped if the DB instance or cluster has a heavy workload at
the time a backup is supposed to start. If a backup is skipped, you can still do a point-in-time-recovery
(PITR), and a backup is still attempted during the next backup window. For more information on PITR,
see Restoring a DB instance to a specified time (p. 660).

If you don't specify a preferred backup window when you create the DB instance or Multi-AZ DB cluster,
Amazon RDS assigns a default 30-minute backup window. This window is selected at random from an 8-
hour block of time for each AWS Region. The following table lists the time blocks for each AWS Region
from which the default backup windows are assigned.

Region Name                 Region            Time Block

US East (Ohio)              us-east-2         03:00–11:00 UTC
US East (N. Virginia)       us-east-1         03:00–11:00 UTC
US West (N. California)     us-west-1         06:00–14:00 UTC
US West (Oregon)            us-west-2         06:00–14:00 UTC
Africa (Cape Town)          af-south-1        03:00–11:00 UTC
Asia Pacific (Hong Kong)    ap-east-1         06:00–14:00 UTC
Asia Pacific (Hyderabad)    ap-south-2        06:30–14:30 UTC
Asia Pacific (Jakarta)      ap-southeast-3    08:00–16:00 UTC
Asia Pacific (Melbourne)    ap-southeast-4    11:00–19:00 UTC
Asia Pacific (Mumbai)       ap-south-1        16:30–00:30 UTC
Asia Pacific (Osaka)        ap-northeast-3    00:00–08:00 UTC
Asia Pacific (Seoul)        ap-northeast-2    13:00–21:00 UTC
Asia Pacific (Singapore)    ap-southeast-1    14:00–22:00 UTC
Asia Pacific (Sydney)       ap-southeast-2    12:00–20:00 UTC
Asia Pacific (Tokyo)        ap-northeast-1    13:00–21:00 UTC
Canada (Central)            ca-central-1      03:00–11:00 UTC
China (Beijing)             cn-north-1        06:00–14:00 UTC
China (Ningxia)             cn-northwest-1    06:00–14:00 UTC
Europe (Frankfurt)          eu-central-1      20:00–04:00 UTC
Europe (Ireland)            eu-west-1         22:00–06:00 UTC
Europe (London)             eu-west-2         22:00–06:00 UTC
Europe (Milan)              eu-south-1        02:00–10:00 UTC
Europe (Paris)              eu-west-3         07:29–14:29 UTC
Europe (Spain)              eu-south-2        02:00–10:00 UTC
Europe (Stockholm)          eu-north-1        23:00–07:00 UTC
Europe (Zurich)             eu-central-2      02:00–10:00 UTC
Israel (Tel Aviv)           il-central-1      03:00–11:00 UTC
Middle East (Bahrain)       me-south-1        06:00–14:00 UTC
Middle East (UAE)           me-central-1      05:00–13:00 UTC
South America (São Paulo)   sa-east-1         23:00–07:00 UTC
AWS GovCloud (US-East)      us-gov-east-1     17:00–01:00 UTC
AWS GovCloud (US-West)      us-gov-west-1     06:00–14:00 UTC
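If the assigned default window doesn't suit your workload, you can set an explicit window. The following AWS CLI sketch assigns a 30-minute backup window, in UTC, to a hypothetical DB instance named mydbinstance; the window you choose must not overlap the maintenance window.

aws rds modify-db-instance \
  --db-instance-identifier mydbinstance \
  --preferred-backup-window 04:00-04:30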

Backup retention period


You can set the backup retention period when you create a DB instance or Multi-AZ DB cluster. If you
don't set the backup retention period, the default backup retention period is one day if you create the DB
instance using the Amazon RDS API or the AWS CLI. The default backup retention period is seven days if
you create the DB instance using the console.

After you create a DB instance or cluster, you can modify the backup retention period. You can set the
backup retention period of a DB instance to between 0 and 35 days. Setting the backup retention period
to 0 disables automated backups. You can set the backup retention period of a Multi-AZ DB cluster to
between 1 and 35 days. Manual snapshot limits (100 per Region) don't apply to automated backups.

Automated backups aren't created while a DB instance or cluster is stopped. Backups can be retained
longer than the backup retention period if a DB instance has been stopped. RDS doesn't include time
spent in the stopped state when the backup retention window is calculated.
Important
An outage occurs if you change the backup retention period from 0 to a nonzero value or from a
nonzero value to 0. This applies to both Single-AZ and Multi-AZ DB instances.
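To confirm the backup retention period that is currently in effect, you can query the DB instance with the AWS CLI. This sketch assumes a DB instance named mydbinstance.

aws rds describe-db-instances \
  --db-instance-identifier mydbinstance \
  --query 'DBInstances[0].BackupRetentionPeriod'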

Enabling automated backups


If your DB instance doesn't have automated backups enabled, you can enable them at any time. You
enable automated backups by setting the backup retention period to a positive nonzero value. When
automated backups are turned on, your DB instance is taken offline and a backup is immediately created.
Note
If you manage your backups in AWS Backup, you can't enable automated backups. For more
information, see Using AWS Backup to manage automated backups (p. 599).

Console
To enable automated backups immediately

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.


2. In the navigation pane, choose Databases, and then choose the DB instance or Multi-AZ DB cluster
that you want to modify.
3. Choose Modify.
4. For Backup retention period, choose a positive nonzero value, for example 3 days.
5. Choose Continue.
6. Choose Apply immediately.
7. Choose Modify DB instance or Modify cluster to save your changes and enable automated backups.

AWS CLI
To enable automated backups, use the AWS CLI modify-db-instance or modify-db-cluster
command.

Include the following parameters:

• --db-instance-identifier (or --db-cluster-identifier for a Multi-AZ DB cluster)


• --backup-retention-period
• --apply-immediately or --no-apply-immediately

In the following example, we enable automated backups by setting the backup retention period to three
days. The changes are applied immediately.

Example

For Linux, macOS, or Unix:

aws rds modify-db-instance \


--db-instance-identifier mydbinstance \
--backup-retention-period 3 \
--apply-immediately

For Windows:

aws rds modify-db-instance ^


--db-instance-identifier mydbinstance ^
--backup-retention-period 3 ^
--apply-immediately

RDS API
To enable automated backups, use the RDS API ModifyDBInstance or ModifyDBCluster operation
with the following required parameters:

• DBInstanceIdentifier or DBClusterIdentifier
• BackupRetentionPeriod

Viewing automated backups


To view your automated backups, choose Automated backups in the navigation pane. To view
individual snapshots associated with an automated backup, choose Snapshots in the navigation pane.
Alternatively, you can describe individual snapshots associated with an automated backup. From there,
you can restore a DB instance directly from one of those snapshots.


To describe the automated backups for your existing DB instances using the AWS CLI, use one of the
following commands:

aws rds describe-db-instance-automated-backups --db-instance-identifier DBInstanceIdentifier

or

aws rds describe-db-instance-automated-backups --dbi-resource-id DbiResourceId

To describe the retained automated backups for your existing DB instances using the RDS API, call the
DescribeDBInstanceAutomatedBackups action with one of the following parameters:

• DBInstanceIdentifier
• DbiResourceId
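For example, the following AWS CLI sketch lists the individual automated snapshots associated with a hypothetical DB instance named mydbinstance.

aws rds describe-db-snapshots \
  --db-instance-identifier mydbinstance \
  --snapshot-type automated \
  --query 'DBSnapshots[].[DBSnapshotIdentifier,SnapshotCreateTime,Status]' \
  --output table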

Retaining automated backups


Note
You can only retain automated backups of DB instances, not Multi-AZ DB clusters.

When you delete a DB instance, you can choose to retain automated backups. Automated backups can be
retained for a number of days equal to the backup retention period configured for the DB instance at the
time when you delete it.

Retained automated backups contain system snapshots and transaction logs from a DB instance. They
also include your DB instance properties like allocated storage and DB instance class, which are required
to restore it to an active instance.

Retained automated backups and manual snapshots incur billing charges until they're deleted. For more
information, see Retention costs (p. 596).

You can retain automated backups for RDS instances running the MySQL, MariaDB, PostgreSQL, Oracle,
and Microsoft SQL Server engines.

You can restore or remove retained automated backups using the AWS Management Console, RDS API,
and AWS CLI.

Topics
• Retention period (p. 595)
• Viewing retained backups (p. 596)
• Restoration (p. 596)
• Retention costs (p. 596)
• Limitations (p. 596)

Retention period
The system snapshots and transaction logs in a retained automated backup expire the same way that
they expire for the source DB instance. Because there are no new snapshots or logs created for this
instance, the retained automated backups eventually expire completely. Effectively, they live as long as
their last system snapshot would have, based on the retention period setting that the source instance
had when you deleted it. The system removes retained automated backups after their last system
snapshot expires.


You can remove a retained automated backup in the same way that you can delete a DB instance.
You can remove retained automated backups using the console or the RDS API operation
DeleteDBInstanceAutomatedBackup.

Final snapshots are independent of retained automated backups. We strongly suggest that you take
a final snapshot even if you retain automated backups because the retained automated backups
eventually expire. The final snapshot doesn't expire.

Viewing retained backups


To view your retained automated backups, choose Automated backups in the navigation pane, then
choose Retained. To view individual snapshots associated with a retained automated backup, choose
Snapshots in the navigation pane. Alternatively, you can describe individual snapshots associated with
a retained automated backup. From there, you can restore a DB instance directly from one of those
snapshots.

To describe your retained automated backups using the AWS CLI, use the following command:

aws rds describe-db-instance-automated-backups --dbi-resource-id DbiResourceId

To describe your retained automated backups using the RDS API, call the
DescribeDBInstanceAutomatedBackups action with the DbiResourceId parameter.

Restoration
For information on restoring DB instances from automated backups, see Restoring a DB instance to a
specified time (p. 660).

Retention costs
The cost of a retained automated backup is the cost of total storage of the system snapshots that are
associated with it. There is no additional charge for transaction logs or instance metadata. All other
pricing rules for backups apply to restorable instances.

For example, suppose that your total allocated storage of running instances is 100 GB. Suppose also
that you have 50 GB of manual snapshots plus 75 GB of system snapshots associated with a retained
automated backup. In this case, you are charged only for the additional 25 GB of backup storage, like
this: (50 GB + 75 GB) – 100 GB = 25 GB.

Limitations
The following limitations apply to retained automated backups:

• The maximum number of retained automated backups in one AWS Region is 40. Retained automated
backups aren't included in the DB instance quota, so you can have 40 running DB instances and an
additional 40 retained automated backups at the same time.
• Retained automated backups don't contain information about parameters or option groups.
• You can restore a deleted instance to a point in time that is within the retention period at the time of
deletion.
• You can't modify a retained automated backup. That's because it consists of system backups,
transaction logs, and the DB instance properties that existed at the time that you deleted the source
instance.

Deleting retained automated backups


You can delete retained automated backups when they are no longer needed.


Console
To delete a retained automated backup

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Automated backups.
3. On the Retained tab, choose the retained automated backup that you want to delete.
4. For Actions, choose Delete.
5. On the confirmation page, enter delete me and choose Delete.

AWS CLI
You can delete a retained automated backup by using the AWS CLI command delete-db-instance-
automated-backup with the following option:

• --dbi-resource-id – The resource identifier for the source DB instance.

You can find the resource identifier for the source DB instance of a retained automated backup by
running the AWS CLI command describe-db-instance-automated-backups.
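For example, a query like the following lists the retained automated backups in the current Region together with their source DB instance resource identifiers. The selected output fields are an illustration, not the full response.

aws rds describe-db-instance-automated-backups \
  --query 'DBInstanceAutomatedBackups[].[DBInstanceIdentifier,DbiResourceId,Status]' \
  --output table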

Example

The following example deletes the retained automated backup with source DB instance resource
identifier db-123ABCEXAMPLE.

For Linux, macOS, or Unix:

aws rds delete-db-instance-automated-backup \


--dbi-resource-id db-123ABCEXAMPLE

For Windows:

aws rds delete-db-instance-automated-backup ^


--dbi-resource-id db-123ABCEXAMPLE

RDS API
You can delete a retained automated backup by using the Amazon RDS API operation
DeleteDBInstanceAutomatedBackup with the following parameter:

• DbiResourceId – The resource identifier for the source DB instance.

You can find the resource identifier for the source DB instance of a retained automated backup using
the Amazon RDS API operation DescribeDBInstanceAutomatedBackups.

Disabling automated backups


You might want to temporarily disable automated backups in certain situations, for example while
loading large amounts of data.
Important
We highly discourage disabling automated backups because it disables point-in-time recovery.
Disabling automated backups for a DB instance or Multi-AZ DB cluster deletes all existing
automated backups for the database. If you disable and then re-enable automated backups, you
can restore starting only from the time you re-enabled automated backups.

Console
To disable automated backups immediately

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance or Multi-AZ DB cluster
that you want to modify.
3. Choose Modify.
4. For Backup retention period, choose 0 days.
5. Choose Continue.
6. Choose Apply immediately.
7. Choose Modify DB instance or Modify cluster to save your changes and disable automated backups.

AWS CLI
To disable automated backups immediately, use the modify-db-instance or modify-db-cluster command
and set the backup retention period to 0 with --apply-immediately.

Example

The following example immediately disables automatic backups on a Multi-AZ DB cluster.

For Linux, macOS, or Unix:

aws rds modify-db-cluster \


--db-cluster-identifier mydbcluster \
--backup-retention-period 0 \
--apply-immediately

For Windows:

aws rds modify-db-cluster ^


--db-cluster-identifier mydbcluster ^
--backup-retention-period 0 ^
--apply-immediately

To know when the modification is in effect, call describe-db-instances for the DB instance (or
describe-db-clusters for a Multi-AZ DB cluster) until the value for backup retention period is 0 and
mydbcluster status is available.

aws rds describe-db-clusters --db-cluster-identifier mydbcluster
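For example, a query like the following returns just the cluster status and backup retention period, which is convenient to poll from a script.

aws rds describe-db-clusters \
  --db-cluster-identifier mydbcluster \
  --query 'DBClusters[0].[Status,BackupRetentionPeriod]' \
  --output text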

RDS API
To disable automated backups immediately, call the ModifyDBInstance or ModifyDBCluster operation
with the following parameters:

• DBInstanceIdentifier = mydbinstance (or DBClusterIdentifier = mydbcluster)


• BackupRetentionPeriod = 0


Example

https://rds.amazonaws.com/
?Action=ModifyDBInstance
&DBInstanceIdentifier=mydbinstance
&BackupRetentionPeriod=0
&SignatureVersion=2
&SignatureMethod=HmacSHA256
&Timestamp=2009-10-14T17%3A48%3A21.746Z
&AWSAccessKeyId=<AWS Access Key ID>
&Signature=<Signature>

Using AWS Backup to manage automated backups


AWS Backup is a fully managed backup service that makes it easy to centralize and automate the backup
of data across AWS services in the cloud and on premises. You can manage backups of your Amazon RDS
databases in AWS Backup.

To enable backups in AWS Backup, use resource tagging to associate your database with a backup plan.
For more information, see Using tags to enable backups in AWS Backup (p. 468).
Note
Backups managed by AWS Backup are considered manual DB snapshots, but don't count toward
the DB snapshot quota for RDS. Backups that were created with AWS Backup have names
ending in awsbackup:backup-job-number.

For more information about AWS Backup, see the AWS Backup Developer Guide.
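For example, if your backup plan assigns resources by tag, you can tag a DB instance from the AWS CLI. The tag key and value (backup-plan=daily), the Region, and the account ID below are placeholders; use whatever tag your backup plan's resource assignment expects.

aws rds add-tags-to-resource \
  --resource-name arn:aws:rds:us-west-2:123456789012:db:mydbinstance \
  --tags Key=backup-plan,Value=daily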

To view backups managed by AWS Backup

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the Backup service tab.

Your AWS Backup backups are listed under Backup service snapshots.

Automated backups with unsupported MySQL storage engines
For the MySQL DB engine, automated backups are only supported for the InnoDB storage engine.
Using these features with other MySQL storage engines, including MyISAM, can lead to unreliable
behavior when you're restoring from backups. Specifically, since storage engines like MyISAM don't
support reliable crash recovery, your tables can be corrupted in the event of a crash. For this reason, we
encourage you to use the InnoDB storage engine.

• To convert existing MyISAM tables to InnoDB tables, you can use the ALTER TABLE command, for
example: ALTER TABLE table_name ENGINE=innodb, ALGORITHM=COPY;
• If you choose to use MyISAM, you can attempt to manually repair tables that become damaged after
a crash by using the REPAIR TABLE command. For more information, see REPAIR TABLE statement in the
MySQL documentation. However, as noted in the MySQL documentation, there is a good chance that
you might not be able to recover all your data.
• If you want to take a snapshot of your MyISAM tables before restoring, follow these steps:
1. Stop all activity to your MyISAM tables (that is, close all sessions).


You can close all sessions by calling the mysql.rds_kill command for each process that is returned
from the SHOW FULL PROCESSLIST command.
2. Lock and flush each of your MyISAM tables. For example, the following commands lock and flush
two tables named myisam_table1 and myisam_table2:

mysql> FLUSH TABLES myisam_table1, myisam_table2 WITH READ LOCK;

3. Create a snapshot of your DB instance or Multi-AZ DB cluster. When the snapshot has completed,
release the locks and resume activity on the MyISAM tables. You can release the locks on your tables
using the following command:

mysql> UNLOCK TABLES;

These steps force MyISAM to flush data stored in memory to disk, which ensures a clean start when
you restore from a DB snapshot. For more information on creating a DB snapshot, see Creating a DB
snapshot (p. 613).

Automated backups with unsupported MariaDB storage engines
For the MariaDB DB engine, automated backups are only supported with the InnoDB storage engine.
Using these features with other MariaDB storage engines, including Aria, can lead to unreliable behavior
when you're restoring from backups. Even though Aria is a crash-resistant alternative to MyISAM, your
tables can still be corrupted in the event of a crash. For this reason, we encourage you to use the InnoDB
storage engine.

• To convert existing Aria tables to InnoDB tables, you can use the ALTER TABLE command. For
example: ALTER TABLE table_name ENGINE=innodb, ALGORITHM=COPY;
• If you choose to use Aria, you can attempt to manually repair tables that become damaged after a
crash by using the REPAIR TABLE command. For more information, see https://mariadb.com/kb/en/mariadb/repair-table/.
• If you want to take a snapshot of your Aria tables before restoring, follow these steps:
1. Stop all activity to your Aria tables (that is, close all sessions).
2. Lock and flush each of your Aria tables.
3. Create a snapshot of your DB instance or Multi-AZ DB cluster. When the snapshot has completed,
release the locks and resume activity on the Aria tables. These steps force Aria to flush data stored
in memory to disk, thereby ensuring a clean start when you restore from a DB snapshot.

Backing up and restoring a DB instance


This section shows how to back up and restore a DB instance.

Topics
• Replicating automated backups to another AWS Region (p. 602)
• Creating a DB snapshot (p. 613)
• Restoring from a DB snapshot (p. 615)
• Copying a DB snapshot (p. 619)
• Sharing a DB snapshot (p. 633)
• Exporting DB snapshot data to Amazon S3 (p. 642)


• Restoring a DB instance to a specified time (p. 660)


• Deleting a DB snapshot (p. 663)
• Tutorial: Restore an Amazon RDS DB instance from a DB snapshot (p. 665)


Replicating automated backups to another AWS Region
For added disaster recovery capability, you can configure your Amazon RDS database instance to
replicate snapshots and transaction logs to a destination AWS Region of your choice. When backup
replication is configured for a DB instance, RDS initiates a cross-Region copy of all snapshots and
transaction logs as soon as they are ready on the DB instance.

DB snapshot copy charges apply to the data transfer. After the DB snapshot is copied, standard charges
apply to storage in the destination Region. For more details, see RDS Pricing.

For an example of using backup replication, see the AWS online tech talk Managed Disaster Recovery
with Amazon RDS for Oracle Cross-Region Automated Backups.

Topics
• Region and version availability (p. 602)
• Source and destination AWS Region support (p. 602)
• Enabling cross-Region automated backups (p. 604)
• Finding information about replicated backups (p. 606)
• Restoring to a specified time from a replicated backup (p. 609)
• Stopping automated backup replication (p. 610)
• Deleting replicated backups (p. 611)

Region and version availability


Feature availability and support varies across specific versions of each database engine, and across AWS
Regions. For more information on version and Region availability with cross-Region automated backups,
see Cross-Region automated backups (p. 118).

Source and destination AWS Region support


Backup replication is supported between the following AWS Regions.

Source Region – Destination Regions available

Asia Pacific (Mumbai) – Asia Pacific (Singapore); US East (N. Virginia), US East (Ohio), US West (Oregon)
Asia Pacific (Osaka) – Asia Pacific (Tokyo)
Asia Pacific (Seoul) – Asia Pacific (Singapore), Asia Pacific (Tokyo); US East (N. Virginia), US East (Ohio), US West (Oregon)
Asia Pacific (Singapore) – Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Tokyo); US East (N. Virginia), US East (Ohio), US West (Oregon)
Asia Pacific (Sydney) – Asia Pacific (Singapore); US East (N. Virginia), US West (N. California), US West (Oregon)
Asia Pacific (Tokyo) – Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore); US East (N. Virginia), US East (Ohio), US West (Oregon)
Canada (Central) – Europe (Ireland); US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon)
China (Beijing) – China (Ningxia)
China (Ningxia) – China (Beijing)
Europe (Frankfurt) – Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm); US East (N. Virginia), US East (Ohio), US West (Oregon)
Europe (Ireland) – Canada (Central); Europe (Frankfurt), Europe (London), Europe (Paris), Europe (Stockholm); US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon)
Europe (London) – Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (Stockholm); US East (N. Virginia)
Europe (Paris) – Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm); US East (N. Virginia)
Europe (Stockholm) – Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris); US East (N. Virginia)
South America (São Paulo) – US East (N. Virginia), US East (Ohio)
AWS GovCloud (US-East) – AWS GovCloud (US-West)
AWS GovCloud (US-West) – AWS GovCloud (US-East)
US East (N. Virginia) – Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo); Canada (Central); Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm); South America (São Paulo); US East (Ohio), US West (N. California), US West (Oregon)
US East (Ohio) – Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Tokyo); Canada (Central); Europe (Frankfurt), Europe (Ireland); South America (São Paulo); US East (N. Virginia), US West (N. California), US West (Oregon)
US West (N. California) – Asia Pacific (Sydney); Canada (Central); Europe (Ireland); US East (N. Virginia), US East (Ohio), US West (Oregon)
US West (Oregon) – Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo); Canada (Central); Europe (Frankfurt), Europe (Ireland); US East (N. Virginia), US East (Ohio), US West (N. California)

You can also use the describe-source-regions AWS CLI command to find out which AWS
Regions can replicate to each other. For more information, see Finding information about replicated
backups (p. 606).

Enabling cross-Region automated backups


You can enable backup replication on new or existing DB instances using the Amazon RDS console. You
can also use the start-db-instance-automated-backups-replication AWS CLI command or the
StartDBInstanceAutomatedBackupsReplication RDS API operation. You can replicate up to 20
backups to each destination AWS Region for each AWS account.
Note
To be able to replicate automated backups, make sure to enable them. For more information,
see Enabling automated backups (p. 593).

Console

You can enable backup replication for a new or existing DB instance:

• For a new DB instance, enable it when you launch the instance. For more information, see Settings for
DB instances (p. 308).
• For an existing DB instance, use the following procedure.

To enable backup replication for an existing DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Automated backups.
3. On the Current Region tab, choose the DB instance for which you want to enable backup
replication.
4. For Actions, choose Manage cross-Region replication.
5. Under Backup replication, choose Enable replication to another AWS Region.
6. Choose the Destination Region.
7. Choose the Replicated backup retention period.
8. If you've enabled encryption on the source DB instance, choose the AWS KMS key for encrypting the
backups.


9. Choose Save.

In the source Region, replicated backups are listed on the Current Region tab of the Automated backups
page. In the destination Region, replicated backups are listed on the Replicated backups tab of the
Automated backups page.

AWS CLI

Enable backup replication by using the start-db-instance-automated-backups-replication AWS CLI command.

The following CLI example replicates automated backups from a DB instance in the US West (Oregon)
Region to the US East (N. Virginia) Region. It also encrypts the replicated backups, using an AWS KMS key
in the destination Region.

To enable backup replication

• Run one of the following commands.

For Linux, macOS, or Unix:

aws rds start-db-instance-automated-backups-replication \


--region us-east-1 \
--source-db-instance-arn "arn:aws:rds:us-west-2:123456789012:db:mydatabase" \
--kms-key-id "arn:aws:kms:us-east-1:123456789012:key/AKIAIOSFODNN7EXAMPLE" \
--backup-retention-period 7

For Windows:

aws rds start-db-instance-automated-backups-replication ^


--region us-east-1 ^
--source-db-instance-arn "arn:aws:rds:us-west-2:123456789012:db:mydatabase" ^
--kms-key-id "arn:aws:kms:us-east-1:123456789012:key/AKIAIOSFODNN7EXAMPLE" ^
--backup-retention-period 7

The --source-region option is required when you encrypt backups between the AWS GovCloud
(US-East) and AWS GovCloud (US-West) Regions. For --source-region, specify the AWS Region of
the source DB instance.

If --source-region isn't specified, make sure to specify a --pre-signed-url value. A presigned URL is a URL that contains a Signature Version 4 signed request for the start-db-instance-automated-backups-replication command that is called in the source AWS Region. To learn more about the --pre-signed-url option, see start-db-instance-automated-backups-replication in the AWS CLI Command Reference.

RDS API

Enable backup replication by using the StartDBInstanceAutomatedBackupsReplication RDS API operation with the following parameters:

• Region
• SourceDBInstanceArn
• BackupRetentionPeriod
• KmsKeyId (optional)
• PreSignedUrl (required if you use KmsKeyId)


Note
If you encrypt the backups, you must also include a presigned URL. For more information on
presigned URLs, see Authenticating Requests: Using Query Parameters (AWS Signature Version
4) in the Amazon Simple Storage Service API Reference and Signature Version 4 signing process in
the AWS General Reference.

Finding information about replicated backups


You can use the following CLI commands to find information about replicated backups:

• describe-source-regions
• describe-db-instances
• describe-db-instance-automated-backups

The following describe-source-regions example lists the source AWS Regions from which
automated backups can be replicated to the US West (Oregon) destination Region.

To show information about source Regions

• Run the following command.

aws rds describe-source-regions --region us-west-2

The output shows that backups can be replicated from US East (N. Virginia), but not from US East (Ohio)
or US West (N. California), into US West (Oregon).

{
    "SourceRegions": [
        ...
        {
            "RegionName": "us-east-1",
            "Endpoint": "https://rds.us-east-1.amazonaws.com",
            "Status": "available",
            "SupportsDBInstanceAutomatedBackupsReplication": true
        },
        {
            "RegionName": "us-east-2",
            "Endpoint": "https://rds.us-east-2.amazonaws.com",
            "Status": "available",
            "SupportsDBInstanceAutomatedBackupsReplication": false
        },
        {
            "RegionName": "us-west-1",
            "Endpoint": "https://rds.us-west-1.amazonaws.com",
            "Status": "available",
            "SupportsDBInstanceAutomatedBackupsReplication": false
        }
    ]
}

The following describe-db-instances example shows the automated backups for a DB instance.

To show the replicated backups for a DB instance

• Run one of the following commands.

For Linux, macOS, or Unix:


aws rds describe-db-instances \


--db-instance-identifier mydatabase

For Windows:

aws rds describe-db-instances ^


--db-instance-identifier mydatabase

The output includes the replicated backups.

{
"DBInstances": [
{
"StorageEncrypted": false,
"Endpoint": {
"HostedZoneId": "Z1PVIF0B656C1W",
"Port": 1521,
...

"BackupRetentionPeriod": 7,
"DBInstanceAutomatedBackupsReplications": [{"DBInstanceAutomatedBackupsArn":
"arn:aws:rds:us-east-1:123456789012:auto-backup:ab-L2IJCEXJP7XQ7HOJ4SIEXAMPLE"}]
}
]
}

The following describe-db-instance-automated-backups example shows the automated backups for a DB instance.

To show automated backups for a DB instance

• Run one of the following commands.

For Linux, macOS, or Unix:

aws rds describe-db-instance-automated-backups \


--db-instance-identifier mydatabase

For Windows:

aws rds describe-db-instance-automated-backups ^


--db-instance-identifier mydatabase

The output shows the source DB instance and automated backups in US West (Oregon), with backups
replicated to US East (N. Virginia).

{
"DBInstanceAutomatedBackups": [
{
"DBInstanceArn": "arn:aws:rds:us-west-2:868710585169:db:mydatabase",
"DbiResourceId": "db-L2IJCEXJP7XQ7HOJ4SIEXAMPLE",
"DBInstanceAutomatedBackupsArn": "arn:aws:rds:us-west-2:123456789012:auto-
backup:ab-L2IJCEXJP7XQ7HOJ4SIEXAMPLE",
"BackupRetentionPeriod": 7,
"DBInstanceAutomatedBackupsReplications": [{"DBInstanceAutomatedBackupsArn":
"arn:aws:rds:us-east-1:123456789012:auto-backup:ab-L2IJCEXJP7XQ7HOJ4SIEXAMPLE"}]

607
Amazon Relational Database Service User Guide
Cross-Region automated backups

"Region": "us-west-2",
"DBInstanceIdentifier": "mydatabase",
"RestoreWindow": {
"EarliestTime": "2020-10-26T01:09:07Z",
"LatestTime": "2020-10-31T19:09:53Z",
}
...
}
]
}

The following describe-db-instance-automated-backups example uses the --db-instance-automated-backups-arn option to show the replicated backups in the destination Region.

To show replicated backups

• Run one of the following commands.

For Linux, macOS, or Unix:

aws rds describe-db-instance-automated-backups \


--db-instance-automated-backups-arn "arn:aws:rds:us-east-1:123456789012:auto-backup:ab-
L2IJCEXJP7XQ7HOJ4SIEXAMPLE"

For Windows:

aws rds describe-db-instance-automated-backups ^


--db-instance-automated-backups-arn "arn:aws:rds:us-east-1:123456789012:auto-backup:ab-
L2IJCEXJP7XQ7HOJ4SIEXAMPLE"

The output shows the source DB instance in US West (Oregon), with replicated backups in US East (N.
Virginia).

{
"DBInstanceAutomatedBackups": [
{
"DBInstanceArn": "arn:aws:rds:us-west-2:868710585169:db:mydatabase",
"DbiResourceId": "db-L2IJCEXJP7XQ7HOJ4SIEXAMPLE",
"DBInstanceAutomatedBackupsArn": "arn:aws:rds:us-east-1:123456789012:auto-
backup:ab-L2IJCEXJP7XQ7HOJ4SIEXAMPLE",
"Region": "us-west-2",
"DBInstanceIdentifier": "mydatabase",
"RestoreWindow": {
"EarliestTime": "2020-10-26T01:09:07Z",
"LatestTime": "2020-10-31T19:01:23Z"
},
"AllocatedStorage": 50,
"BackupRetentionPeriod": 7,
"Status": "replicating",
"Port": 1521,
...
}
]
}


Restoring to a specified time from a replicated backup


You can restore a DB instance to a specific point in time from a replicated backup using the Amazon RDS
console. You can also use the restore-db-instance-to-point-in-time AWS CLI command or the
RestoreDBInstanceToPointInTime RDS API operation.

For general information on point-in-time recovery (PITR), see Restoring a DB instance to a specified
time (p. 660).
Note
On RDS for SQL Server, option groups aren't copied across AWS Regions when automated
backups are replicated. If you've associated a custom option group with your RDS for SQL Server
DB instance, you can re-create that option group in the destination Region. Then restore the
DB instance in the destination Region and associate the custom option group with it. For more
information, see Working with option groups (p. 331).

Console

To restore a DB instance to a specified time from a replicated backup

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Choose the destination Region (where backups are replicated to) from the Region selector.
3. In the navigation pane, choose Automated backups.
4. On the Replicated backups tab, choose the DB instance that you want to restore.
5. For Actions, choose Restore to point in time.
6. Choose Latest restorable time to restore to the latest possible time, or choose Custom to choose a
time.

If you chose Custom, enter the date and time that you want to restore the instance to.
Note
Times are shown in your local time zone, which is indicated by an offset from Coordinated
Universal Time (UTC). For example, UTC-5 is Eastern Standard Time/Central Daylight Time.
7. For DB instance identifier, enter the name of the target restored DB instance.
8. (Optional) Choose other options as needed, such as enabling autoscaling.
9. Choose Restore to point in time.

AWS CLI

Use the restore-db-instance-to-point-in-time AWS CLI command to create a new DB instance.

To restore a DB instance to a specified time from a replicated backup

• Run one of the following commands.

For Linux, macOS, or Unix:

aws rds restore-db-instance-to-point-in-time \


--source-db-instance-automated-backups-arn "arn:aws:rds:us-
east-1:123456789012:auto-backup:ab-L2IJCEXJP7XQ7HOJ4SIEXAMPLE" \
--target-db-instance-identifier mytargetdbinstance \
--restore-time 2020-10-14T23:45:00.000Z

For Windows:


aws rds restore-db-instance-to-point-in-time ^


--source-db-instance-automated-backups-arn "arn:aws:rds:us-
east-1:123456789012:auto-backup:ab-L2IJCEXJP7XQ7HOJ4SIEXAMPLE" ^
--target-db-instance-identifier mytargetdbinstance ^
--restore-time 2020-10-14T23:45:00.000Z

RDS API

To restore a DB instance to a specified time, call the RestoreDBInstanceToPointInTime Amazon RDS API operation with the following parameters:

• SourceDBInstanceAutomatedBackupsArn
• TargetDBInstanceIdentifier
• RestoreTime

Stopping automated backup replication


You can stop backup replication for DB instances using the Amazon RDS console. You can also
use the stop-db-instance-automated-backups-replication AWS CLI command or the
StopDBInstanceAutomatedBackupsReplication RDS API operation.

Replicated backups are retained, subject to the backup retention period set when they were created.

Console

Stop backup replication from the Automated backups page in the source Region.

To stop backup replication to an AWS Region

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Choose the source Region from the Region selector.
3. In the navigation pane, choose Automated backups.
4. On the Current Region tab, choose the DB instance for which you want to stop backup replication.
5. For Actions, choose Manage cross-Region replication.
6. Under Backup replication, clear the Enable replication to another AWS Region check box.
7. Choose Save.

Replicated backups are listed on the Retained tab of the Automated backups page in the destination
Region.

AWS CLI

Stop backup replication by using the stop-db-instance-automated-backups-replication AWS CLI command.

The following CLI example stops automated backups of a DB instance from replicating in the US West
(Oregon) Region.

To stop backup replication

• Run one of the following commands.


For Linux, macOS, or Unix:

aws rds stop-db-instance-automated-backups-replication \


--region us-east-1 \
--source-db-instance-arn "arn:aws:rds:us-west-2:123456789012:db:mydatabase"

For Windows:

aws rds stop-db-instance-automated-backups-replication ^


--region us-east-1 ^
--source-db-instance-arn "arn:aws:rds:us-west-2:123456789012:db:mydatabase"

RDS API

Stop backup replication by using the StopDBInstanceAutomatedBackupsReplication RDS API operation with the following parameters:

• Region
• SourceDBInstanceArn

Deleting replicated backups


You can delete replicated backups for DB instances using the Amazon RDS console. You
can also use the delete-db-instance-automated-backups AWS CLI command or the
DeleteDBInstanceAutomatedBackup RDS API operation.

Console

Delete replicated backups in the destination Region from the Automated backups page.

To delete replicated backups

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Choose the destination Region from the Region selector.
3. In the navigation pane, choose Automated backups.
4. On the Replicated backups tab, choose the DB instance for which you want to delete the replicated
backups.
5. For Actions, choose Delete.
6. On the confirmation page, enter delete me and choose Delete.

AWS CLI

Delete replicated backups by using the delete-db-instance-automated-backup AWS CLI command.

You can use the describe-db-instances CLI command to find the Amazon Resource Names
(ARNs) of the replicated backups. For more information, see Finding information about replicated
backups (p. 606).

To delete replicated backups

• Run one of the following commands.


For Linux, macOS, or Unix:

aws rds delete-db-instance-automated-backup \


--db-instance-automated-backups-arn "arn:aws:rds:us-east-1:123456789012:auto-backup:ab-
L2IJCEXJP7XQ7HOJ4SIEXAMPLE"

For Windows:

aws rds delete-db-instance-automated-backup ^


--db-instance-automated-backups-arn "arn:aws:rds:us-east-1:123456789012:auto-backup:ab-
L2IJCEXJP7XQ7HOJ4SIEXAMPLE"

RDS API

Delete replicated backups by using the DeleteDBInstanceAutomatedBackup RDS API operation with
the DBInstanceAutomatedBackupsArn parameter.


Creating a DB snapshot
Amazon RDS creates a storage volume snapshot of your DB instance, backing up the entire DB instance
and not just individual databases. Creating this DB snapshot on a Single-AZ DB instance results in a brief
I/O suspension that can last from a few seconds to a few minutes, depending on the size and class of
your DB instance. For MariaDB, MySQL, Oracle, and PostgreSQL, I/O activity is not suspended on your
primary during backup for Multi-AZ deployments, because the backup is taken from the standby. For
SQL Server, I/O activity is suspended briefly during backup for Multi-AZ deployments.

When you create a DB snapshot, you need to identify which DB instance you are going to back up, and
then give your DB snapshot a name so you can restore from it later. The amount of time it takes to create
a snapshot varies with the size of your databases. Since the snapshot includes the entire storage volume,
the size of files, such as temporary files, also affects the amount of time it takes to create the snapshot.
Note
Your DB instance must be in the available state to take a DB snapshot.
For PostgreSQL DB instances, data in unlogged tables might not be restored from snapshots.
For more information, see Best practices for working with PostgreSQL (p. 294).

Unlike automated backups, manual snapshots aren't subject to the backup retention period. Snapshots
don't expire.

For very long-term backups of MariaDB, MySQL, and PostgreSQL data, we recommend exporting
snapshot data to Amazon S3. If the major version of your DB engine is no longer supported, you can't
restore to that version from a snapshot. For more information, see Exporting DB snapshot data to
Amazon S3 (p. 642).

You can create a DB snapshot using the AWS Management Console, the AWS CLI, or the RDS API.

Console
To create a DB snapshot

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. In the list of DB instances, choose the DB instance for which you want to take a snapshot.
4. For Actions, choose Take snapshot.

The Take DB snapshot window appears.


5. Enter the name of the snapshot in the Snapshot name box.

6. Choose Take snapshot.


The Snapshots page appears, with the new DB snapshot's status shown as Creating. After its status is
Available, you can see its creation time.

AWS CLI
When you create a DB snapshot using the AWS CLI, you need to identify which DB instance you are going
to back up, and then give your DB snapshot a name so you can restore from it later. You can do this by
using the AWS CLI create-db-snapshot command with the following parameters:

• --db-instance-identifier
• --db-snapshot-identifier

In this example, you create a DB snapshot called mydbsnapshot for a DB instance called
mydbinstance.

Example

For Linux, macOS, or Unix:

aws rds create-db-snapshot \


--db-instance-identifier mydbinstance \
--db-snapshot-identifier mydbsnapshot

For Windows:

aws rds create-db-snapshot ^


--db-instance-identifier mydbinstance ^
--db-snapshot-identifier mydbsnapshot
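The snapshot is created asynchronously. If you're scripting snapshot creation, a sketch like the following blocks until the snapshot is available, assuming the db-snapshot-available waiter in your version of the AWS CLI.

aws rds wait db-snapshot-available \
  --db-snapshot-identifier mydbsnapshot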

RDS API
When you create a DB snapshot using the Amazon RDS API, you need to identify which DB instance you
are going to back up, and then give your DB snapshot a name so you can restore from it later. You can do
this by using the Amazon RDS API CreateDBSnapshot command with the following parameters:

• DBInstanceIdentifier
• DBSnapshotIdentifier


Restoring from a DB snapshot


Amazon RDS creates a storage volume snapshot of your DB instance, backing up the entire DB instance
and not just individual databases. You can create a new DB instance by restoring from a DB snapshot.
You provide the name of the DB snapshot to restore from, and then provide a name for the new DB
instance that is created from the restore. You can't restore from a DB snapshot to an existing DB
instance; a new DB instance is created when you restore.

You can use the restored DB instance as soon as its status is available. The DB instance continues to
load data in the background. This is known as lazy loading.

If you access data that hasn't been loaded yet, the DB instance immediately downloads the requested
data from Amazon S3, and then continues loading the rest of the data in the background. For more
information, see Amazon EBS snapshots.

To help mitigate the effects of lazy loading on tables to which you require quick access, you can perform
operations that involve full-table scans, such as SELECT *. This allows Amazon RDS to download all of
the backed-up table data from S3.

You can restore a DB instance and use a different storage type than the source DB snapshot. In this case,
the restoration process is slower because of the additional work required to migrate the data to the new
storage type. If you restore to or from magnetic storage, the migration process is the slowest. That's
because magnetic storage doesn't have the IOPS capability of Provisioned IOPS or General Purpose (SSD)
storage.

You can use AWS CloudFormation to restore a DB instance from a DB instance snapshot. For more
information, see AWS::RDS::DBInstance in the AWS CloudFormation User Guide.
Note
You can't restore a DB instance from a DB snapshot that is both shared and encrypted. Instead,
you can make a copy of the DB snapshot and restore the DB instance from the copy. For more
information, see Copying a DB snapshot (p. 619).
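As a minimal AWS CLI sketch of that workaround, the following copies the shared, encrypted snapshot into
your account and then restores from the copy. The snapshot ARN, KMS key alias, and instance name are
placeholders.

# Copy the shared, encrypted snapshot, specifying a KMS key that you own.
aws rds copy-db-snapshot \
    --source-db-snapshot-identifier arn:aws:rds:us-east-1:111122223333:snapshot:shared-encrypted-snapshot \
    --target-db-snapshot-identifier my-snapshot-copy \
    --kms-key-id my-kms-key

# Restore a new DB instance from your copy.
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier mynewdbinstance \
    --db-snapshot-identifier my-snapshot-copy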

Parameter group considerations


We recommend that you retain the DB parameter group for any DB snapshots you create, so that you can
associate your restored DB instance with the correct parameter group.

The default DB parameter group is associated with the restored instance, unless you choose a different
one. No custom parameter settings are available in the default parameter group.

You can specify the parameter group when you restore the DB instance.
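For example, with the AWS CLI you can pass the parameter group at restore time by using the
--db-parameter-group-name option. The group and instance names below are placeholders.

aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier mynewdbinstance \
    --db-snapshot-identifier mydbsnapshot \
    --db-parameter-group-name mycustomparametergroup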

For more information about DB parameter groups, see Working with parameter groups (p. 347).

Security group considerations


When you restore a DB instance, the default virtual private cloud (VPC), DB subnet group, and VPC
security group are associated with the restored instance, unless you choose different ones.

• If you're using the Amazon RDS console, you can specify a custom VPC security group to associate with
the instance or create a new VPC security group.
• If you're using the AWS CLI, you can specify a custom VPC security group to associate with the instance
by including the --vpc-security-group-ids option in the restore-db-instance-from-db-
snapshot command.
• If you're using the Amazon RDS API, you can include the
VpcSecurityGroupIds.VpcSecurityGroupId.N parameter in the
RestoreDBInstanceFromDBSnapshot action.


As soon as the restore is complete and your new DB instance is available, you can also change the
VPC settings by modifying the DB instance. For more information, see Modifying an Amazon RDS DB
instance (p. 401).
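For example, the following AWS CLI sketch uses the --vpc-security-group-ids option mentioned above,
together with a DB subnet group, when restoring. The subnet group name and security group ID are
placeholders.

aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier mynewdbinstance \
    --db-snapshot-identifier mydbsnapshot \
    --db-subnet-group-name mydbsubnetgroup \
    --vpc-security-group-ids sg-0123456789abcdef0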

Option group considerations


When you restore a DB instance, the default DB option group is associated with the restored DB instance
in most cases.

The exception is when the source DB instance is associated with an option group that contains a
persistent or permanent option. For example, if the source DB instance uses Oracle Transparent Data
Encryption (TDE), the restored DB instance must use an option group that has the TDE option.

If you restore a DB instance into a different VPC, you must do one of the following to assign a DB option
group:

• Assign the default option group for that VPC group to the instance.
• Assign another option group that is linked to that VPC.
• Create a new option group and assign it to the DB instance. With persistent or permanent options,
such as Oracle TDE, you must create a new option group that includes the persistent or permanent
option.

For more information about DB option groups, see Working with option groups (p. 331).

Resource tagging considerations


When you restore a DB instance from a DB snapshot, RDS checks whether you specify new tags. If yes,
the new tags are added to the restored DB instance. If there are no new tags, RDS adds the tags from the
source DB instance at the time of snapshot creation to the restored DB instance.

For more information, see Copying tags to DB instance snapshots (p. 465).

Microsoft SQL Server considerations


When you restore an RDS for Microsoft SQL Server DB snapshot to a new instance, you can always
restore to the same edition as your snapshot. In some cases, you can also change the edition of the DB
instance. The following limitations apply when you change editions:

• The DB snapshot must have enough storage allocated for the new edition.
• Only the following edition changes are supported:
• From Standard Edition to Enterprise Edition
• From Web Edition to Standard Edition or Enterprise Edition
• From Express Edition to Web Edition, Standard Edition, or Enterprise Edition

If you want to change from one edition to a new edition that isn't supported by restoring a snapshot,
you can try using the native backup and restore feature. SQL Server verifies whether your database is
compatible with the new edition based on what SQL Server features you have enabled on the database.
For more information, see Importing and exporting SQL Server databases using native backup and
restore (p. 1419).

Oracle Database considerations


If you use Oracle GoldenGate, always retain the parameter group with the compatible parameter.
When you restore a DB instance from a DB snapshot, specify a parameter group that has a matching or
greater compatible value.


If you restore a snapshot of a CDB instance, you can change the PDB name. You can't change the CDB
name, which is always RDSCDB. This CDB name is the same for all RDS instances that use a single-tenant
architecture. For more information, see Backing up and restoring a CDB (p. 1844).

Before you restore a DB snapshot, you can upgrade it to a later release. For more information, see
Upgrading an Oracle DB snapshot (p. 2111).

Restoring from a snapshot


You can restore a DB instance from a DB snapshot using the AWS Management Console, the AWS CLI, or
the RDS API.
Note
You can't reduce the amount of storage when you restore a DB instance. When you increase the
allocated storage, it must be by at least 10 percent. If you try to increase the value by less than
10 percent, you get an error. You can't increase the allocated storage when restoring RDS for
SQL Server DB instances.

Console

To restore a DB instance from a DB snapshot

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the DB snapshot that you want to restore from.
4. For Actions, choose Restore snapshot.
5. On the Restore snapshot page, for DB instance identifier, enter the name for your restored DB
instance.
6. Specify other settings, such as allocated storage size.

For information about each setting, see Settings for DB instances (p. 308).
7. Choose Restore DB instance.

AWS CLI

To restore a DB instance from a DB snapshot, use the AWS CLI command restore-db-instance-from-db-
snapshot.

In this example, you restore from a previously created DB snapshot named mydbsnapshot. You restore
to a new DB instance named mynewdbinstance. This example also sets the allocated storage size.

You can specify other settings. For information about each setting, see Settings for DB instances (p. 308).

Example

For Linux, macOS, or Unix:

aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier mynewdbinstance \
    --db-snapshot-identifier mydbsnapshot \
    --allocated-storage 100

For Windows:


aws rds restore-db-instance-from-db-snapshot ^
    --db-instance-identifier mynewdbinstance ^
    --db-snapshot-identifier mydbsnapshot ^
    --allocated-storage 100

This command returns output similar to the following:

DBINSTANCE  mynewdbinstance  db.t3.small  MySQL  50  sa  creating  3  n  8.0.28  general-public-license
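If you're scripting the restore, you can block until the new DB instance is ready by using an AWS CLI
waiter, as in the following sketch.

aws rds wait db-instance-available \
    --db-instance-identifier mynewdbinstance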

RDS API

To restore a DB instance from a DB snapshot, call the Amazon RDS API function
RestoreDBInstanceFromDBSnapshot with the following parameters:

• DBInstanceIdentifier
• DBSnapshotIdentifier


Copying a DB snapshot
With Amazon RDS, you can copy automated backups or manual DB snapshots. After you copy a
snapshot, the copy is a manual snapshot. You can make multiple copies of an automated backup or
manual snapshot, but each copy must have a unique identifier.

You can copy a snapshot within the same AWS Region, copy it across AWS Regions, or copy a snapshot that
another AWS account has shared with you.

Limitations
The following are some limitations when you copy snapshots:

• You can't copy a snapshot to or from the China (Beijing) Region or the China (Ningxia) Region.
• You can copy a snapshot between AWS GovCloud (US-East) and AWS GovCloud (US-West). However,
you can't copy a snapshot between these GovCloud (US) Regions and Regions that aren't GovCloud
(US) Regions.
• If you delete a source snapshot before the target snapshot becomes available, the snapshot copy
might fail. Verify that the target snapshot has a status of AVAILABLE before you delete a source
snapshot.
• You can have up to 20 snapshot copy requests in progress to a single destination Region per account.
• When you request multiple snapshot copies for the same source DB instance, they're queued internally.
The copies requested later won't start until the previous snapshot copies are completed. For more
information, see Why is my EC2 AMI or EBS snapshot creation slow? in the AWS Knowledge Center.
• Depending on the AWS Regions involved and the amount of data to be copied, a cross-Region
snapshot copy can take hours to complete. In some cases, there might be a large number of cross-
Region snapshot copy requests from a given source Region. In such cases, Amazon RDS might put
new cross-Region copy requests from that source Region into a queue until some in-progress copies
complete. No progress information is displayed about copy requests while they are in the queue.
Progress information is displayed when the copy starts.
• If a copy is still pending when you start another copy, the second copy doesn't start until the first copy
finishes.

Snapshot retention
Amazon RDS deletes automated backups in several situations:

• At the end of their retention period.


• When you disable automated backups for a DB instance.
• When you delete a DB instance.

If you want to keep an automated backup for a longer period, copy it to create a manual snapshot, which
is retained until you delete it. Amazon RDS storage costs might apply to manual snapshots if they exceed
your default storage space.
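As a sketch, the following AWS CLI command copies an automated backup into a manual snapshot that you
control. The identifiers are placeholders; automated snapshot identifiers begin with the rds: prefix.

aws rds copy-db-snapshot \
    --source-db-snapshot-identifier rds:mydbinstance-2023-06-15-09-08 \
    --target-db-snapshot-identifier mydbinstance-keep-2023-06-15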

For more information about backup storage costs, see Amazon RDS pricing.

Copying shared snapshots


You can copy snapshots shared to you by other AWS accounts. In some cases, you might copy an
encrypted snapshot that has been shared from another AWS account. In these cases, you must have
access to the AWS KMS key that was used to encrypt the snapshot.


You can copy a shared DB snapshot across AWS Regions if the snapshot is unencrypted. However, if the
shared DB snapshot is encrypted, you can only copy it in the same Region.
Note
Copying shared incremental snapshots in the same AWS Region is supported when they're
unencrypted, or encrypted using the same KMS key as the initial full snapshot. If you use a
different KMS key to encrypt subsequent snapshots when copying them, those shared snapshots
are full snapshots. For more information, see Incremental snapshot copying (p. 620).

Handling encryption
You can copy a snapshot that has been encrypted using a KMS key. If you copy an encrypted snapshot,
the copy of the snapshot must also be encrypted. If you copy an encrypted snapshot within the same
AWS Region, you can encrypt the copy with the same KMS key as the original snapshot. Or you can
specify a different KMS key.

If you copy an encrypted snapshot across Regions, you must specify a KMS key valid in the destination
AWS Region. It can be a Region-specific KMS key, or a multi-Region key. For more information on multi-
Region KMS keys, see Using multi-Region keys in AWS KMS.

The source snapshot remains encrypted throughout the copy process. For more information, see
Limitations of Amazon RDS encrypted DB instances (p. 2589).

You can also encrypt a copy of an unencrypted snapshot. This way, you can quickly add encryption to
a previously unencrypted DB instance. To do this, you create a snapshot of your DB instance when you
are ready to encrypt it. You then create a copy of that snapshot and specify a KMS key to encrypt that
snapshot copy. You can then restore an encrypted DB instance from the encrypted snapshot.

Incremental snapshot copying


An incremental snapshot contains only the data that has changed after the most recent snapshot of the
same DB instance. Incremental snapshot copying is faster and results in lower storage costs than full
snapshot copying.
Note
When you copy a source snapshot that is a snapshot copy itself, the new copy isn't incremental.
This is because the source snapshot copy doesn't include the required metadata for incremental
copies.

Whether a snapshot copy is incremental is determined by the most recently completed snapshot copy. If
the most recent snapshot copy was deleted, the next copy is a full copy, not an incremental copy.

When you copy a snapshot across AWS accounts, the copy is an incremental copy only if all of the
following conditions are met:

• A different snapshot of the same source DB instance was previously copied to the destination account.
• The most recent snapshot copy still exists in the destination account.
• All copies of the snapshot in the destination account are either unencrypted, or were encrypted using
the same KMS key.
• If the source DB instance is a Multi-AZ instance, it hasn't failed over to another AZ since the last
snapshot was taken from it.

The following examples illustrate the difference between full and incremental snapshots. They apply to
both shared and unshared snapshots.

Snapshot               Encryption key    Full or incremental

S1                     K1                Full
S2                     K1                Incremental of S1
S3                     K1                Incremental of S2
S4                     K1                Incremental of S3
Copy of S1 (S1C)       K2                Full
Copy of S2 (S2C)       K3                Full
Copy of S3 (S3C)       K3                Incremental of S2C
Copy of S4 (S4C)       K3                Incremental of S3C
Copy 2 of S4 (S4C2)    K4                Full

Note
In these examples, snapshots S2, S3, and S4 are incremental only if the previous snapshot still
exists.
The same applies to copies. Snapshot copies S3C and S4C are incremental only if the previous
copy still exists.

For information on copying incremental snapshots across AWS Regions, see Full and incremental
copies (p. 624).

Cross-Region snapshot copying


You can copy DB snapshots across AWS Regions. However, there are certain constraints and
considerations for cross-Region snapshot copying.

Requesting a cross-Region DB snapshot copy


To communicate with the source Region to request a cross-Region DB snapshot copy, the requester (IAM
role or IAM user) must have access to the source DB snapshot and the source Region.

Certain conditions in the requester's IAM policy can cause the request to fail. The following examples
assume that you're copying the DB snapshot from US East (Ohio) to US East (N. Virginia). These examples
show conditions in the requester's IAM policy that cause the request to fail:

• The requester's policy has a condition for aws:RequestedRegion.

...
"Effect": "Allow",
"Action": "rds:CopyDBSnapshot",
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:RequestedRegion": "us-east-1"
}
}

The request fails because the policy doesn't allow access to the source Region. For a successful request,
specify both the source and destination Regions.

...
"Effect": "Allow",

621
Amazon Relational Database Service User Guide
Copying a DB snapshot

"Action": "rds:CopyDBSnapshot",
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:RequestedRegion": [
"us-east-1",
"us-east-2"
]
}
}

• The requester's policy doesn't allow access to the source DB snapshot.

...
"Effect": "Allow",
"Action": "rds:CopyDBSnapshot",
"Resource": "arn:aws:rds:us-east-1:123456789012:snapshot:target-snapshot"
...

For a successful request, specify both the source and target snapshots.

...
"Effect": "Allow",
"Action": "rds:CopyDBSnapshot",
"Resource": [
"arn:aws:rds:us-east-1:123456789012:snapshot:target-snapshot",
"arn:aws:rds:us-east-2:123456789012:snapshot:source-snapshot"
]
...

• The requester's policy denies aws:ViaAWSService.

...
"Effect": "Allow",
"Action": "rds:CopyDBSnapshot",
"Resource": "*",
"Condition": {
"Bool": {"aws:ViaAWSService": "false"}
}

Communication with the source Region is made by RDS on the requester's behalf. For a successful
request, don't deny calls made by AWS services.
• The requester's policy has a condition for aws:SourceVpc or aws:SourceVpce.

These requests might fail because when RDS makes the call to the remote Region, it isn't from the
specified VPC or VPC endpoint.

If you need to use one of the previous conditions that would cause a request to fail, you can include a
second statement with aws:CalledVia in your policy to make the request succeed. For example, you
can use aws:CalledVia with aws:SourceVpce as shown here:

...
"Effect": "Allow",
"Action": "rds:CopyDBSnapshot",
"Resource": "*",
"Condition": {
    "ForAnyValue:StringEquals": {
        "aws:SourceVpce": "vpce-1a2b3c4d"
    }
}
},
{
    "Effect": "Allow",
    "Action": [
        "rds:CopyDBSnapshot"
    ],
    "Resource": "*",
    "Condition": {
        "ForAnyValue:StringEquals": {
            "aws:CalledVia": [
                "rds.amazonaws.com"
            ]
        }
    }
}

For more information, see Policies and permissions in IAM in the IAM User Guide.

Authorizing the snapshot copy


After a cross-Region DB snapshot copy request returns success, RDS starts the copy in the background.
An authorization for RDS to access the source snapshot is created. This authorization links the source DB
snapshot to the target DB snapshot, and allows RDS to copy only to the specified target snapshot.

The authorization is verified by RDS using the rds:CrossRegionCommunication permission in the
service-linked IAM role. If the copy is authorized, RDS communicates with the source Region and
completes the copy.

RDS doesn't have access to DB snapshots that weren't authorized previously by a CopyDBSnapshot
request. The authorization is revoked when copying completes.

RDS uses the service-linked role to verify the authorization in the source Region. If you delete the
service-linked role during the copy process, the copy fails.

For more information, see Using service-linked roles in the IAM User Guide.

Using AWS Security Token Service credentials


Session tokens from the global AWS Security Token Service (AWS STS) endpoint are valid only in AWS
Regions that are enabled by default (commercial Regions). If you use credentials from the assumeRole
API operation in AWS STS, use the regional endpoint if the source Region is an opt-in Region. Otherwise,
the request fails. This happens because your credentials must be valid in both Regions, which is true for
opt-in Regions only when the regional AWS STS endpoint is used.
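For example, if the source Region is the opt-in Region ap-east-1, you might request temporary credentials
from that Region's STS endpoint rather than the global one. The role ARN and session name below are
placeholders.

aws sts assume-role \
    --endpoint-url https://sts.ap-east-1.amazonaws.com \
    --region ap-east-1 \
    --role-arn arn:aws:iam::123456789012:role/SnapshotCopyRole \
    --role-session-name snapshot-copy-session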

To use the global endpoint, make sure that it's enabled for both Regions in the operations. Set the global
endpoint to Valid in all AWS Regions in the AWS STS account settings.

The same rule applies to credentials in the presigned URL parameter.

For more information, see Managing AWS STS in an AWS Region in the IAM User Guide.

Latency and multiple copy requests


Depending on the AWS Regions involved and the amount of data to be copied, a cross-Region snapshot
copy can take hours to complete.

In some cases, there might be a large number of cross-Region snapshot copy requests from a given
source AWS Region. In such cases, Amazon RDS might put new cross-Region copy requests from that
source AWS Region into a queue until some in-progress copies complete. No progress information is
displayed about copy requests while they are in the queue. Progress information is displayed when the
copying starts.

Full and incremental copies


When you copy a snapshot to a different AWS Region from the source snapshot, the first copy is a
full snapshot copy, even if you copy an incremental snapshot. A full snapshot copy contains all of the
data and metadata required to restore the DB instance. After the first snapshot copy, you can copy
incremental snapshots of the same DB instance to the same destination Region within the same AWS
account. For more information on incremental snapshots, see Incremental snapshot copying (p. 620).

Incremental snapshot copying across AWS Regions is supported for both unencrypted and encrypted
snapshots.

When you copy a snapshot across AWS Regions, the copy is an incremental copy if the following
conditions are met:

• The snapshot was previously copied to the destination Region.


• The most recent snapshot copy still exists in the destination Region.
• All copies of the snapshot in the destination Region are either unencrypted, or were encrypted using
the same KMS key.

Option group considerations


DB option groups are specific to the AWS Region that they are created in, and you can't use an option
group from one AWS Region in another AWS Region.

For Oracle databases, you can use the AWS CLI or RDS API to copy the custom DB option group from a
snapshot that has been shared with your AWS account. You can only copy option groups within the same
AWS Region. The option group isn't copied if it has already been copied to the destination account and
no changes have been made to it since being copied. If the source option group has been copied before,
but has changed since being copied, RDS copies the new version to the destination account. Default
option groups aren't copied.

When you copy a snapshot across Regions, you can specify a new option group for the snapshot. We
recommend that you prepare the new option group before you copy the snapshot. In the destination
AWS Region, create an option group with the same settings as the original DB instance. If one already
exists in the new AWS Region, you can use that one.

In some cases, you might copy a snapshot and not specify a new option group for the snapshot. In these
cases, when you restore the snapshot the DB instance gets the default option group. To give the new DB
instance the same options as the original, do the following:

1. In the destination AWS Region, create an option group with the same settings as the original DB
instance. If one already exists in the new AWS Region, you can use that one.
2. After you restore the snapshot in the destination AWS Region, modify the new DB instance and add
the new or existing option group from the previous step.

Parameter group considerations


When you copy a snapshot across Regions, the copy doesn't include the parameter group used by the
original DB instance. When you restore a snapshot to create a new DB instance, that DB instance gets
the default parameter group for the AWS Region it is created in. To give the new DB instance the same
parameters as the original, do the following:


1. In the destination AWS Region, create a DB parameter group with the same settings as the original DB
instance. If one already exists in the new AWS Region, you can use that one.
2. After you restore the snapshot in the destination AWS Region, modify the new DB instance and add
the new or existing parameter group from the previous step.
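A minimal AWS CLI sketch of those two steps follows, assuming a MySQL 8.0 instance; the group names and
instance name are placeholders. You still need to carry over the individual parameter values from the
source group, for example with modify-db-parameter-group, before you rely on it.

# 1. In the destination Region, create a parameter group for the engine family.
aws rds create-db-parameter-group \
    --db-parameter-group-name mynewparametergroup \
    --db-parameter-group-family mysql8.0 \
    --description "Settings carried over from the source instance"

# 2. After restoring the snapshot, attach the parameter group to the new DB instance.
aws rds modify-db-instance \
    --db-instance-identifier mynewdbinstance \
    --db-parameter-group-name mynewparametergroup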

Copying a DB snapshot
Use the procedures in this topic to copy a DB snapshot. For an overview of copying a snapshot, see
Copying a DB snapshot (p. 619).

For each AWS account, you can copy up to 20 DB snapshots at a time from one AWS Region to another.
If you copy a DB snapshot to another AWS Region, you create a manual DB snapshot that is retained in
that AWS Region. Copying a DB snapshot out of the source AWS Region incurs Amazon RDS data transfer
charges.

For more information about data transfer pricing, see Amazon RDS pricing.

After the DB snapshot copy has been created in the new AWS Region, the DB snapshot copy behaves the
same as all other DB snapshots in that AWS Region.

You can copy a DB snapshot using the AWS Management Console, the AWS CLI, or the RDS API.

Console

The following procedure copies an encrypted or unencrypted DB snapshot, in the same AWS Region or
across Regions, by using the AWS Management Console.

To copy a DB snapshot

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Select the DB snapshot that you want to copy.
4. For Actions, choose Copy snapshot.

The Copy snapshot page appears.


5. For Target option group (optional), choose a new option group if you want.


Specify this option if you are copying a snapshot from one AWS Region to another, and your DB
instance uses a nondefault option group.

If your source DB instance uses Transparent Data Encryption for Oracle or Microsoft SQL Server,
you must specify this option when copying across Regions. For more information, see Option group
considerations (p. 624).
6. (Optional) To copy the DB snapshot to a different AWS Region, for Destination Region, choose the
new AWS Region.
Note
The destination AWS Region must have the same database engine version available as the
source AWS Region.
7. For New DB snapshot identifier, type the name of the DB snapshot copy.

You can make multiple copies of an automated backup or manual snapshot, but each copy must
have a unique identifier.
8. (Optional) Select Copy Tags to copy tags and values from the snapshot to the copy of the snapshot.
9. (Optional) For Encryption, do the following:

a. Choose Enable Encryption if the DB snapshot isn't encrypted but you want to encrypt the copy.
Note
If the DB snapshot is encrypted, you must encrypt the copy, so the check box is already
selected.
b. For AWS KMS key, specify the KMS key identifier to use to encrypt the DB snapshot copy.
10. Choose Copy snapshot.

AWS CLI

You can copy a DB snapshot by using the AWS CLI command copy-db-snapshot. If you are copying the
snapshot to a new AWS Region, run the command in the new AWS Region.

The following options are used to copy a DB snapshot. Not all options are required for all scenarios. Use
the descriptions and the examples that follow to determine which options to use.

• --source-db-snapshot-identifier – The identifier for the source DB snapshot.


• If the source snapshot is in the same AWS Region as the copy, specify a valid DB snapshot identifier.
For example, rds:mysql-instance1-snapshot-20130805.
• If the source snapshot is in the same AWS Region as the copy, and has been shared with
your AWS account, specify a valid DB snapshot ARN. For example, arn:aws:rds:us-
west-2:123456789012:snapshot:mysql-instance1-snapshot-20130805.
• If the source snapshot is in a different AWS Region than the copy, specify a valid DB snapshot
ARN. For example, arn:aws:rds:us-west-2:123456789012:snapshot:mysql-instance1-
snapshot-20130805.
• If you are copying from a shared manual DB snapshot, this parameter must be the Amazon Resource
Name (ARN) of the shared DB snapshot.
• If you are copying an encrypted snapshot this parameter must be in the ARN format for the
source AWS Region, and must match the SourceDBSnapshotIdentifier in the PreSignedUrl
parameter.
• --target-db-snapshot-identifier – The identifier for the new copy of the encrypted DB
snapshot.
• --copy-option-group – Copy the option group from a snapshot that has been shared with your
AWS account.

• --copy-tags – Include the copy tags option to copy tags and values from the snapshot to the copy of
the snapshot.
• --option-group-name – The option group to associate with the copy of the snapshot.

Specify this option if you are copying a snapshot from one AWS Region to another, and your DB
instance uses a non-default option group.

If your source DB instance uses Transparent Data Encryption for Oracle or Microsoft SQL Server,
you must specify this option when copying across Regions. For more information, see Option group
considerations (p. 624).
• --kms-key-id – The KMS key identifier for an encrypted DB snapshot. The KMS key identifier is the
Amazon Resource Name (ARN), key identifier, or key alias for the KMS key.
• If you copy an encrypted DB snapshot from your AWS account, you can specify a value for this
parameter to encrypt the copy with a new KMS key. If you don't specify a value for this parameter,
then the copy of the DB snapshot is encrypted with the same KMS key as the source DB snapshot.
• If you copy an encrypted DB snapshot that is shared from another AWS account, then you must
specify a value for this parameter.
• If you specify this parameter when you copy an unencrypted snapshot, the copy is encrypted.
• If you copy an encrypted snapshot to a different AWS Region, then you must specify a KMS key for
the destination AWS Region. KMS keys are specific to the AWS Region that they are created in, and
you cannot use encryption keys from one AWS Region in another AWS Region.

Example from unencrypted, to the same Region

The following code creates a copy of a snapshot, with the new name mydbsnapshotcopy, in the same
AWS Region as the source snapshot. When the copy is made, the DB option group and tags on the
original snapshot are copied to the snapshot copy.

For Linux, macOS, or Unix:

aws rds copy-db-snapshot \
    --source-db-snapshot-identifier arn:aws:rds:us-west-2:123456789012:snapshot:mysql-instance1-snapshot-20130805 \
    --target-db-snapshot-identifier mydbsnapshotcopy \
    --copy-option-group \
    --copy-tags

For Windows:

aws rds copy-db-snapshot ^
    --source-db-snapshot-identifier arn:aws:rds:us-west-2:123456789012:snapshot:mysql-instance1-snapshot-20130805 ^
    --target-db-snapshot-identifier mydbsnapshotcopy ^
    --copy-option-group ^
    --copy-tags

Example from unencrypted, across Regions

The following code creates a copy of a snapshot, with the new name mydbsnapshotcopy, in the AWS
Region in which the command is run.

For Linux, macOS, or Unix:

aws rds copy-db-snapshot \
    --source-db-snapshot-identifier arn:aws:rds:us-east-1:123456789012:snapshot:mysql-instance1-snapshot-20130805 \
    --target-db-snapshot-identifier mydbsnapshotcopy

For Windows:

aws rds copy-db-snapshot ^
    --source-db-snapshot-identifier arn:aws:rds:us-east-1:123456789012:snapshot:mysql-instance1-snapshot-20130805 ^
    --target-db-snapshot-identifier mydbsnapshotcopy

Example from encrypted, across Regions


The following code example copies an encrypted DB snapshot from the US West (Oregon) Region to the
US East (N. Virginia) Region. Run the command in the destination (us-east-1) Region.

For Linux, macOS, or Unix:

aws rds copy-db-snapshot \
    --source-db-snapshot-identifier arn:aws:rds:us-west-2:123456789012:snapshot:mysql-instance1-snapshot-20161115 \
    --target-db-snapshot-identifier mydbsnapshotcopy \
    --kms-key-id my-us-east-1-key \
    --option-group-name custom-option-group-name

For Windows:

aws rds copy-db-snapshot ^
    --source-db-snapshot-identifier arn:aws:rds:us-west-2:123456789012:snapshot:mysql-instance1-snapshot-20161115 ^
    --target-db-snapshot-identifier mydbsnapshotcopy ^
    --kms-key-id my-us-east-1-key ^
    --option-group-name custom-option-group-name

The --source-region parameter is required when you're copying an encrypted snapshot between the
AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. For --source-region, specify the
AWS Region of the source DB instance.

If --source-region isn't specified, specify a --pre-signed-url value. A presigned URL is a URL that
contains a Signature Version 4 signed request for the copy-db-snapshot command that's called in the
source AWS Region. To learn more about the pre-signed-url option, see copy-db-snapshot in the
AWS CLI Command Reference.

RDS API
You can copy a DB snapshot by using the Amazon RDS API operation CopyDBSnapshot. If you are
copying the snapshot to a new AWS Region, perform the action in the new AWS Region.

The following parameters are used to copy a DB snapshot. Not all parameters are required for all
scenarios. Use the descriptions and the examples that follow to determine which parameters to use.

• SourceDBSnapshotIdentifier – The identifier for the source DB snapshot.


• If the source snapshot is in the same AWS Region as the copy, specify a valid DB snapshot identifier.
For example, rds:mysql-instance1-snapshot-20130805.
• If the source snapshot is in the same AWS Region as the copy, and has been shared with
your AWS account, specify a valid DB snapshot ARN. For example, arn:aws:rds:us-
west-2:123456789012:snapshot:mysql-instance1-snapshot-20130805.
• If the source snapshot is in a different AWS Region than the copy, specify a valid DB snapshot
ARN. For example, arn:aws:rds:us-west-2:123456789012:snapshot:mysql-instance1-
snapshot-20130805.


• If you are copying from a shared manual DB snapshot, this parameter must be the Amazon Resource
Name (ARN) of the shared DB snapshot.
• If you are copying an encrypted snapshot this parameter must be in the ARN format for the
source AWS Region, and must match the SourceDBSnapshotIdentifier in the PreSignedUrl
parameter.
• TargetDBSnapshotIdentifier – The identifier for the new copy of the encrypted DB snapshot.
• CopyOptionGroup – Set this parameter to true to copy the option group from a shared snapshot to
the copy of the snapshot. The default is false.
• CopyTags – Set this parameter to true to copy tags and values from the snapshot to the copy of the
snapshot. The default is false.
• OptionGroupName – The option group to associate with the copy of the snapshot.

Specify this parameter if you are copying a snapshot from one AWS Region to another, and your DB
instance uses a non-default option group.

If your source DB instance uses Transparent Data Encryption for Oracle or Microsoft SQL Server, you
must specify this parameter when copying across Regions. For more information, see Option group
considerations (p. 624).
• KmsKeyId – The KMS key identifier for an encrypted DB snapshot. The KMS key identifier is the
Amazon Resource Name (ARN), key identifier, or key alias for the KMS key.
• If you copy an encrypted DB snapshot from your AWS account, you can specify a value for this
parameter to encrypt the copy with a new KMS key. If you don't specify a value for this parameter,
then the copy of the DB snapshot is encrypted with the same KMS key as the source DB snapshot.
• If you copy an encrypted DB snapshot that is shared from another AWS account, then you must
specify a value for this parameter.
• If you specify this parameter when you copy an unencrypted snapshot, the copy is encrypted.
• If you copy an encrypted snapshot to a different AWS Region, then you must specify a KMS key for
the destination AWS Region. KMS keys are specific to the AWS Region that they are created in, and
you cannot use encryption keys from one AWS Region in another AWS Region.
• PreSignedUrl – The URL that contains a Signature Version 4 signed request for the
CopyDBSnapshot API operation in the source AWS Region that contains the source DB snapshot to
copy.

Specify this parameter when you copy an encrypted DB snapshot from another AWS Region by using
the Amazon RDS API. You can specify the source Region option instead of this parameter when you
copy an encrypted DB snapshot from another AWS Region by using the AWS CLI.

The presigned URL must be a valid request for the CopyDBSnapshot API operation that can be run in
the source AWS Region containing the encrypted DB snapshot to be copied. The presigned URL request
must contain the following parameter values:
• DestinationRegion – The AWS Region that the encrypted DB snapshot will be copied to. This
AWS Region is the same one where the CopyDBSnapshot operation is called that contains this
presigned URL.

For example, suppose that you copy an encrypted DB snapshot from the us-west-2 Region to the us-
east-1 Region. You then call the CopyDBSnapshot operation in the us-east-1 Region and provide a
presigned URL that contains a call to the CopyDBSnapshot operation in the us-west-2 Region. For
this example, the DestinationRegion in the presigned URL must be set to the us-east-1 Region.
• KmsKeyId – The KMS key identifier for the key to use to encrypt the copy of the DB snapshot in the
destination AWS Region. This is the same identifier for both the CopyDBSnapshot operation that is
called in the destination AWS Region, and the operation contained in the presigned URL.
• SourceDBSnapshotIdentifier – The DB snapshot identifier for the encrypted snapshot to be
copied. This identifier must be in the Amazon Resource Name (ARN) format for the source AWS
Region. For example, if you are copying an encrypted DB snapshot from the us-west-2 Region, then your
SourceDBSnapshotIdentifier looks like the following example:
arn:aws:rds:us-west-2:123456789012:snapshot:mysql-instance1-snapshot-20161115.

For more information on Signature Version 4 signed requests, see the following:
• Authenticating requests: Using query parameters (AWS signature version 4) in the Amazon Simple
Storage Service API Reference
• Signature version 4 signing process in the AWS General Reference

Example from unencrypted, to the same Region

The following code creates a copy of a snapshot, with the new name mydbsnapshotcopy, in the same
AWS Region as the source snapshot. When the copy is made, all tags on the original snapshot are copied
to the snapshot copy.

https://rds.us-west-1.amazonaws.com/
?Action=CopyDBSnapshot
&CopyTags=true
&SignatureMethod=HmacSHA256
&SignatureVersion=4
&SourceDBSnapshotIdentifier=mysql-instance1-snapshot-20130805
&TargetDBSnapshotIdentifier=mydbsnapshotcopy
&Version=2013-09-09
&X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Credential=AKIADQKE4SARGYLE/20140429/us-west-1/rds/aws4_request
&X-Amz-Date=20140429T175351Z
&X-Amz-SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date
&X-Amz-Signature=9164337efa99caf850e874a1cb7ef62f3cea29d0b448b9e0e7c53b288ddffed2

Example from unencrypted, across Regions

The following code creates a copy of a snapshot, with the new name mydbsnapshotcopy, in the US
West (N. California) Region.

https://rds.us-west-1.amazonaws.com/
?Action=CopyDBSnapshot
&SignatureMethod=HmacSHA256
&SignatureVersion=4
&SourceDBSnapshotIdentifier=arn%3Aaws%3Ards%3Aus-east-1%3A123456789012%3Asnapshot%3Amysql-
instance1-snapshot-20130805
&TargetDBSnapshotIdentifier=mydbsnapshotcopy
&Version=2013-09-09
&X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Credential=AKIADQKE4SARGYLE/20140429/us-west-1/rds/aws4_request
&X-Amz-Date=20140429T175351Z
&X-Amz-SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date
&X-Amz-Signature=9164337efa99caf850e874a1cb7ef62f3cea29d0b448b9e0e7c53b288ddffed2

Example from encrypted, across Regions

The following code creates a copy of a snapshot, with the new name mydbsnapshotcopy, in the US East
(N. Virginia) Region.

https://rds.us-east-1.amazonaws.com/
?Action=CopyDBSnapshot
&KmsKeyId=my-us-east-1-key
&OptionGroupName=custom-option-group-name
&PreSignedUrl=https%253A%252F%252Frds.us-west-2.amazonaws.com%252F
%253FAction%253DCopyDBSnapshot
%2526DestinationRegion%253Dus-east-1
%2526KmsKeyId%253Dmy-us-east-1-key
%2526SourceDBSnapshotIdentifier%253Darn%25253Aaws%25253Ards%25253Aus-
west-2%25253A123456789012%25253Asnapshot%25253Amysql-instance1-snapshot-20161115
%2526SignatureMethod%253DHmacSHA256
%2526SignatureVersion%253D4
%2526Version%253D2014-10-31
%2526X-Amz-Algorithm%253DAWS4-HMAC-SHA256
%2526X-Amz-Credential%253DAKIADQKE4SARGYLE%252F20161117%252Fus-west-2%252Frds
%252Faws4_request
%2526X-Amz-Date%253D20161117T215409Z
%2526X-Amz-Expires%253D3600
%2526X-Amz-SignedHeaders%253Dcontent-type%253Bhost%253Buser-agent%253Bx-amz-
content-sha256%253Bx-amz-date
%2526X-Amz-Signature
%253D255a0f17b4e717d3b67fad163c3ec26573b882c03a65523522cf890a67fca613
&SignatureMethod=HmacSHA256
&SignatureVersion=4
&SourceDBSnapshotIdentifier=arn%3Aaws%3Ards%3Aus-west-2%3A123456789012%3Asnapshot
%3Amysql-instance1-snapshot-20161115
&TargetDBSnapshotIdentifier=mydbsnapshotcopy
&Version=2014-10-31
&X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Credential=AKIADQKE4SARGYLE/20161117/us-east-1/rds/aws4_request
&X-Amz-Date=20161117T221704Z
&X-Amz-SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date
&X-Amz-Signature=da4f2da66739d2e722c85fcfd225dc27bba7e2b8dbea8d8612434378e52adccf


Sharing a DB snapshot
Using Amazon RDS, you can share a manual DB snapshot in the following ways:

• Sharing a manual DB snapshot, whether encrypted or unencrypted, enables authorized AWS accounts
to copy the snapshot.
• Sharing an unencrypted manual DB snapshot enables authorized AWS accounts to directly restore a
DB instance from the snapshot instead of taking a copy of it and restoring from that. However, you
can't restore a DB instance from a DB snapshot that is both shared and encrypted. Instead, you can
make a copy of the DB snapshot and restore the DB instance from the copy.

Note
To share an automated DB snapshot, create a manual DB snapshot by copying the automated
snapshot, and then share that copy. This process also applies to AWS Backup–generated
resources.

For more information on copying a snapshot, see Copying a DB snapshot (p. 619). For more information
on restoring a DB instance from a DB snapshot, see Restoring from a DB snapshot (p. 615).

You can share a manual snapshot with up to 20 other AWS accounts.

The following limitations apply when sharing manual snapshots with other AWS accounts:

• When you restore a DB instance from a shared snapshot using the AWS Command Line Interface (AWS
CLI) or Amazon RDS API, you must specify the Amazon Resource Name (ARN) of the shared snapshot
as the snapshot identifier.
• You can't share a DB snapshot that uses an option group with permanent or persistent options, except
for Oracle DB instances that have the Timezone or OLS option (or both).

A permanent option can't be removed from an option group. Option groups with persistent options
can't be removed from a DB instance once the option group has been assigned to the DB instance.

The following table lists permanent and persistent options and their related DB engines.

Option name    Persistent    Permanent    DB engine

TDE            Yes           No           Microsoft SQL Server Enterprise Edition

TDE            Yes           Yes          Oracle Enterprise Edition

Timezone       Yes           Yes          Oracle Enterprise Edition
                                          Oracle Standard Edition
                                          Oracle Standard Edition One
                                          Oracle Standard Edition Two

For Oracle DB instances, you can copy shared DB snapshots that have the Timezone or OLS option
(or both). To do so, specify a target option group that includes these options when you copy the DB
snapshot. The OLS option is permanent and persistent only for Oracle DB instances running Oracle
version 12.2 or higher. For more information about these options, see Oracle time zone (p. 2087) and
Oracle Label Security (p. 2049).


Sharing public snapshots


You can also share an unencrypted manual snapshot as public, which makes the snapshot available to
all AWS accounts. Make sure when sharing a snapshot as public that none of your private information is
included in the public snapshot.

When a snapshot is shared publicly, it gives all AWS accounts permission both to copy the snapshot and
to create DB instances from it.

You aren't billed for the backup storage of public snapshots owned by other accounts. You're billed only
for snapshots that you own.

If you copy a public snapshot, you own the copy. You're billed for the backup storage of your snapshot
copy. If you create a DB instance from a public snapshot, you're billed for that DB instance. For Amazon
RDS pricing information, see the Amazon RDS product page.

You can delete only the public snapshots that you own. To delete a shared or public snapshot, make sure
to log into the AWS account that owns the snapshot.

Viewing public snapshots owned by other AWS accounts


You can view public snapshots owned by other accounts in a particular AWS Region on the Public tab of
the Snapshots page in the Amazon RDS console. Your snapshots (those owned by your account) don't
appear on this tab.

To view public snapshots

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the navigation pane, choose Snapshots.
3. Choose the Public tab.

The public snapshots appear. You can see which account owns a public snapshot in the Owner
column.
Note
You might have to modify the page preferences, by selecting the gear icon at the upper
right of the Public snapshots list, to see this column.

Viewing your own public snapshots


You can use the following AWS CLI command (Unix only) to view the public snapshots owned by your
AWS account in a particular AWS Region.

aws rds describe-db-snapshots --snapshot-type public --include-public | grep account_number

The output returned is similar to the following example if you have public snapshots.

"DBSnapshotArn": "arn:aws:rds:us-east-1:123456789012:snapshot:mysnapshot1",
"DBSnapshotArn": "arn:aws:rds:us-east-1:123456789012:snapshot:mysnapshot2",

Note
You might see duplicate entries for DBSnapshotIdentifier or
SourceDBSnapshotIdentifier.

Sharing encrypted snapshots


You can share DB snapshots that have been encrypted "at rest" using the AES-256 encryption algorithm,
as described in Encrypting Amazon RDS resources (p. 2586). To do this, take the following steps:


1. Share the AWS KMS key that was used to encrypt the snapshot with any accounts that you want to be
able to access the snapshot.

You can share KMS keys with another AWS account by adding the other account to the KMS key policy.
For details on updating a key policy, see Key policies in the AWS KMS Developer Guide. For an example
of creating a key policy, see Allowing access to an AWS KMS key (p. 635) later in this topic.
2. Use the AWS Management Console, AWS CLI, or Amazon RDS API to share the encrypted snapshot
with the other accounts.

These restrictions apply to sharing encrypted snapshots:

• You can't share encrypted snapshots as public.


• You can't share Oracle or Microsoft SQL Server snapshots that are encrypted using Transparent Data
Encryption (TDE).
• You can't share a snapshot that has been encrypted using the default KMS key of the AWS account
that shared the snapshot.

Allowing access to an AWS KMS key


For another AWS account to copy an encrypted DB snapshot shared from your account, the account that
you share your snapshot with must have access to the AWS KMS key that encrypted the snapshot.

To allow another AWS account access to a KMS key, update the key policy for the KMS key. Add the
Amazon Resource Name (ARN) of the AWS account that you are sharing with as a Principal in the KMS key
policy, and then allow the kms:CreateGrant action.

After you have given an AWS account access to your KMS key, that account must have an AWS Identity and
Access Management (IAM) role or user to copy your encrypted snapshot, and must create one if it doesn't
already exist. The account must also attach an IAM policy to that role or user that allows it to copy an
encrypted DB snapshot by using your KMS key. Due to AWS KMS security restrictions, the caller must be an
IAM user and can't be the root AWS account identity.

In the following key policy example, user 111122223333 is the owner of the KMS key, and user
444455556666 is the account that the key is being shared with. This updated key policy gives the
AWS account access to the KMS key by including the ARN for the root AWS account identity for user
444455556666 as a Principal for the policy, and by allowing the kms:CreateGrant action.

{
"Id": "key-policy-1",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Allow use of the key",
"Effect": "Allow",
"Principal": {"AWS": [
"arn:aws:iam::111122223333:user/KeyUser",
"arn:aws:iam::444455556666:root"
]},
"Action": [
"kms:CreateGrant",
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource": "*"
},
{
"Sid": "Allow attachment of persistent resources",
"Effect": "Allow",
"Principal": {"AWS": [
"arn:aws:iam::111122223333:user/KeyUser",
"arn:aws:iam::444455556666:root"
]},
"Action": [
"kms:CreateGrant",
"kms:ListGrants",
"kms:RevokeGrant"
],
"Resource": "*",
"Condition": {"Bool": {"kms:GrantIsForAWSResource": true}}
}
]
}

Creating an IAM policy to enable copying of the encrypted snapshot

After the external AWS account has access to your KMS key, the owner of that account can create a policy
that allows an IAM user in that account to copy a shared snapshot that is encrypted with that KMS key.

The following example shows a policy that can be attached to an IAM user for AWS account 444455556666.
The policy enables the IAM user to copy a shared snapshot from AWS account 111122223333 that has been
encrypted with the KMS key c989c1dd-a3f2-4a5d-8d96-e793d082ab26 in the us-west-2 Region.

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowUseOfTheKey",
"Effect": "Allow",
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey",
"kms:CreateGrant",
"kms:RetireGrant"
],
"Resource": ["arn:aws:kms:us-west-2:111122223333:key/c989c1dd-a3f2-4a5d-8d96-
e793d082ab26"]
},
{
"Sid": "AllowAttachmentOfPersistentResources",
"Effect": "Allow",
"Action": [
"kms:CreateGrant",
"kms:ListGrants",
"kms:RevokeGrant"
],
"Resource": ["arn:aws:kms:us-west-2:111122223333:key/c989c1dd-a3f2-4a5d-8d96-
e793d082ab26"],
"Condition": {
"Bool": {
"kms:GrantIsForAWSResource": true

636
Amazon Relational Database Service User Guide
Sharing a DB snapshot

}
}
}
]
}

For details on updating a key policy, see Key policies in the AWS KMS Developer Guide.

Sharing a snapshot
You can share a DB snapshot using the AWS Management Console, the AWS CLI, or the RDS API.

Console

Using the Amazon RDS console, you can share a manual DB snapshot with up to 20 AWS accounts. You
can also use the console to stop sharing a manual snapshot with one or more accounts.

To share a manual DB snapshot by using the Amazon RDS console

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Select the manual snapshot that you want to share.
4. For Actions, choose Share snapshot.
5. Choose one of the following options for DB snapshot visibility.

• If the source is unencrypted, choose Public to permit all AWS accounts to restore a DB instance
from your manual DB snapshot, or choose Private to permit only AWS accounts that you specify
to restore a DB instance from your manual DB snapshot.
Warning
If you set DB snapshot visibility to Public, all AWS accounts can restore a DB instance
from your manual DB snapshot and have access to your data. Do not share any manual
DB snapshots that contain private information as Public.
• If the source is encrypted, DB snapshot visibility is set as Private because encrypted snapshots
can't be shared as public.
6. For AWS Account ID, type the AWS account identifier for an account that you want to permit
to restore a DB instance from your manual snapshot, and then choose Add. Repeat to include
additional AWS account identifiers, up to 20 AWS accounts.

If you make an error when adding an AWS account identifier to the list of permitted accounts, you
can delete it from the list by choosing Delete at the right of the incorrect AWS account identifier.


7. After you have added identifiers for all of the AWS accounts that you want to permit to restore the
manual snapshot, choose Save to save your changes.

To stop sharing a manual DB snapshot with an AWS account

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Select the manual snapshot that you want to stop sharing.
4. Choose Actions, and then choose Share snapshot.
5. To remove permission for an AWS account, choose Delete for the AWS account identifier for that
account from the list of authorized accounts.


6. Choose Save to save your changes.

AWS CLI

To share a DB snapshot, use the aws rds modify-db-snapshot-attribute command. Use the --
values-to-add parameter to add a list of the IDs for the AWS accounts that are authorized to restore
the manual snapshot.

Example of sharing a snapshot with a single account

The following example enables AWS account identifier 123456789012 to restore the DB snapshot
named db7-snapshot.

For Linux, macOS, or Unix:

aws rds modify-db-snapshot-attribute \
    --db-snapshot-identifier db7-snapshot \
    --attribute-name restore \
    --values-to-add 123456789012

For Windows:

aws rds modify-db-snapshot-attribute ^
    --db-snapshot-identifier db7-snapshot ^
    --attribute-name restore ^
    --values-to-add 123456789012


Example of sharing a snapshot with multiple accounts

The following example enables two AWS account identifiers, 111122223333 and 444455556666, to
restore the DB snapshot named manual-snapshot1.

For Linux, macOS, or Unix:

aws rds modify-db-snapshot-attribute \
    --db-snapshot-identifier manual-snapshot1 \
    --attribute-name restore \
    --values-to-add {"111122223333","444455556666"}

For Windows:

aws rds modify-db-snapshot-attribute ^
    --db-snapshot-identifier manual-snapshot1 ^
    --attribute-name restore ^
    --values-to-add "[\"111122223333\",\"444455556666\"]"

Note
When using the Windows command prompt, you must escape double quotes (") in JSON code by
prefixing them with a backslash (\).

To remove an AWS account identifier from the list, use the --values-to-remove parameter.

Example of stopping snapshot sharing

The following example prevents AWS account ID 444455556666 from restoring the snapshot.

For Linux, macOS, or Unix:

aws rds modify-db-snapshot-attribute \
    --db-snapshot-identifier manual-snapshot1 \
    --attribute-name restore \
    --values-to-remove 444455556666

For Windows:

aws rds modify-db-snapshot-attribute ^
    --db-snapshot-identifier manual-snapshot1 ^
    --attribute-name restore ^
    --values-to-remove 444455556666

To list the AWS accounts enabled to restore a snapshot, use the describe-db-snapshot-attributes
AWS CLI command.
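For example, the following sketch lists the sharing attributes (including the account IDs under the
restore attribute) for the snapshot used in the earlier examples.

aws rds describe-db-snapshot-attributes \
    --db-snapshot-identifier manual-snapshot1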

RDS API

You can also share a manual DB snapshot with other AWS accounts by using the Amazon RDS API. To do
so, call the ModifyDBSnapshotAttribute operation. Specify restore for AttributeName, and use
the ValuesToAdd parameter to add a list of the IDs for the AWS accounts that are authorized to restore
the manual snapshot.

To make a manual snapshot public and restorable by all AWS accounts, use the value all. However,
take care not to add the all value for any manual snapshots that contain private information that you
don't want to be available to all AWS accounts. Also, don't specify all for encrypted snapshots, because
making such snapshots public isn't supported.


To remove sharing permission for an AWS account, use the ModifyDBSnapshotAttribute operation
with AttributeName set to restore and the ValuesToRemove parameter. To mark a manual
snapshot as private, remove the value all from the values list for the restore attribute.

To list all of the AWS accounts permitted to restore a snapshot, use the
DescribeDBSnapshotAttributes API operation.


Exporting DB snapshot data to Amazon S3


You can export DB snapshot data to an Amazon S3 bucket. The export process runs in the background
and doesn't affect the performance of your active DB instance.

When you export a DB snapshot, Amazon RDS extracts data from the snapshot and stores it in an
Amazon S3 bucket. The data is stored in an Apache Parquet format that is compressed and consistent.

You can export all types of DB snapshots—including manual snapshots, automated system snapshots,
and snapshots created by the AWS Backup service. By default, all data in the snapshot is exported.
However, you can choose to export specific sets of databases, schemas, or tables.
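As a preview of the procedure described later in this section, a snapshot export is started with the AWS
CLI start-export-task command. The identifiers, bucket, role, and key below are placeholders; you can add
the --export-only option to limit the export to specific databases, schemas, or tables.

aws rds start-export-task \
    --export-task-identifier my-snapshot-export \
    --source-arn arn:aws:rds:us-east-1:123456789012:snapshot:mydbsnapshot \
    --s3-bucket-name my-export-bucket \
    --iam-role-arn arn:aws:iam::123456789012:role/MyExportRole \
    --kms-key-id my-export-kms-key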

After the data is exported, you can analyze the exported data directly through tools like Amazon Athena
or Amazon Redshift Spectrum. For more information on using Athena to read Parquet data, see Parquet
SerDe in the Amazon Athena User Guide. For more information on using Redshift Spectrum to read
Parquet data, see COPY from columnar data formats in the Amazon Redshift Database Developer Guide.

Topics
• Region and version availability (p. 642)
• Limitations (p. 642)
• Overview of exporting snapshot data (p. 643)
• Setting up access to an Amazon S3 bucket (p. 644)
• Exporting a DB snapshot to an Amazon S3 bucket (p. 647)
• Monitoring snapshot exports (p. 650)
• Canceling a snapshot export task (p. 651)
• Failure messages for Amazon S3 export tasks (p. 652)
• Troubleshooting PostgreSQL permissions errors (p. 653)
• File naming convention (p. 653)
• Data conversion when exporting to an Amazon S3 bucket (p. 654)

Region and version availability


Feature availability and support varies across specific versions of each database engine and across AWS
Regions. For more information on version and Region availability with exporting snapshots to S3, see
Export snapshots to S3 (p. 133).

Limitations
Exporting DB snapshot data to Amazon S3 has the following limitations:

• You can't run multiple export tasks for the same DB snapshot simultaneously. This applies to both full
and partial exports.
• Exporting snapshots from DB instances that use magnetic storage isn't supported.
• The following characters in the S3 file path are converted to underscores (_) during export:

\ ` " (space)

• If a database, schema, or table has characters in its name other than the following, partial export isn't
supported. However, you can export the entire DB snapshot.
• Latin letters (A–Z)
• Digits (0–9)
• Dollar symbol ($)


• Underscore (_)
• Spaces ( ) and certain characters aren't supported in database table column names. Tables with the
following characters in column names are skipped during export:

, ; { } ( ) \n \t = (space)

• Tables with slashes (/) in their names are skipped during export.
• RDS for PostgreSQL temporary and unlogged tables are skipped during export.
• If the data contains a large object, such as a BLOB or CLOB, that is close to or greater than 500 MB,
then the export fails.
• If a table contains a large row that is close to or greater than 2 GB, then the table is skipped during
export.
• We strongly recommend that you use a unique name for each export task. If you don't use a unique
task name, you might receive the following error message:

ExportTaskAlreadyExistsFault: An error occurred (ExportTaskAlreadyExists) when calling the
StartExportTask operation: The export task with the ID xxxxx already exists.
• You can delete a snapshot while you're exporting its data to S3, but you're still charged for the storage
costs for that snapshot until the export task has completed.
• You can't restore exported snapshot data from S3 to a new DB instance.

Overview of exporting snapshot data


You use the following process to export DB snapshot data to an Amazon S3 bucket. For more details, see
the following sections.

1. Identify the snapshot to export.

Use an existing automated or manual snapshot, or create a manual snapshot of a DB instance.


2. Set up access to the Amazon S3 bucket.

A bucket is a container for Amazon S3 objects or files. To provide the information to access a bucket,
take the following steps:

a. Identify the S3 bucket where the snapshot is to be exported to. The S3 bucket must be in the
same AWS Region as the snapshot. For more information, see Identifying the Amazon S3 bucket
for export (p. 644).
b. Create an AWS Identity and Access Management (IAM) role that grants the snapshot export task
access to the S3 bucket. For more information, see Providing access to an Amazon S3 bucket
using an IAM role (p. 644).
3. Create a symmetric encryption AWS KMS key for the server-side encryption. The KMS key is used by
the snapshot export task to set up AWS KMS server-side encryption when writing the export data
to S3. The KMS key policy must include both the kms:Encrypt and kms:Decrypt permissions. For
more information on using KMS keys in Amazon RDS, see AWS KMS key management (p. 2589).

If you have a deny statement in your KMS key policy, make sure to explicitly exclude the AWS service
principal export.rds.amazonaws.com.

You can use a KMS key within your AWS account, or you can use a cross-account KMS key. For more
information, see Using a cross-account AWS KMS key for encrypting Amazon S3 exports (p. 646).
4. Export the snapshot to Amazon S3 using the console or the start-export-task CLI command.
For more information, see Exporting a DB snapshot to an Amazon S3 bucket (p. 647).
5. To access your exported data in the Amazon S3 bucket, see Uploading, downloading, and managing
objects in the Amazon Simple Storage Service User Guide.


Setting up access to an Amazon S3 bucket


To export DB snapshot data to an Amazon S3 file, you first give the snapshot permission to access the
Amazon S3 bucket. You then create an IAM role to allow the Amazon RDS service to write to the Amazon
S3 bucket.

Topics
• Identifying the Amazon S3 bucket for export (p. 644)
• Providing access to an Amazon S3 bucket using an IAM role (p. 644)
• Using a cross-account Amazon S3 bucket (p. 646)
• Using a cross-account AWS KMS key for encrypting Amazon S3 exports (p. 646)

Identifying the Amazon S3 bucket for export


Identify the Amazon S3 bucket to export the DB snapshot to. Use an existing S3 bucket or create a new
S3 bucket.
Note
The S3 bucket to export to must be in the same AWS Region as the snapshot.

For more information about working with Amazon S3 buckets, see the following in the Amazon Simple
Storage Service User Guide:

• How do I view the properties for an S3 bucket?


• How do I enable default encryption for an Amazon S3 bucket?
• How do I create an S3 bucket?
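
If you prefer to create a new bucket from the command line, the following is a minimal sketch; the bucket name matches the placeholder used later in this section, and the Region shown is only an example that you replace with the Region of your snapshot.

aws s3 mb s3://your-s3-bucket --region us-west-2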

Providing access to an Amazon S3 bucket using an IAM role


Before you export DB snapshot data to Amazon S3, give the snapshot export tasks write-access
permission to the Amazon S3 bucket.

To grant this permission, create an IAM policy that provides access to the bucket, then create an IAM role
and attach the policy to the role. You later assign the IAM role to your snapshot export task.
Important
If you plan to use the AWS Management Console to export your snapshot, you can choose to
create the IAM policy and the role automatically when you export the snapshot. For instructions,
see Exporting a DB snapshot to an Amazon S3 bucket (p. 647).

To give DB snapshot tasks access to Amazon S3

1. Create an IAM policy. This policy provides the bucket and object permissions that allow your
snapshot export task to access Amazon S3.

In the policy, include the following required actions to allow the transfer of files from Amazon RDS
to an S3 bucket:

• s3:PutObject*
• s3:GetObject*
• s3:ListBucket
• s3:DeleteObject*
• s3:GetBucketLocation


In the policy, include the following resources to identify the S3 bucket and objects in the bucket. The
following list of resources shows the Amazon Resource Name (ARN) format for accessing Amazon S3.

• arn:aws:s3:::your-s3-bucket
• arn:aws:s3:::your-s3-bucket/*

For more information on creating an IAM policy for Amazon RDS, see Creating and using an IAM
policy for IAM database access (p. 2646). See also Tutorial: Create and attach your first customer
managed policy in the IAM User Guide.

The following AWS CLI command creates an IAM policy named ExportPolicy with these options. It
grants access to a bucket named your-s3-bucket.
Note
After you create the policy, note the ARN of the policy. You need the ARN for a subsequent
step when you attach the policy to an IAM role.

aws iam create-policy --policy-name ExportPolicy --policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ExportPolicy",
"Effect": "Allow",
"Action": [
"s3:PutObject*",
"s3:ListBucket",
"s3:GetObject*",
"s3:DeleteObject*",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::your-s3-bucket",
"arn:aws:s3:::your-s3-bucket/*"
]
}
]
}'

2. Create an IAM role, so that Amazon RDS can assume this IAM role on your behalf to access your
Amazon S3 buckets. For more information, see Creating a role to delegate permissions to an IAM
user in the IAM User Guide.

The following example shows using the AWS CLI command to create a role named rds-s3-
export-role.

aws iam create-role --role-name rds-s3-export-role --assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "export.rds.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}'

3. Attach the IAM policy that you created to the IAM role that you created.


The following AWS CLI command attaches the policy created earlier to the role named rds-s3-
export-role. Replace your-policy-arn with the policy ARN that you noted in an earlier step.

aws iam attach-role-policy --policy-arn your-policy-arn --role-name rds-s3-export-role
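
To confirm that the policy is attached, you can list the role's attached managed policies. This optional verification sketch uses the role name from the previous steps:

aws iam list-attached-role-policies --role-name rds-s3-export-role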

Using a cross-account Amazon S3 bucket


You can use Amazon S3 buckets across AWS accounts. To use a cross-account bucket, add a bucket policy
to allow access to the IAM role that you're using for the S3 exports. For more information, see Example 2:
Bucket owner granting cross-account bucket permissions.

• Attach a bucket policy to your bucket, as shown in the following example.

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789012:role/Admin"
},
"Action": [
"s3:PutObject*",
"s3:ListBucket",
"s3:GetObject*",
"s3:DeleteObject*",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::mycrossaccountbucket",
"arn:aws:s3:::mycrossaccountbucket/*"
]
}
]
}
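
To apply the bucket policy from the command line, you might use the put-bucket-policy command. In this sketch, the policy JSON is assumed to be saved in a local file named cross-account-bucket-policy.json (a hypothetical file name):

aws s3api put-bucket-policy \
    --bucket mycrossaccountbucket \
    --policy file://cross-account-bucket-policy.json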

Using a cross-account AWS KMS key for encrypting Amazon S3 exports


You can use a cross-account AWS KMS key to encrypt Amazon S3 exports. First, you add a key policy to
the local account, then you add IAM policies in the external account. For more information, see Allowing
users in other accounts to use a KMS key.

To use a cross-account KMS key

1. Add a key policy to the local account.

The following example gives ExampleRole and ExampleUser in the external account
444455556666 permissions in the local account 123456789012.

{
"Sid": "Allow an external account to use this KMS key",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::444455556666:role/ExampleRole",
"arn:aws:iam::444455556666:user/ExampleUser"


]
},
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:CreateGrant",
"kms:DescribeKey",
"kms:RetireGrant"
],
"Resource": "*"
}

2. Add IAM policies to the external account.

The following example IAM policy allows the principal to use the KMS key in account 123456789012
for cryptographic operations. To give this permission to ExampleRole and ExampleUser in
account 444455556666, attach the policy to them in that account.

{
"Sid": "Allow use of KMS key in account 123456789012",
"Effect": "Allow",
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:CreateGrant",
"kms:DescribeKey",
"kms:RetireGrant"
],
"Resource": "arn:aws:kms:us-
west-2:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
}

Exporting a DB snapshot to an Amazon S3 bucket


You can have up to five concurrent DB snapshot export tasks in progress per AWS account.
Note
Exporting RDS snapshots can take a while depending on your database type and size. The
export task first restores and scales the entire database before extracting the data to Amazon
S3. The task's progress during this phase displays as Starting. When the task switches to
exporting data to S3, progress displays as In progress.
The time it takes for the export to complete depends on the data stored in the database. For
example, tables with well-distributed numeric primary key or index columns export the fastest.
Tables that don't contain a column suitable for partitioning and tables with only one index on
a string-based column take longer. This longer export time occurs because the export uses a
slower single-threaded process.

You can export a DB snapshot to Amazon S3 using the AWS Management Console, the AWS CLI, or the
RDS API.

If you use a Lambda function to export a snapshot, add the kms:DescribeKey action to the Lambda
function policy. For more information, see AWS Lambda permissions.
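
If you manage the Lambda function's execution role with the AWS CLI, the following is a minimal sketch of adding that permission as an inline policy. The role name, policy name, and key ARN are placeholder values that you replace with your own:

aws iam put-role-policy \
    --role-name my-lambda-execution-role \
    --policy-name AllowDescribeKey \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "kms:DescribeKey",
                "Resource": "arn:aws:kms:us-west-2:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
            }
        ]
    }'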

Console
The Export to Amazon S3 console option appears only for snapshots that can be exported to Amazon
S3. A snapshot might not be available for export because of the following reasons:


• The DB engine isn't supported for S3 export.


• The DB instance version isn't supported for S3 export.
• S3 export isn't supported in the AWS Region where the snapshot was created.

To export a DB snapshot

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. From the tabs, choose the type of snapshot that you want to export.
4. In the list of snapshots, choose the snapshot that you want to export.
5. For Actions, choose Export to Amazon S3.

The Export to Amazon S3 window appears.


6. For Export identifier, enter a name to identify the export task. This value is also used for the name
of the file created in the S3 bucket.
7. Choose the data to be exported:

• Choose All to export all data in the snapshot.


• Choose Partial to export specific parts of the snapshot. To identify which parts of the snapshot to
export, enter one or more databases, schemas, or tables for Identifiers, separated by spaces.

Use the following format:

database[.schema][.table] database2[.schema2][.table2] ... databasen[.scheman][.tablen]

For example:

mydatabase mydatabase2.myschema1 mydatabase2.myschema2.mytable1 mydatabase2.myschema2.mytable2

8. For S3 bucket, choose the bucket to export to.

To assign the exported data to a folder path in the S3 bucket, enter the optional path for S3 prefix.
9. For IAM role, either choose a role that grants you write access to your chosen S3 bucket, or create a
new role.

• If you created a role by following the steps in Providing access to an Amazon S3 bucket using an
IAM role (p. 644), choose that role.
• If you didn't create a role that grants you write access to your chosen S3 bucket, then choose
Create a new role to create the role automatically. Next, enter a name for the role in IAM role
name.
10. For AWS KMS key, enter the ARN for the key to use for encrypting the exported data.
11. Choose Export to Amazon S3.

AWS CLI

To export a DB snapshot to Amazon S3 using the AWS CLI, use the start-export-task command with the
following required options:

• --export-task-identifier


• --source-arn
• --s3-bucket-name
• --iam-role-arn
• --kms-key-id

In the following examples, the snapshot export task is named my-snapshot-export, which exports a
snapshot to an S3 bucket named my-export-bucket.

Example

For Linux, macOS, or Unix:

aws rds start-export-task \
    --export-task-identifier my-snapshot-export \
    --source-arn arn:aws:rds:AWS_Region:123456789012:snapshot:snapshot-name \
    --s3-bucket-name my-export-bucket \
    --iam-role-arn iam-role \
    --kms-key-id my-key

For Windows:

aws rds start-export-task ^
    --export-task-identifier my-snapshot-export ^
    --source-arn arn:aws:rds:AWS_Region:123456789012:snapshot:snapshot-name ^
    --s3-bucket-name my-export-bucket ^
    --iam-role-arn iam-role ^
    --kms-key-id my-key

Sample output follows.

{
"Status": "STARTING",
"IamRoleArn": "iam-role",
"ExportTime": "2019-08-12T01:23:53.109Z",
"S3Bucket": "my-export-bucket",
"PercentProgress": 0,
"KmsKeyId": "my-key",
"ExportTaskIdentifier": "my-snapshot-export",
"TotalExtractedDataInGB": 0,
"TaskStartTime": "2019-11-13T19:46:00.173Z",
"SourceArn": "arn:aws:rds:AWS_Region:123456789012:snapshot:snapshot-name"
}

To provide a folder path in the S3 bucket for the snapshot export, include the --s3-prefix option in
the start-export-task command.
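
For example, the following sketch exports to a folder named my-prefix in the bucket; the other values are the placeholders used in the preceding examples. For Linux, macOS, or Unix:

aws rds start-export-task \
    --export-task-identifier my-snapshot-export \
    --source-arn arn:aws:rds:AWS_Region:123456789012:snapshot:snapshot-name \
    --s3-bucket-name my-export-bucket \
    --s3-prefix my-prefix \
    --iam-role-arn iam-role \
    --kms-key-id my-key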

RDS API

To export a DB snapshot to Amazon S3 using the Amazon RDS API, use the StartExportTask operation
with the following required parameters:

• ExportTaskIdentifier
• SourceArn
• S3BucketName
• IamRoleArn


• KmsKeyId

Monitoring snapshot exports


You can monitor DB snapshot exports using the AWS Management Console, the AWS CLI, or the RDS API.

Console

To monitor DB snapshot exports

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. To view the list of snapshot exports, choose the Exports in Amazon S3 tab.
4. To view information about a specific snapshot export, choose the export task.

AWS CLI

To monitor DB snapshot exports using the AWS CLI, use the describe-export-tasks command.

The following example shows how to display current information about all of your snapshot exports.

Example

aws rds describe-export-tasks

{
"ExportTasks": [
{
"Status": "CANCELED",
"TaskEndTime": "2019-11-01T17:36:46.961Z",
"S3Prefix": "something",
"ExportTime": "2019-10-24T20:23:48.364Z",
"S3Bucket": "examplebucket",
"PercentProgress": 0,
"KmsKeyId": "arn:aws:kms:AWS_Region:123456789012:key/K7MDENG/
bPxRfiCYEXAMPLEKEY",
"ExportTaskIdentifier": "anewtest",
"IamRoleArn": "arn:aws:iam::123456789012:role/export-to-s3",
"TotalExtractedDataInGB": 0,
"TaskStartTime": "2019-10-25T19:10:58.885Z",
"SourceArn": "arn:aws:rds:AWS_Region:123456789012:snapshot:parameter-groups-
test"
},
{
"Status": "COMPLETE",
"TaskEndTime": "2019-10-31T21:37:28.312Z",
"WarningMessage": "{\"skippedTables\":[],\"skippedObjectives\":[],\"general\":
[{\"reason\":\"FAILED_TO_EXTRACT_TABLES_LIST_FOR_DATABASE\"}]}",
"S3Prefix": "",
"ExportTime": "2019-10-31T06:44:53.452Z",
"S3Bucket": "examplebucket1",
"PercentProgress": 100,
"KmsKeyId": "arn:aws:kms:AWS_Region:123456789012:key/2Zp9Utk/
h3yCo8nvbEXAMPLEKEY",
"ExportTaskIdentifier": "thursday-events-test",
"IamRoleArn": "arn:aws:iam::123456789012:role/export-to-s3",
"TotalExtractedDataInGB": 263,
"TaskStartTime": "2019-10-31T20:58:06.998Z",


"SourceArn":
"arn:aws:rds:AWS_Region:123456789012:snapshot:rds:example-1-2019-10-31-06-44"
},
{
"Status": "FAILED",
"TaskEndTime": "2019-10-31T02:12:36.409Z",
"FailureCause": "The S3 bucket edgcuc-export isn't located in the current AWS
Region. Please, review your S3 bucket name and retry the export.",
"S3Prefix": "",
"ExportTime": "2019-10-30T06:45:04.526Z",
"S3Bucket": "examplebucket2",
"PercentProgress": 0,
"KmsKeyId": "arn:aws:kms:AWS_Region:123456789012:key/2Zp9Utk/
h3yCo8nvbEXAMPLEKEY",
"ExportTaskIdentifier": "wednesday-afternoon-test",
"IamRoleArn": "arn:aws:iam::123456789012:role/export-to-s3",
"TotalExtractedDataInGB": 0,
"TaskStartTime": "2019-10-30T22:43:40.034Z",
"SourceArn":
"arn:aws:rds:AWS_Region:123456789012:snapshot:rds:example-1-2019-10-30-06-45"
}
]
}

To display information about a specific snapshot export, include the --export-task-identifier option with the describe-export-tasks command. To filter the output, include the --filters option. For more options, see the describe-export-tasks command.
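
For example, the following sketch returns only the export task created earlier in this section:

aws rds describe-export-tasks --export-task-identifier my-snapshot-export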

RDS API

To display information about DB snapshot exports using the Amazon RDS API, use the
DescribeExportTasks operation.

To track completion of the export workflow or to initiate another workflow, you can subscribe to Amazon
Simple Notification Service topics. For more information on Amazon SNS, see Working with Amazon RDS
event notification (p. 855).

Canceling a snapshot export task


You can cancel a DB snapshot export task using the AWS Management Console, the AWS CLI, or the RDS
API.
Note
Canceling a snapshot export task doesn't remove any data that was exported to Amazon S3. For
information about how to delete the data using the console, see How do I delete objects from
an S3 bucket? To delete the data using the CLI, use the delete-object command.
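
For example, the following sketch deletes a single exported object. The bucket name comes from the sample output in this section, and the object key is a placeholder that you replace with the key of the file to remove:

aws s3api delete-object \
    --bucket examplebucket \
    --key my_export/example-object.parquet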

Console

To cancel a snapshot export task

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the Exports in Amazon S3 tab.
4. Choose the snapshot export task that you want to cancel.
5. Choose Cancel.
6. Choose Cancel export task on the confirmation page.


AWS CLI

To cancel a snapshot export task using the AWS CLI, use the cancel-export-task command. The command
requires the --export-task-identifier option.

Example

aws rds cancel-export-task --export-task-identifier my_export


{
"Status": "CANCELING",
"S3Prefix": "",
"ExportTime": "2019-08-12T01:23:53.109Z",
"S3Bucket": "examplebucket",
"PercentProgress": 0,
"KmsKeyId": "arn:aws:kms:AWS_Region:123456789012:key/K7MDENG/bPxRfiCYEXAMPLEKEY",
"ExportTaskIdentifier": "my_export",
"IamRoleArn": "arn:aws:iam::123456789012:role/export-to-s3",
"TotalExtractedDataInGB": 0,
"TaskStartTime": "2019-11-13T19:46:00.173Z",
"SourceArn": "arn:aws:rds:AWS_Region:123456789012:snapshot:export-example-1"
}

RDS API

To cancel a snapshot export task using the Amazon RDS API, use the CancelExportTask operation with
the ExportTaskIdentifier parameter.

Failure messages for Amazon S3 export tasks


The following table describes the messages that are returned when Amazon S3 export tasks fail.

Failure message: An unknown internal error occurred.
Description: The task has failed because of an unknown error, exception, or failure.

Failure message: An unknown internal error occurred writing the export task's metadata to the S3 bucket [bucket name].
Description: The task has failed because of an unknown error, exception, or failure.

Failure message: The RDS export failed to write the export task's metadata because it can't assume the IAM role [role ARN].
Description: The export task assumes your IAM role to validate whether it is allowed to write metadata to your S3 bucket. If the task can't assume your IAM role, it fails.

Failure message: The RDS export failed to write the export task's metadata to the S3 bucket [bucket name] using the IAM role [role ARN] with the KMS key [key ID]. Error code: [error code]
Description: One or more permissions are missing, so the export task can't access the S3 bucket. This failure message is raised when receiving one of the following error codes:
• AWSSecurityTokenServiceException with the error code AccessDenied
• AmazonS3Exception with the error code NoSuchBucket, AccessDenied, KMS.KMSInvalidStateException, 403 Forbidden, or KMS.DisabledException
These error codes indicate settings are misconfigured for the IAM role, S3 bucket, or KMS key.

Failure message: The IAM role [role ARN] isn't authorized to call [S3 action] on the S3 bucket [bucket name]. Review your permissions and retry the export.
Description: The IAM policy is misconfigured. Permission for the specific S3 action on the S3 bucket is missing, which causes the export task to fail.

Failure message: KMS key check failed. Check the credentials on your KMS key and try again.
Description: The KMS key credential check failed.

Failure message: S3 credential check failed. Check the permissions on your S3 bucket and IAM policy.
Description: The S3 credential check failed.

Failure message: The S3 bucket [bucket name] isn't valid. Either it isn't located in the current AWS Region or it doesn't exist. Review your S3 bucket name and retry the export.
Description: The S3 bucket is invalid.

Failure message: The S3 bucket [bucket name] isn't located in the current AWS Region. Review your S3 bucket name and retry the export.
Description: The S3 bucket is in the wrong AWS Region.

Troubleshooting PostgreSQL permissions errors


When exporting PostgreSQL databases to Amazon S3, you might see a PERMISSIONS_DO_NOT_EXIST
error stating that certain tables were skipped. This error usually occurs when the superuser that you
specified when creating the DB instance doesn't have permissions to access those tables.

To fix this error, run the following command:

GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA schema_name TO superuser_name

For more information on superuser privileges, see Master user account privileges (p. 2682).

File naming convention


Exported data for specific tables is stored in the format base_prefix/files, where the base prefix is
the following:

export_identifier/database_name/schema_name.table_name/

For example:

export-1234567890123-459/rdststdb/rdststdb.DataInsert_7ADB5D19965123A2/

There are two conventions for how files are named. The current convention is the following:

partition_index/part-00000-random_uuid.format-based_extension

For example:

1/part-00000-c5a881bb-58ff-4ee6-1111-b41ecff340a3-c000.gz.parquet
2/part-00000-d7a881cc-88cc-5ab7-2222-c41ecab340a4-c000.gz.parquet
3/part-00000-f5a991ab-59aa-7fa6-3333-d41eccd340a7-c000.gz.parquet


The older convention is the following:

part-partition_index-random_uuid.format-based_extension

For example:

part-00000-c5a881bb-58ff-4ee6-1111-b41ecff340a3-c000.gz.parquet
part-00001-d7a881cc-88cc-5ab7-2222-c41ecab340a4-c000.gz.parquet
part-00002-f5a991ab-59aa-7fa6-3333-d41eccd340a7-c000.gz.parquet

The file naming convention is subject to change. Therefore, when reading target tables, we recommend
that you read everything inside the base prefix for the table.
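
For example, the following sketch lists every exported file under a table's base prefix before you query it. The bucket name and prefix are placeholders based on the examples above:

aws s3 ls s3://my-export-bucket/export-1234567890123-459/rdststdb/rdststdb.DataInsert_7ADB5D19965123A2/ --recursive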

Data conversion when exporting to an Amazon S3 bucket


When you export a DB snapshot to an Amazon S3 bucket, Amazon RDS converts data to, exports data
in, and stores data in the Parquet format. For more information about Parquet, see the Apache Parquet
website.

Parquet stores all data as one of the following primitive types:

• BOOLEAN
• INT32
• INT64
• INT96
• FLOAT
• DOUBLE
• BYTE_ARRAY – A variable-length byte array, also known as binary
• FIXED_LEN_BYTE_ARRAY – A fixed-length byte array used when the values have a constant size

The Parquet data types are few to reduce the complexity of reading and writing the format. Parquet
provides logical types for extending primitive types. A logical type is implemented as an annotation with
the data in a LogicalType metadata field. The logical type annotation explains how to interpret the
primitive type.

When the STRING logical type annotates a BYTE_ARRAY type, it indicates that the byte array should be
interpreted as a UTF-8 encoded character string. After an export task completes, Amazon RDS notifies
you if any string conversion occurred. The underlying data exported is always the same as the data from
the source. However, due to the encoding difference in UTF-8, some characters might appear different
from the source when read in tools such as Athena.

For more information, see Parquet logical type definitions in the Parquet documentation.

Topics
• MySQL and MariaDB data type mapping to Parquet (p. 654)
• PostgreSQL data type mapping to Parquet (p. 657)

MySQL and MariaDB data type mapping to Parquet


The following table shows the mapping from MySQL and MariaDB data types to Parquet data types
when data is converted and exported to Amazon S3.


Source data type    Parquet primitive type    Logical type annotation    Conversion notes

Numeric data types

BIGINT INT64

BIGINT UNSIGNED    FIXED_LEN_BYTE_ARRAY(9)    DECIMAL(20,0)    Parquet supports only signed types, so the mapping requires an additional byte (8 plus 1) to store the BIGINT_UNSIGNED type.

BIT BYTE_ARRAY

DECIMAL    INT32    DECIMAL(p,s)    If the source value is less than 2^31, it's stored as INT32.
DECIMAL    INT64    DECIMAL(p,s)    If the source value is 2^31 or greater, but less than 2^63, it's stored as INT64.
DECIMAL    FIXED_LEN_BYTE_ARRAY(N)    DECIMAL(p,s)    If the source value is 2^63 or greater, it's stored as FIXED_LEN_BYTE_ARRAY(N).
DECIMAL    BYTE_ARRAY    STRING    Parquet doesn't support Decimal precision greater than 38. The Decimal value is converted to a string in a BYTE_ARRAY type and encoded as UTF8.

DOUBLE DOUBLE

FLOAT DOUBLE

INT INT32

INT UNSIGNED INT64

MEDIUMINT INT32

MEDIUMINT UNSIGNED INT64

NUMERIC    INT32    DECIMAL(p,s)    If the source value is less than 2^31, it's stored as INT32.
NUMERIC    INT64    DECIMAL(p,s)    If the source value is 2^31 or greater, but less than 2^63, it's stored as INT64.
NUMERIC    FIXED_LEN_BYTE_ARRAY(N)    DECIMAL(p,s)    If the source value is 2^63 or greater, it's stored as FIXED_LEN_BYTE_ARRAY(N).


NUMERIC    BYTE_ARRAY    STRING    Parquet doesn't support Numeric precision greater than 38. This Numeric value is converted to a string in a BYTE_ARRAY type and encoded as UTF8.

SMALLINT INT32

SMALLINT UNSIGNED INT32

TINYINT INT32

TINYINT UNSIGNED INT32

String data types

BINARY BYTE_ARRAY

BLOB BYTE_ARRAY

CHAR BYTE_ARRAY

ENUM BYTE_ARRAY STRING

LINESTRING BYTE_ARRAY

LONGBLOB BYTE_ARRAY

LONGTEXT BYTE_ARRAY STRING

MEDIUMBLOB BYTE_ARRAY

MEDIUMTEXT BYTE_ARRAY STRING

MULTILINESTRING BYTE_ARRAY

SET BYTE_ARRAY STRING

TEXT BYTE_ARRAY STRING

TINYBLOB BYTE_ARRAY

TINYTEXT BYTE_ARRAY STRING

VARBINARY BYTE_ARRAY

VARCHAR BYTE_ARRAY STRING

Date and time data types

DATE    BYTE_ARRAY    STRING    A date is converted to a string in a BYTE_ARRAY type and encoded as UTF8.

DATETIME INT64 TIMESTAMP_MICROS


TIME    BYTE_ARRAY    STRING    A TIME type is converted to a string in a BYTE_ARRAY and encoded as UTF8.

TIMESTAMP INT64 TIMESTAMP_MICROS

YEAR INT32

Geometric data types

GEOMETRY BYTE_ARRAY

GEOMETRYCOLLECTION BYTE_ARRAY

MULTIPOINT BYTE_ARRAY

MULTIPOLYGON BYTE_ARRAY

POINT BYTE_ARRAY

POLYGON BYTE_ARRAY

JSON data type

JSON BYTE_ARRAY STRING

PostgreSQL data type mapping to Parquet


The following table shows the mapping from PostgreSQL data types to Parquet data types when data is
converted and exported to Amazon S3.

PostgreSQL data type    Parquet primitive type    Logical type annotation    Mapping notes

Numeric data types

BIGINT INT64

BIGSERIAL INT64

DECIMAL    BYTE_ARRAY    STRING    A DECIMAL type is converted to a string in a BYTE_ARRAY type and encoded as UTF8. This conversion is to avoid complications due to data precision and data values that are not a number (NaN).

DOUBLE PRECISION DOUBLE

INTEGER INT32


MONEY BYTE_ARRAY STRING

REAL FLOAT

SERIAL INT32

SMALLINT INT32 INT_16

SMALLSERIAL INT32 INT_16

String and related data types

ARRAY    BYTE_ARRAY    STRING    An array is converted to a string and encoded as BINARY (UTF8). This conversion is to avoid complications due to data precision, data values that are not a number (NaN), and time data values.

BIT BYTE_ARRAY STRING

BIT VARYING BYTE_ARRAY STRING

BYTEA BINARY

CHAR BYTE_ARRAY STRING

CHAR(N) BYTE_ARRAY STRING

ENUM BYTE_ARRAY STRING

NAME BYTE_ARRAY STRING

TEXT BYTE_ARRAY STRING

TEXT SEARCH BYTE_ARRAY STRING

VARCHAR(N) BYTE_ARRAY STRING

XML BYTE_ARRAY STRING

Date and time data types

DATE BYTE_ARRAY STRING

INTERVAL BYTE_ARRAY STRING

TIME BYTE_ARRAY STRING

TIME WITH TIME ZONE BYTE_ARRAY STRING

TIMESTAMP BYTE_ARRAY STRING

TIMESTAMP WITH TIME ZONE    BYTE_ARRAY    STRING


Geometric data types

BOX BYTE_ARRAY STRING

CIRCLE BYTE_ARRAY STRING

LINE BYTE_ARRAY STRING

LINESEGMENT BYTE_ARRAY STRING

PATH BYTE_ARRAY STRING

POINT BYTE_ARRAY STRING

POLYGON BYTE_ARRAY STRING

JSON data types

JSON BYTE_ARRAY STRING

JSONB BYTE_ARRAY STRING

Other data types

BOOLEAN BOOLEAN

CIDR BYTE_ARRAY STRING Network data type

COMPOSITE BYTE_ARRAY STRING

DOMAIN BYTE_ARRAY STRING

INET BYTE_ARRAY STRING Network data type

MACADDR BYTE_ARRAY STRING

OBJECT IDENTIFIER N/A

PG_LSN BYTE_ARRAY STRING

RANGE BYTE_ARRAY STRING

UUID BYTE_ARRAY STRING


Restoring a DB instance to a specified time


You can restore a DB instance to a specific point in time, creating a new DB instance.

When you restore a DB instance to a point in time, you can choose the default virtual private cloud (VPC)
security group. Or you can apply a custom VPC security group to your DB instance.

Restored DB instances are automatically associated with the default DB parameter and option groups.
However, you can apply a custom parameter group and option group by specifying them during a
restore.

If the source DB instance has resource tags, RDS adds the latest tags to the restored DB instance.

RDS uploads transaction logs for DB instances to Amazon S3 every five minutes. To see the latest
restorable time for a DB instance, use the AWS CLI describe-db-instances command and look at the value
returned in the LatestRestorableTime field for the DB instance. To see the latest restorable time for
each DB instance in the Amazon RDS console, choose Automated backups.
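
For example, the following sketch returns just that field for a DB instance named mydbinstance (a placeholder name). For Linux, macOS, or Unix:

aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query 'DBInstances[0].LatestRestorableTime'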

You can restore to any point in time within your backup retention period. To see the earliest restorable
time for each DB instance, choose Automated backups in the Amazon RDS console.

Note
We recommend that you restore to the same or similar DB instance size—and IOPS if using
Provisioned IOPS storage—as the source DB instance. You might get an error if, for example, you
choose a DB instance size with an incompatible IOPS value.

Some of the database engines used by Amazon RDS have special considerations when restoring from a
point in time:

• When you restore an Oracle DB instance to a point in time, you can specify a different Oracle DB
engine, license model, and DBName (SID) to be used by the new DB instance.
• When you restore a Microsoft SQL Server DB instance to a point in time, each database within that
instance is restored to a point in time within 1 second of each other database within the instance.
Transactions that span multiple databases within the instance might be restored inconsistently.
• For a SQL Server DB instance, the OFFLINE, EMERGENCY, and SINGLE_USER modes aren't supported.
Setting any database into one of these modes causes the latest restorable time to stop moving ahead
for the whole instance.
• Some actions, such as changing the recovery model of a SQL Server database, can break the sequence
of logs that are used for point-in-time recovery. In some cases, Amazon RDS can detect this issue
and the latest restorable time is prevented from moving forward. In other cases, such as when a SQL
Server database uses the BULK_LOGGED recovery model, the break in log sequence isn't detected. It


might not be possible to restore a SQL Server DB instance to a point in time if there is a break in the
log sequence. For these reasons, Amazon RDS doesn't support changing the recovery model of SQL
Server databases.

You can also use AWS Backup to manage backups of Amazon RDS DB instances. If your DB instance
is associated with a backup plan in AWS Backup, that backup plan is used for point-in-time recovery.
Backups that were created with AWS Backup have names ending in awsbackup:AWS-Backup-job-
number. For information about AWS Backup, see the AWS Backup Developer Guide.
Note
Information in this topic applies to Amazon RDS. For information on restoring an Amazon
Aurora DB cluster, see Restoring a DB cluster to a specified time.

You can restore a DB instance to a point in time using the AWS Management Console, the AWS CLI, or
the RDS API.
Note
You can't reduce the amount of storage when you restore a DB instance. When you increase the
allocated storage, it must be by at least 10 percent. If you try to increase the value by less than
10 percent, you get an error. You can't increase the allocated storage when restoring RDS for
SQL Server DB instances.

Console
To restore a DB instance to a specified time

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Automated backups.

The automated backups are displayed on the Current Region tab.


3. Choose the DB instance that you want to restore.
4. For Actions, choose Restore to point in time.

The Restore to point in time window appears.


5. Choose Latest restorable time to restore to the latest possible time, or choose Custom to choose a
time.

If you chose Custom, enter the date and time to which you want to restore the instance.
Note
Times are shown in your local time zone, which is indicated by an offset from Coordinated
Universal Time (UTC). For example, UTC-5 is Eastern Standard Time/Central Daylight Time.
6. For DB instance identifier, enter the name of the target restored DB instance. The name must be
unique.
7. Choose other options as needed, such as DB instance class, storage, and whether you want to use
storage autoscaling.

For information about each setting, see Settings for DB instances (p. 308).
8. Choose Restore to point in time.

AWS CLI
To restore a DB instance to a specified time, use the AWS CLI command restore-db-instance-to-point-in-
time to create a new DB instance. This example also sets the allocated storage size and enables storage
autoscaling.


Resource tagging is supported for this operation. When you use the --tags option, the source DB
instance tags are ignored and the provided ones are used. Otherwise, the latest tags from the source
instance are used.

You can specify other settings. For information about each setting, see Settings for DB instances (p. 308).

Example

For Linux, macOS, or Unix:

aws rds restore-db-instance-to-point-in-time \
    --source-db-instance-identifier mysourcedbinstance \
    --target-db-instance-identifier mytargetdbinstance \
    --restore-time 2017-10-14T23:45:00.000Z \
    --allocated-storage 100 \
    --max-allocated-storage 1000

For Windows:

aws rds restore-db-instance-to-point-in-time ^
    --source-db-instance-identifier mysourcedbinstance ^
    --target-db-instance-identifier mytargetdbinstance ^
    --restore-time 2017-10-14T23:45:00.000Z ^
    --allocated-storage 100 ^
    --max-allocated-storage 1000

RDS API
To restore a DB instance to a specified time, call the Amazon RDS API
RestoreDBInstanceToPointInTime operation with the following parameters:

• SourceDBInstanceIdentifier
• TargetDBInstanceIdentifier
• RestoreTime


Deleting a DB snapshot
You can delete DB snapshots managed by Amazon RDS when you no longer need them.
Note
To delete backups managed by AWS Backup, use the AWS Backup console. For information
about AWS Backup, see the AWS Backup Developer Guide.

Deleting a DB snapshot
You can delete a manual, shared, or public DB snapshot using the AWS Management Console, the AWS
CLI, or the RDS API.

To delete a shared or public snapshot, you must sign in to the AWS account that owns the snapshot.

If you have automated DB snapshots that you want to delete without deleting the DB instance, change
the backup retention period for the DB instance to 0. The automated snapshots are deleted when
the change is applied. You can apply the change immediately if you don't want to wait until the next
maintenance period. After the change is complete, you can then re-enable automatic backups by setting
the backup retention period to a number greater than 0. For information about modifying a DB instance,
see Modifying an Amazon RDS DB instance (p. 401).
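
For example, the following sketch turns off automated backups for a DB instance named mydbinstance (a placeholder name) and applies the change immediately. For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --backup-retention-period 0 \
    --apply-immediately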

Retained automated backups and manual snapshots incur billing charges until they're deleted. For more
information, see Retention costs (p. 596).

If you deleted a DB instance, you can delete its automated DB snapshots by removing the automated
backups for the DB instance. For information about automated backups, see Working with
backups (p. 591).

Console

To delete a DB snapshot

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.

The Manual snapshots list appears.


3. Choose the DB snapshot that you want to delete.
4. For Actions, choose Delete snapshot.
5. Choose Delete on the confirmation page.

AWS CLI

You can delete a DB snapshot by using the AWS CLI command delete-db-snapshot.

The following options are used to delete a DB snapshot.

• --db-snapshot-identifier – The identifier for the DB snapshot.

Example

The following code deletes the mydbsnapshot DB snapshot.

For Linux, macOS, or Unix:


aws rds delete-db-snapshot \
    --db-snapshot-identifier mydbsnapshot

For Windows:

aws rds delete-db-snapshot ^
    --db-snapshot-identifier mydbsnapshot

RDS API

You can delete a DB snapshot by using the Amazon RDS API operation DeleteDBSnapshot.

The following parameters are used to delete a DB snapshot.

• DBSnapshotIdentifier – The identifier for the DB snapshot.


Tutorial: Restore an Amazon RDS DB instance from a DB snapshot
Often, when working with Amazon RDS you might have a DB instance that you work with occasionally
but don't need full time. For example, suppose that you have a quarterly customer survey that uses an
Amazon EC2 instance to host a customer survey website. You also have a DB instance that is used to
store the survey results. One way to save money on such a scenario is to take a DB snapshot of the DB
instance after the survey is completed. You then delete the DB instance and restore it when you need to
conduct the survey again.

When you restore the DB instance, you provide the name of the DB snapshot to restore from. You then
provide a name for the new DB instance that's created from the restore operation.

For more detailed information on restoring DB instances from snapshots, see Restoring from a DB
snapshot (p. 615).

Restoring a DB instance from a DB snapshot


Use the following procedure to restore from a snapshot in the AWS Management Console.

To restore a DB instance from a DB snapshot

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the DB snapshot that you want to restore from.
4. For Actions, choose Restore snapshot.

The Restore snapshot page appears.


5. Under DB instance settings, use the default settings for DB engine and License model (for Oracle
or Microsoft SQL Server).
6. Under Settings, for DB instance identifier enter the unique name that you want to use for the
restored DB instance, for example mynewdbinstance.

If you're restoring from a DB instance that you deleted after you made the DB snapshot, you can use
the name of that DB instance.
7. Under Availability & durability, choose whether to create a standby instance in another Availability
Zone.

For this tutorial, don't create a standby instance.


8. Under Connectivity, use the default settings for the following:

• Virtual private cloud (VPC)


• DB subnet group
• Public access
• VPC security group (firewall)
9. Choose the DB instance class.

For this tutorial, choose Burstable classes (includes t classes), and then choose db.t3.small.
10. For Encryption, use the default settings.

If the source DB instance for the snapshot was encrypted, the restored DB instance is also encrypted.
You can't make it unencrypted.
11. Expand Additional configuration at the bottom of the page.


12. Do the following under Database options:

a. Choose the DB parameter group.

For this tutorial, use the default parameter group.


b. Choose the Option group.

For this tutorial, use the default option group.


Important
In some cases, you might restore from a DB snapshot of a DB instance that uses a
persistent or permanent option. If so, make sure to choose an option group that uses
the same option.
c. For Deletion protection, choose the Enable deletion protection check box.
13. Choose Restore DB instance.

The Databases page displays the restored DB instance, with a status of Creating.


Backing up and restoring a Multi-AZ DB cluster


This section shows how to back up and restore Multi-AZ DB clusters.

Topics
• Creating a Multi-AZ DB cluster snapshot (p. 669)
• Restoring from a snapshot to a Multi-AZ DB cluster (p. 671)
• Restoring from a Multi-AZ DB cluster snapshot to a DB instance (p. 673)
• Restoring a Multi-AZ DB cluster to a specified time (p. 675)

In addition, the following topics apply to both DB instances and Multi-AZ DB clusters:

• the section called “Sharing a DB snapshot” (p. 633)


• the section called “Deleting a DB snapshot” (p. 663)


Creating a Multi-AZ DB cluster snapshot


When you create a Multi-AZ DB cluster snapshot, make sure to identify which Multi-AZ DB cluster you
are going to back up, and then give your DB cluster snapshot a name so you can restore from it later.
You can also share a Multi-AZ DB cluster snapshot. For instructions, see the section called “Sharing a DB
snapshot” (p. 633).

You can create a Multi-AZ DB cluster snapshot using the AWS Management Console, the AWS CLI, or the
RDS API.

Console

To create a DB cluster snapshot

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. In the list, choose the Multi-AZ DB cluster for which you want to take a snapshot.
4. For Actions, choose Take snapshot.

The Take DB snapshot window appears.


5. For Snapshot name, enter the name of the snapshot.
6. Choose Take snapshot.

The Snapshots page appears, with the new Multi-AZ DB cluster snapshot's status shown as Creating.
After its status is Available, you can see its creation time.

AWS CLI
You can create a Multi-AZ DB cluster snapshot by using the AWS CLI create-db-cluster-snapshot
command with the following options:

• --db-cluster-identifier
• --db-cluster-snapshot-identifier

In this example, you create a Multi-AZ DB cluster snapshot called mymultiazdbclustersnapshot for a
DB cluster called mymultiazdbcluster.

Example

For Linux, macOS, or Unix:

aws rds create-db-cluster-snapshot \
    --db-cluster-identifier mymultiazdbcluster \
    --db-cluster-snapshot-identifier mymultiazdbclustersnapshot

For Windows:

aws rds create-db-cluster-snapshot ^
    --db-cluster-identifier mymultiazdbcluster ^
    --db-cluster-snapshot-identifier mymultiazdbclustersnapshot
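
If you create the snapshot from a script, you might pause until it becomes available by using a waiter. This sketch assumes the snapshot identifier from the preceding example:

aws rds wait db-cluster-snapshot-available --db-cluster-snapshot-identifier mymultiazdbclustersnapshot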


RDS API
You can create a Multi-AZ DB cluster snapshot by using the Amazon RDS API CreateDBClusterSnapshot
operation with the following parameters:

• DBClusterIdentifier
• DBClusterSnapshotIdentifier

Deleting a Multi-AZ DB cluster snapshot


You can delete Multi-AZ DB snapshots managed by Amazon RDS when you no longer need them. For
instructions, see the section called “Deleting a DB snapshot” (p. 663).


Restoring from a snapshot to a Multi-AZ DB cluster


You can restore a snapshot to a Multi-AZ DB cluster using the AWS Management Console, the AWS CLI,
or the RDS API. You can restore each of these types of snapshots to a Multi-AZ DB cluster:

• A snapshot of a Single-AZ deployment


• A snapshot of a Multi-AZ DB instance deployment with a single DB instance
• A snapshot of a Multi-AZ DB cluster

For information about Multi-AZ deployments, see Configuring and managing a Multi-AZ
deployment (p. 492).
Tip
You can migrate a Single-AZ deployment or a Multi-AZ DB instance deployment to a Multi-AZ
DB cluster deployment by restoring a snapshot.

Console
To restore a snapshot to a Multi-AZ DB cluster

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the snapshot that you want to restore from.
4. For Actions, choose Restore snapshot.
5. On the Restore snapshot page, in Availability and durability, choose Multi-AZ DB cluster.

6. For DB cluster identifier, enter the name for your restored Multi-AZ DB cluster.
7. For the remaining sections, specify your DB cluster settings. For information about each setting, see
Settings for creating Multi-AZ DB clusters (p. 514).
8. Choose Restore DB instance.

AWS CLI
To restore a snapshot to a Multi-AZ DB cluster, use the AWS CLI command restore-db-cluster-from-
snapshot.

In the following example, you restore from a previously created snapshot named mysnapshot. You
restore to a new Multi-AZ DB cluster named mynewmultiazdbcluster. You also specify the DB


instance class used by the DB instances in the Multi-AZ DB cluster. Specify either mysql or postgres for
the DB engine.

For the --snapshot-identifier option, you can use either the name or the Amazon Resource Name
(ARN) to specify a DB cluster snapshot. However, you can use only the ARN to specify a DB snapshot.

For the --db-cluster-instance-class option, specify the DB instance class for the new Multi-
AZ DB cluster. Multi-AZ DB clusters only support specific DB instance classes, such as the db.m6gd
and db.r6gd DB instance classes. For more information about DB instance classes, see DB instance
classes (p. 11).

You can also specify other options.

Example

For Linux, macOS, or Unix:

aws rds restore-db-cluster-from-snapshot \
    --db-cluster-identifier mynewmultiazdbcluster \
    --snapshot-identifier mysnapshot \
    --engine mysql|postgres \
    --db-cluster-instance-class db.r6gd.xlarge

For Windows:

aws rds restore-db-cluster-from-snapshot ^
    --db-cluster-identifier mynewmultiazdbcluster ^
    --snapshot-identifier mysnapshot ^
    --engine mysql|postgres ^
    --db-cluster-instance-class db.r6gd.xlarge

After you restore the DB cluster, you can add the Multi-AZ DB cluster to the security group associated
with the DB cluster or DB instance that you used to create the snapshot, if applicable. Completing this
action provides the same functions of the previous DB cluster or DB instance.

RDS API
To restore a snapshot to a Multi-AZ DB cluster, call the RDS API operation
RestoreDBClusterFromSnapshot with the following parameters:

• DBClusterIdentifier
• SnapshotIdentifier
• Engine

You can also specify other optional parameters.

After you restore the DB cluster, you can add the Multi-AZ DB cluster to the security group associated
with the DB cluster or DB instance that you used to create the snapshot, if applicable. Completing this
action provides the same functions of the previous DB cluster or DB instance.


Restoring from a Multi-AZ DB cluster snapshot to a DB instance
A Multi-AZ DB cluster snapshot is a storage volume snapshot of your DB cluster, backing up the entire
DB cluster and not just individual databases. You can restore a Multi-AZ DB cluster snapshot to a Single-
AZ deployment or Multi-AZ DB instance deployment. For information about Multi-AZ deployments, see
Configuring and managing a Multi-AZ deployment (p. 492).
Note
You can also restore a Multi-AZ DB cluster snapshot to a new Multi-AZ DB cluster. For
instructions, see Restoring from a snapshot to a Multi-AZ DB cluster (p. 671).

Use the AWS Management Console, the AWS CLI, or the RDS API to restore a Multi-AZ DB cluster
snapshot to a Single-AZ deployment or Multi-AZ DB instance deployment.

Console
To restore a Multi-AZ DB cluster snapshot to a Single-AZ deployment or Multi-AZ DB
instance deployment

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the Multi-AZ DB cluster snapshot that you want to restore from.
4. For Actions, choose Restore snapshot.
5. On the Restore snapshot page, in Availability and durability, choose one of the following:

• Single DB instance – Restores the snapshot to one DB instance with no standby DB instance.
• Multi-AZ DB instance – Restores the snapshot to a Multi-AZ DB instance deployment with one
primary DB instance and one standby DB instance.
6. For DB instance identifier, enter the name for your restored DB instance.
7. For the remaining sections, specify your DB instance settings. For information about each setting,
see Settings for DB instances (p. 308).
8. Choose Restore DB instance.

AWS CLI
To restore a Multi-AZ DB cluster snapshot to a DB instance deployment, use the AWS CLI command
restore-db-instance-from-db-snapshot.

In the following example, you restore from a previously created Multi-AZ DB cluster snapshot named
myclustersnapshot. You restore to a new Multi-AZ DB instance deployment with a primary DB
instance named mynewdbinstance. For the --db-cluster-snapshot-identifier option, specify
the name of the Multi-AZ DB cluster snapshot.

For the --db-instance-class option, specify the DB instance class for the new DB instance
deployment. For more information about DB instance classes, see DB instance classes (p. 11).

You can also specify other options.

Example

For Linux, macOS, or Unix:


aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier mynewdbinstance \
    --db-cluster-snapshot-identifier myclustersnapshot \
    --engine mysql \
    --multi-az \
    --db-instance-class db.r6g.xlarge

For Windows:

aws rds restore-db-instance-from-db-snapshot ^
    --db-instance-identifier mynewdbinstance ^
    --db-cluster-snapshot-identifier myclustersnapshot ^
    --engine mysql ^
    --multi-az ^
    --db-instance-class db.r6g.xlarge

After you restore the DB instance, you can add it to the security group associated with the Multi-AZ DB
cluster that you used to create the snapshot, if applicable. Completing this action provides the same
functions of the previous Multi-AZ DB cluster.

RDS API
To restore a Multi-AZ DB cluster snapshot to a DB instance deployment, call the RDS API operation
RestoreDBInstanceFromDBSnapshot with the following parameters:

• DBInstanceIdentifier
• DBClusterSnapshotIdentifier
• Engine

You can also specify other optional parameters.

After you restore the DB instance, you can add it to the security group associated with the Multi-AZ DB
cluster that you used to create the snapshot, if applicable. Completing this action provides the same
functions of the previous Multi-AZ DB cluster.


Restoring a Multi-AZ DB cluster to a specified time


You can restore a Multi-AZ DB cluster to a specific point in time, creating a new Multi-AZ DB cluster.

RDS uploads transaction logs for Multi-AZ DB clusters to Amazon S3 continuously. You can restore
to any point in time within your backup retention period. To see the earliest restorable time for a
Multi-AZ DB cluster, use the AWS CLI describe-db-clusters command. Look at the value returned in the
EarliestRestorableTime field for the DB cluster. To see the latest restorable time for a Multi-AZ DB
cluster, look at the value returned in the LatestRestorableTime field for the DB cluster.
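
For example, the following sketch returns both values for a Multi-AZ DB cluster named mymultiazdbcluster (a placeholder name). For Linux, macOS, or Unix:

aws rds describe-db-clusters \
    --db-cluster-identifier mymultiazdbcluster \
    --query 'DBClusters[0].[EarliestRestorableTime,LatestRestorableTime]'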

When you restore a Multi-AZ DB cluster to a point in time, you can choose the default VPC security group
for your Multi-AZ DB cluster. Or you can apply a custom VPC security group to your Multi-AZ DB cluster.

Restored Multi-AZ DB clusters are automatically associated with the default DB cluster parameter group.
However, you can apply a custom DB cluster parameter group by specifying it during a restore.

If the source DB cluster has resource tags, RDS adds the latest tags to the restored DB cluster.
Note
We recommend that you restore to the same or similar Multi-AZ DB cluster size as the source DB
cluster. We also recommend that you restore with the same or similar IOPS value if you're using
Provisioned IOPS storage. You might get an error if, for example, you choose a DB cluster size
with an incompatible IOPS value.

You can restore a Multi-AZ DB cluster to a point in time using the AWS Management Console, the AWS
CLI, or the RDS API.

Console

To restore a Multi-AZ DB cluster to a specified time

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the Multi-AZ DB cluster that you want to restore.
4. For Actions, choose Restore to point in time.

The Restore to point in time window appears.


5. Choose Latest restorable time to restore to the latest possible time, or choose Custom to choose a
time.

If you chose Custom, enter the date and time to which you want to restore the Multi-AZ DB cluster.
Note
Times are shown in your local time zone, which is indicated by an offset from Coordinated
Universal Time (UTC). For example, UTC-5 is Eastern Standard Time/Central Daylight Time.
6. For DB cluster identifier, enter the name for your restored Multi-AZ DB cluster.
7. In Availability and durability, choose Multi-AZ DB cluster.


8. In DB instance class, choose a DB instance class.

Currently, Multi-AZ DB clusters only support db.m6gd and db.r6gd DB instance classes. For more
information about DB instance classes, see DB instance classes (p. 11).
9. For the remaining sections, specify your DB cluster settings. For information about each setting, see
Settings for creating Multi-AZ DB clusters (p. 514).
10. Choose Restore to point in time.

AWS CLI
To restore a Multi-AZ DB cluster to a specified time, use the AWS CLI command restore-db-cluster-to-
point-in-time to create a new Multi-AZ DB cluster.

Currently, Multi-AZ DB clusters only support db.m6gd and db.r6gd DB instance classes. For more
information about DB instance classes, see DB instance classes (p. 11).

Example

For Linux, macOS, or Unix:

aws rds restore-db-cluster-to-point-in-time \
    --source-db-cluster-identifier mysourcemultiazdbcluster \
    --db-cluster-identifier mytargetmultiazdbcluster \
    --restore-to-time 2021-08-14T23:45:00.000Z \
    --db-cluster-instance-class db.r6gd.xlarge

For Windows:

aws rds restore-db-cluster-to-point-in-time ^
    --source-db-cluster-identifier mysourcemultiazdbcluster ^
    --db-cluster-identifier mytargetmultiazdbcluster ^
    --restore-to-time 2021-08-14T23:45:00.000Z ^
    --db-cluster-instance-class db.r6gd.xlarge

RDS API
To restore a DB cluster to a specified time, call the Amazon RDS API RestoreDBClusterToPointInTime
operation with the following parameters:

• SourceDBClusterIdentifier
• DBClusterIdentifier


• RestoreToTime


Monitoring metrics in an Amazon RDS instance

In the following sections, you can find an overview of Amazon RDS monitoring and an explanation
about how to access metrics. To learn how to monitor events, logs, and database activity streams, see
Monitoring events, logs, and streams in an Amazon RDS DB instance (p. 846).

Topics
• Overview of monitoring metrics in Amazon RDS (p. 679)
• Viewing instance status and recommendations (p. 683)
• Viewing metrics in the Amazon RDS console (p. 696)
• Viewing combined metrics in the Amazon RDS console (p. 699)
• Monitoring Amazon RDS metrics with Amazon CloudWatch (p. 706)
• Monitoring DB load with Performance Insights on Amazon RDS (p. 720)
• Analyzing performance anomalies with Amazon DevOps Guru for Amazon RDS (p. 789)
• Monitoring OS metrics with Enhanced Monitoring (p. 797)
• Metrics reference for Amazon RDS (p. 806)


Overview of monitoring metrics in Amazon RDS


Monitoring is an important part of maintaining the reliability, availability, and performance of Amazon
RDS and your AWS solutions. To more easily debug multi-point failures, we recommend that you collect
monitoring data from all parts of your AWS solution.

Topics
• Monitoring plan (p. 679)
• Performance baseline (p. 679)
• Performance guidelines (p. 679)
• Monitoring tools (p. 680)

Monitoring plan
Before you start monitoring Amazon RDS, create a monitoring plan. This plan should answer the
following questions:

• What are your monitoring goals?


• Which resources will you monitor?
• How often will you monitor these resources?
• Which monitoring tools will you use?
• Who will perform the monitoring tasks?
• Who should be notified when something goes wrong?

Performance baseline
To achieve your monitoring goals, you need to establish a baseline. To do this, measure performance
under different load conditions at various times in your Amazon RDS environment. You can monitor
metrics such as the following:

• Network throughput
• Client connections
• I/O for read, write, or metadata operations
• Burst credit balances for your DB instances

We recommend that you store historical performance data for Amazon RDS. Using the stored data, you
can compare current performance against past trends. You can also distinguish normal performance
patterns from anomalies, and devise techniques to address issues.
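
As one way to build that baseline, you can pull historical CloudWatch statistics for a metric and store them for later comparison. The following sketch retrieves hourly average DatabaseConnections values over a one-week window; the DB instance identifier and time range are placeholders.

aws cloudwatch get-metric-statistics --namespace AWS/RDS \
    --metric-name DatabaseConnections \
    --dimensions Name=DBInstanceIdentifier,Value=mydbinstance \
    --start-time 2021-12-08T00:00:00Z \
    --end-time 2021-12-15T00:00:00Z \
    --period 3600 \
    --statistics Average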

Performance guidelines
In general, acceptable values for performance metrics depend on what your application is doing relative
to your baseline. Investigate consistent or trending variances from your baseline. The following metrics
are often the source of performance issues:

• High CPU or RAM consumption – High values for CPU or RAM consumption might be appropriate,
if they're in keeping with your goals for your application (like throughput or concurrency) and are
expected.


• Disk space consumption – Investigate disk space consumption if space used is consistently at or above
85 percent of the total disk space. See if it is possible to delete data from the instance or archive data
to a different system to free up space.
• Network traffic – For network traffic, talk with your system administrator to understand what
expected throughput is for your domain network and internet connection. Investigate network traffic if
throughput is consistently lower than expected.
• Database connections – If you see high numbers of user connections and also decreases in instance
performance and response time, consider constraining database connections. The best number of
user connections for your DB instance varies based on your instance class and the complexity of the
operations being performed. To limit the number of database connections, associate your DB
instance with a parameter group where the connection limit parameter (for example, User
Connections) is set to a value other than 0 (unlimited). You can either use an existing parameter
group or create a new one (see the example command after this list). For more
information, see Working with parameter groups (p. 347).
• IOPS metrics – The expected values for IOPS metrics depend on disk specification and server
configuration, so use your baseline to know what is typical. Investigate if values are consistently
different than your baseline. For best IOPS performance, make sure that your typical working set fits
into memory to minimize read and write operations.
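
The following sketch shows one way to constrain connections from the AWS CLI, assuming a MySQL-family DB parameter group; the parameter group name and limit are placeholders, and the equivalent parameter name differs by engine (for example, User Connections on SQL Server).

# Limit each database account to 200 simultaneous connections (MySQL example).
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-custom-params \
    --parameters "ParameterName=max_user_connections,ParameterValue=200,ApplyMethod=immediate"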

When performance falls outside your established baseline, you might need to make changes to optimize
your database availability for your workload. For example, you might need to change the instance class
of your DB instance. Or you might need to change the number of DB instances and read replicas that are
available for clients.

Monitoring tools
Monitoring is an important part of maintaining the reliability, availability, and performance of Amazon
RDS and your other AWS solutions. AWS provides various monitoring tools to watch Amazon RDS, report
when something is wrong, and take automatic actions when appropriate.

Topics
• Automated monitoring tools (p. 680)
• Manual monitoring tools (p. 681)

Automated monitoring tools


We recommend that you automate monitoring tasks as much as possible.

Topics
• Amazon RDS instance status and recommendations (p. 680)
• Amazon CloudWatch metrics for Amazon RDS (p. 681)
• Amazon RDS Performance Insights and operating-system monitoring (p. 681)
• Integrated services (p. 681)

Amazon RDS instance status and recommendations


You can use the following automated tools to watch Amazon RDS and report when something is wrong:

• Amazon RDS instance status — View details about the current status of your instance by using the
Amazon RDS console, the AWS CLI, or the RDS API.
• Amazon RDS recommendations — Respond to automated recommendations for database resources,
such as DB instances, read replicas, and DB parameter groups. For more information, see Viewing
Amazon RDS recommendations (p. 688).


Amazon CloudWatch metrics for Amazon RDS


Amazon RDS integrates with Amazon CloudWatch for additional monitoring capabilities.

• Amazon CloudWatch – This service monitors your AWS resources and the applications you run on AWS
in real time. You can use the following Amazon CloudWatch features with Amazon RDS:
• Amazon CloudWatch metrics – Amazon RDS automatically sends metrics to CloudWatch every
minute for each active database. You don't get additional charges for Amazon RDS metrics
in CloudWatch. For more information, see Monitoring Amazon RDS metrics with Amazon
CloudWatch (p. 706).
• Amazon CloudWatch alarms – You can watch a single Amazon RDS metric over a specific time
period. You can then perform one or more actions based on the value of the metric relative to a
threshold that you set. For more information, see Monitoring Amazon RDS metrics with Amazon
CloudWatch (p. 706).

Amazon RDS Performance Insights and operating-system monitoring


You can use the following automated tools to monitor Amazon RDS performance:

• Amazon RDS Performance Insights – Assess the load on your database, and determine when and
where to take action. For more information, see Monitoring DB load with Performance Insights on
Amazon RDS (p. 720).
• Amazon RDS Enhanced Monitoring – Look at metrics in real time for the operating system. For more
information, see Monitoring OS metrics with Enhanced Monitoring (p. 797).

Integrated services
The following AWS services are integrated with Amazon RDS:

• Amazon EventBridge is a serverless event bus service that makes it easy to connect your
applications with data from a variety of sources. For more information, see Monitoring Amazon RDS
events (p. 850).
• Amazon CloudWatch Logs lets you monitor, store, and access your log files from Amazon RDS
instances, CloudTrail, and other sources. For more information, see Monitoring Amazon RDS log
files (p. 895).
• AWS CloudTrail captures API calls and related events made by or on behalf of your AWS account and
delivers the log files to an Amazon S3 bucket that you specify. For more information, see Monitoring
Amazon RDS API calls in AWS CloudTrail (p. 940).
• Database Activity Streams is an Amazon RDS feature that provides a near-real-time stream of the
activity in your Oracle DB instance. For more information, see Monitoring Amazon RDS with Database
Activity Streams (p. 944).

Manual monitoring tools


You need to manually monitor those items that the CloudWatch alarms don't cover. The Amazon RDS,
CloudWatch, AWS Trusted Advisor and other AWS console dashboards provide an at-a-glance view of the
state of your AWS environment. We recommend that you also check the log files on your DB instance.

• From the Amazon RDS console, you can monitor the following items for your resources:
• The number of connections to a DB instance
• The amount of read and write operations to a DB instance
• The amount of storage that a DB instance is currently using
• The amount of memory and CPU being used for a DB instance


• The amount of network traffic to and from a DB instance


• From the Trusted Advisor dashboard, you can review the following cost optimization, security, fault
tolerance, and performance improvement checks:
• Amazon RDS Idle DB Instances
• Amazon RDS Security Group Access Risk
• Amazon RDS Backups
• Amazon RDS Multi-AZ

For more information on these checks, see Trusted Advisor best practices (checks).
• CloudWatch home page shows:
• Current alarms and status
• Graphs of alarms and resources
• Service health status

In addition, you can use CloudWatch to do the following:


• Create customized dashboards to monitor the services that you care about.
• Graph metric data to troubleshoot issues and discover trends.
• Search and browse all your AWS resource metrics.
• Create and edit alarms to be notified of problems.


Viewing instance status and recommendations


Using the Amazon RDS console, you can quickly access the status of your DB instance and respond to
Amazon RDS recommendations.

Topics
• Viewing Amazon RDS DB instance status (p. 684)
• Viewing Amazon RDS recommendations (p. 688)


Viewing Amazon RDS DB instance status


The status of a DB instance indicates the health of the DB instance. You can use the following procedures
to view the DB instance status in the Amazon RDS console, the AWS CLI command, or the API operation.
Note
Amazon RDS also uses another status called maintenance status, which is shown in the
Maintenance column of the Amazon RDS console. This value indicates the status of any
maintenance patches that need to be applied to a DB instance. Maintenance status is
independent of DB instance status. For more information about maintenance status, see
Applying updates for a DB instance (p. 421).

Find the possible status values for DB instances in the following table. This table also shows whether you
will be billed for the DB instance and storage, billed only for storage, or not billed. For all DB instance
statuses, you are always billed for backup usage.

DB instance status | Billed | Description

Available | Billed | The DB instance is healthy and available.

Backing-up | Billed | The DB instance is currently being backed up.

Configuring-enhanced-monitoring | Billed | Enhanced Monitoring is being enabled or disabled for this DB instance.

Configuring-iam-database-auth | Billed | AWS Identity and Access Management (IAM) database authentication is being enabled or disabled for this DB instance.

Configuring-log-exports | Billed | Publishing log files to Amazon CloudWatch Logs is being enabled or disabled for this DB instance.

Converting-to-vpc | Billed | The DB instance is being converted from a DB instance that is not in an Amazon Virtual Private Cloud (Amazon VPC) to a DB instance that is in an Amazon VPC.

Creating | Not billed | The DB instance is being created. The DB instance is inaccessible while it is being created.

Delete-precheck | Not billed | Amazon RDS is validating that read replicas are healthy and are safe to delete.

Deleting | Not billed | The DB instance is being deleted.

Failed | Not billed | The DB instance has failed and Amazon RDS can't recover it. Perform a point-in-time restore to the latest restorable time of the DB instance to recover the data.

Inaccessible-encryption-credentials | Not billed | The AWS KMS key used to encrypt or decrypt the DB instance can't be accessed or recovered.

Inaccessible-encryption-credentials-recoverable | Billed for storage | The KMS key used to encrypt or decrypt the DB instance can't be accessed. However, if the KMS key is active, restarting the DB instance can recover it. For more information, see Encrypting a DB instance (p. 2587).

Incompatible-network | Not billed | Amazon RDS is attempting to perform a recovery action on a DB instance but can't do so because the VPC is in a state that prevents the action from being completed. This status can occur if, for example, all available IP addresses in a subnet are in use and Amazon RDS can't get an IP address for the DB instance.

Incompatible-option-group | Billed | Amazon RDS attempted to apply an option group change but can't do so, and Amazon RDS can't roll back to the previous option group state. For more information, check the Recent Events list for the DB instance. This status can occur if, for example, the option group contains an option such as TDE and the DB instance doesn't contain encrypted information.

Incompatible-parameters | Billed | Amazon RDS can't start the DB instance because the parameters specified in the DB instance's DB parameter group aren't compatible with the DB instance. Revert the parameter changes or make them compatible with the DB instance to regain access to your DB instance. For more information about the incompatible parameters, check the Recent Events list for the DB instance.

Incompatible-restore | Not billed | Amazon RDS can't do a point-in-time restore. Common causes for this status include using temp tables, using MyISAM tables with MySQL, or using Aria tables with MariaDB.

Insufficient-capacity | Not billed | Amazon RDS can't create your instance because sufficient capacity isn't currently available. To create your DB instance in the same AZ with the same instance type, delete your DB instance, wait a few hours, and try to create again. Alternatively, create a new instance using a different instance class or AZ.

Maintenance | Billed | Amazon RDS is applying a maintenance update to the DB instance. This status is used for instance-level maintenance that RDS schedules well in advance.

Modifying | Billed | The DB instance is being modified because of a customer request to modify the DB instance.

Moving-to-vpc | Billed | The DB instance is being moved to a new Amazon Virtual Private Cloud (Amazon VPC).

Rebooting | Billed | The DB instance is being rebooted because of a customer request or an Amazon RDS process that requires the rebooting of the DB instance.

Resetting-master-credentials | Billed | The master credentials for the DB instance are being reset because of a customer request to reset them.

Renaming | Billed | The DB instance is being renamed because of a customer request to rename it.

Restore-error | Billed | The DB instance encountered an error attempting to restore to a point-in-time or from a snapshot.

Starting | Billed for storage | The DB instance is starting.

Stopped | Billed for storage | The DB instance is stopped.

Stopping | Billed for storage | The DB instance is being stopped.

Storage-full | Billed | The DB instance has reached its storage capacity allocation. This is a critical status, and we recommend that you fix this issue immediately. To do so, scale up your storage by modifying the DB instance. To avoid this situation, set Amazon CloudWatch alarms to warn you when storage space is getting low.

Storage-optimization | Billed | Amazon RDS is optimizing the storage of your DB instance. The DB instance is fully operational. The storage optimization process is usually short, but can sometimes take up to and even beyond 24 hours.

Upgrading | Billed | The database engine version is being upgraded.

Console

To view the status of a DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.

The Databases page appears with the list of DB instances. For each DB instance, the status value is
displayed.

CLI
To view a DB instance and its status information by using the AWS CLI, use the describe-db-instances
command. For example, the following AWS CLI command lists information for all DB instances.

aws rds describe-db-instances

To view a specific DB instance and its status, call the describe-db-instances command with the following
option:

• DBInstanceIdentifier – The name of the DB instance.


aws rds describe-db-instances --db-instance-identifier mydbinstance

To view just the status of all the DB instances, use the following query in AWS CLI.

aws rds describe-db-instances --query 'DBInstances[*].[DBInstanceIdentifier,DBInstanceStatus]' --output table

API
To view the status of the DB instance using the Amazon RDS API, call the DescribeDBInstances operation.


Viewing Amazon RDS recommendations


Amazon RDS provides automated recommendations for database resources, such as DB instances, read
replicas, and DB parameter groups. These recommendations provide best practice guidance by analyzing
DB instance configuration, usage, and performance data.

You can find examples of these recommendations in the following table.

Type | Description | Recommendation | Additional information

DB instance isn't a Multi-AZ DB instance | Your DB instance isn't using the Multi-AZ deployment. | We recommend that you use Multi-AZ deployment. The Multi-AZ deployments enhance the availability and durability of the DB instance. For information about Amazon RDS Multi-AZ pricing, see Pricing. | Amazon RDS Multi-AZ

Storage autoscaling isn't turned on | Your DB instance doesn't have Amazon RDS storage autoscaling turned on. Storage autoscaling automatically scales the storage capacity when there is an increase in the database size, with zero downtime. | We recommend that you turn on storage autoscaling with a maximum allocated storage of {{MaxAllocatedStorage}} GB for your DB instance {{DBInstanceIdentifier}}. | Managing capacity automatically with Amazon RDS storage autoscaling (p. 480)

Engine version outdated | Your DB instance is not running the latest minor engine version. | We recommend that you upgrade to the latest version because it contains the latest security fixes and other improvements. | Upgrading a DB instance engine version (p. 429)

Pending maintenance available | You have pending maintenance available on your DB instance. | We recommend that you perform the pending maintenance available on your DB instance. Updates to the operating system most often occur for security issues and should be done as soon as possible. | Maintaining a DB instance (p. 418)

Automated backups disabled | Your DB instance has automated backups disabled. | We recommend that you enable automated backups on your DB instance. Automated backups enable point-in-time recovery of your DB instance. You receive backup storage up to the storage size of your DB instance at no additional charge. | Working with backups (p. 591)

Magnetic volumes in use | Your DB instance is using magnetic storage. | Magnetic storage is not recommended for most DB instances. We recommend switching to General Purpose (SSD) storage or provisioned IOPS storage. | Amazon RDS DB instance storage (p. 101)

Enhanced Monitoring disabled | Your DB instance doesn't have Enhanced Monitoring enabled. | We recommend enabling Enhanced Monitoring. Enhanced Monitoring provides real-time operating system metrics for monitoring and troubleshooting. | Monitoring OS metrics with Enhanced Monitoring (p. 797)

Performance Insights disabled | Your DB instance doesn't have Performance Insights enabled. | We recommend enabling Performance Insights. Performance Insights monitors your database load for better analysis and troubleshooting. | Overview of Performance Insights on Amazon RDS (p. 720)

Encryption disabled | Your DB instance doesn't have encryption enabled. | We recommend enabling encryption. You can encrypt your existing Amazon RDS DB instances by restoring from an encrypted snapshot. | Encrypting Amazon RDS resources (p. 2586)

Previous generation DB instance class in use | Your DB instance is running on a previous-generation DB instance class. | Previous-generation DB instance classes have been replaced by DB instance classes with better price, better performance, or both. We recommend running your DB instance on a later generation DB instance class. | DB instance classes (p. 11)

Huge pages not used for an Oracle DB instance | The use_large_pages parameter is not set to ONLY in the DB parameter group used by your DB instance. | For increased database scalability, we recommend setting use_large_pages to ONLY in the DB parameter group used by your DB instance. | Turning on HugePages for an RDS for Oracle instance (p. 1942)

Nondefault custom memory parameters | Your DB parameter group sets memory parameters that diverge too much from the default values. | Settings that diverge too much from the default values can cause poor performance and errors. We recommend setting custom memory parameters to their default values in the DB parameter group used by the DB instance. | Working with parameter groups (p. 347)

Found an unsafe durability parameter value for a MySQL DB instance | Your DB instance has an unsafe value for the innodb_flush_log_at_trx_commit parameter. This parameter controls the persistence of commit operations to disk. | We recommend that you set the value of the innodb_flush_log_at_trx_commit parameter to 1. The current value might improve performance but transactions can be lost if the database crashes. | Best practices for configuring parameters for Amazon RDS for MySQL, part 1: Parameters related to performance on the AWS Database Blog

Optimizer statistics aren't persisted to the disk for a MySQL DB instance | Your DB instance isn't configured to persist the InnoDB statistics to the disk. When it isn't configured, the statistics may recalculate frequently, which leads to variations in query execution plan. You can modify the value of this global parameter at the table level. | Global statistics persistence is disabled. We recommend that you set the innodb_stats_persistent parameter to ON. | Best practices for configuring parameters for Amazon RDS for MySQL, part 1: Parameters related to performance on the AWS Database Blog

General logging is enabled for a MySQL DB instance | Your DB instance has the general logging turned on. Turning on general logging increases the amount of I/O operations and allocated storage space, which can lead to contention and performance degradation. | Evaluate your required general logging usage. General logging can increase the amount of I/O operations and allocated storage space, and lead to contention and performance degradation. | Managing table-based MySQL logs (p. 920)

Maximum InnoDB open files setting is misconfigured for a MySQL DB instance | Your DB instance has a low value for the maximum number of files InnoDB can open at one time. | We recommend that you set the innodb_open_files parameter to a minimum value of 65. | innodb_open_files

Number of allowed simultaneous connections for a given database user is misconfigured for a MySQL DB instance | Your DB instance has a low value for the maximum number of simultaneous connections for each database account. | We recommend that you increase the setting of the max_user_connections parameter to a number greater than 5. The current max_user_connections value is low which impacts the database health checks and regular operations. | Setting Account Resource Limits

Read replica is open in writable mode for a MySQL DB instance | Your DB instance has the Read replica in writable mode, which allows updates from clients. | We recommend that you don't change MySQL read replicas to writable mode for a long duration. This setting can cause replication errors and data consistency issues. | Best practices for configuring parameters for Amazon RDS for MySQL, part 2: Parameters related to replication on the AWS Database Blog

Found an unsafe durability parameter value for a MySQL DB instance | The synchronization of the binary log to disk isn't enforced before the acknowledgement of the transactions commit in your DB instance. | We recommend that you set the sync_binlog parameter to 1. Currently, the synchronization of the binary log to disk isn't enforced before acknowledgement of the transactions commit. If there is a power failure or the operating system crashes, the committed transactions can be lost. | Best practices for configuring parameters for Amazon RDS for MySQL, part 2: Parameters related to replication on the AWS Database Blog

Found an unsafe setting of the innodb_default_row_format parameter for a MySQL DB instance | Your DB instance has the following known issue: A table created in a MySQL version lower than 8.0.26 with row_format COMPACT or REDUNDANT will be inaccessible and unrecoverable when the index exceeds 767 bytes. | We recommend that you change the current value of the innodb_default_row_format parameter to DYNAMIC. | Changes in MySQL 8.0.26 (2021-07-20, General Availability)

Change buffering enabled for a MySQL DB instance | Your DB parameter group has change buffering enabled. | Change buffering allows a MySQL DB instance to defer some writes necessary to maintain secondary indexes. This configuration can improve performance slightly, but it can create a large delay in crash recovery. During crash recovery, the secondary index must be brought up to date. So, the benefits of change buffering are outweighed by the potentially very long crash recovery events. We recommend disabling change buffering. | Best practices for configuring parameters for Amazon RDS for MySQL, part 1: Parameters related to performance on the AWS Database Blog

Query cache enabled for a MySQL DB instance | Your DB parameter group has the query cache parameter enabled. | The query cache can cause the DB instance to appear to stall when changes require the cache to be purged. Most workloads don't benefit from a query cache. The query cache was removed from MySQL version 8.0. We recommend that you disable the query cache parameter. | Best practices for configuring parameters for Amazon RDS for MySQL, part 1: Parameters related to performance on the AWS Database Blog

Autovacuum is disabled for a PostgreSQL DB instance | Your DB instance has autovacuum turned off. Turning off autovacuum increases table and index bloat and impacts performance. | We recommend that you set the autovacuum parameter to on. | Understanding autovacuum in Amazon RDS for PostgreSQL environments

Synchronous commit is turned off for a PostgreSQL DB instance | When the synchronous_commit parameter is set to OFF, it causes data loss when the database crashes, which can impact the durability of the database. | We recommend that you turn on the synchronous_commit parameter. | Asynchronous Commit

track_counts parameter is disabled for a PostgreSQL DB instance | If the track_counts parameter is turned off, the database doesn't collect the database activity statistics. Autovacuum requires these statistics to work correctly. | We recommend that you set the track_counts parameter to ON. | track_counts (boolean)

Index only scan plan type is disabled for a PostgreSQL DB instance | The query planner or optimizer can't use the index only scan plan when it is disabled. | We recommend that you set the parameter enable_indexonlyscan to ON. | enable_indexonlyscan (boolean)

index-scan plan type is disabled for a PostgreSQL DB instance | The query planner or optimizer can't use the index scan plan types when it is disabled. | We recommend that you set the parameter enable_indexscan to ON. | enable_indexscan (boolean)

Logging to table | Your DB parameter group sets logging output to TABLE. | Setting logging output to TABLE uses more storage than setting this parameter to FILE. To avoid reaching the storage limit, we recommend setting the logging output parameter to FILE. | MySQL database log files (p. 915)

Amazon RDS generates recommendations for a resource when the resource is created or modified.
Amazon RDS also periodically scans your resources and generates recommendations.


To view Amazon RDS recommendations

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Recommendations.

The Recommendations page appears.

3. On the Recommendations page, choose one of the following:

• Active – Shows the current recommendations that you can apply, dismiss, or schedule.


• Dismissed – Shows the recommendations that have been dismissed. When you choose Dismissed,
you can apply these dismissed recommendations.
• Scheduled – Shows the recommendations that are scheduled but not yet applied. These
recommendations will be applied in the next scheduled maintenance window.
• Applied – Shows the recommendations that are currently applied.

From any list of recommendations, you can open a section to view the recommendations in that
section.

To configure preferences for displaying recommendations in each section, choose the Preferences
icon.

From the Preferences window that appears, you can set display options. These options include the
visible columns and the number of recommendations to display on the page.
4. (optional) Respond to your active recommendations as follows:


a. Choose Active and open one or more sections to view the recommendations in them.
b. Choose one or more recommendations and choose Apply now (to apply them immediately),
Schedule (to apply them in next maintenance window), or Dismiss.

If the Apply now button appears for a recommendation but is unavailable (grayed out), the DB
instance is not available. You can apply recommendations immediately only if the DB instance
status is available. For example, you can't apply recommendations immediately to the DB
instance if its status is modifying. In this case, wait for the DB instance to be available and then
apply the recommendation.

If the Apply now button doesn't appear for a recommendation, you can't apply the
recommendation using the Recommendations page. You can modify the DB instance to apply
the recommendation manually.

For more information about modifying a DB instance, see Modifying an Amazon RDS DB
instance (p. 401).
Note
When you choose Apply now, a brief DB instance outage might result.


Viewing metrics in the Amazon RDS console


Amazon RDS integrates with Amazon CloudWatch to display a variety of RDS DB instance metrics in the
RDS console. For descriptions of these metrics, see Metrics reference for Amazon RDS (p. 806).

For your DB instance, the following categories of metrics are monitored:

• CloudWatch – Shows the Amazon CloudWatch metrics for RDS that you can access in the RDS console.
You can also access these metrics in the CloudWatch console. Each metric includes a graph that
shows the metric monitored over a specific time span. For a list of CloudWatch metrics, see Amazon
CloudWatch metrics for Amazon RDS (p. 806).
• Enhanced monitoring – Shows a summary of operating-system metrics when your RDS DB instance
has turned on Enhanced Monitoring. RDS delivers the metrics from Enhanced Monitoring to
your Amazon CloudWatch Logs account. Each OS metric includes a graph showing the metric
monitored over a specific time span. For an overview, see Monitoring OS metrics with Enhanced
Monitoring (p. 797). For a list of Enhanced Monitoring metrics, see OS metrics in Enhanced
Monitoring (p. 837).
• OS Process list – Shows details for each process running in your DB instance.
• Performance Insights – Opens the Amazon RDS Performance Insights dashboard for a DB instance.
For an overview of Performance Insights, see Monitoring DB load with Performance Insights on
Amazon RDS (p. 720). For a list of Performance Insights metrics, see Amazon CloudWatch metrics for
Performance Insights (p. 813).

Amazon RDS now provides a consolidated view of Performance Insights and CloudWatch metrics in the
Performance Insights dashboard. Performance Insights must be turned on for your DB instance to use
this view. You can choose the new monitoring view in the Monitoring tab or Performance Insights in
the navigation pane. To view the instructions for choosing this view, see Viewing combined metrics in the
Amazon RDS console (p. 699).

If you want to keep using the legacy monitoring view, use the following procedure.
Note
The legacy monitoring view will be discontinued on December 15, 2023.

To view metrics for your DB instance in the legacy monitoring view:

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the name of the DB instance that you want to monitor.

The database page appears. The following example shows an Oracle database named orclb.


4. Scroll down and choose Monitoring.

The monitoring section appears. By default, CloudWatch metrics are shown. For descriptions of
these metrics, see Amazon CloudWatch metrics for Amazon RDS (p. 806).

5. Choose Monitoring to see the metric categories.

6. Choose the category of metrics that you want to see.

The following example shows Enhanced Monitoring metrics. For descriptions of these metrics, see
OS metrics in Enhanced Monitoring (p. 837).
Note
Currently, viewing OS metrics for a Multi-AZ standby replica is not supported for MariaDB
DB instances.


Tip
To choose the time range of the metrics represented by the graphs, you can use the time
range list.
To bring up a more detailed view, you can choose any graph. You can also apply metric-
specific filters to the data.


Viewing combined metrics in the Amazon RDS console

Amazon RDS now provides a consolidated view of Performance Insights and CloudWatch metrics for your
DB instance in the Performance Insights dashboard. You can use the preconfigured dashboard or create
a custom dashboard. The preconfigured dashboard provides the most commonly used metrics to help
diagnose performance issues for a database engine. Alternatively, you can create a custom dashboard
with the metrics for a database engine that meet your analysis requirements. Then, use this dashboard
for all the DB instances of that database engine type in your AWS account.

You can choose the new monitoring view in the Monitoring tab or Performance Insights in the
navigation pane. When you navigate to the Performance Insights page, you see the options to choose
between the new monitoring view and legacy view. The option you choose is saved as the default view.

Performance Insights must be turned on for your DB instance to view the combined metrics in the
Performance Insights dashboard. For more information about turning on Performance Insights, see
Turning Performance Insights on and off (p. 727).
Note
We recommend that you choose the new monitoring view. You can continue to use the legacy
monitoring view until it is discontinued on December 15, 2023.

Choosing the new monitoring view in the Monitoring tab

To choose the new monitoring view in the Monitoring tab:

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the left navigation pane, choose Databases.
3. Choose the DB instance that you want to monitor.

The database page appears.


4. Scroll down and choose the Monitoring tab.

A banner appears with the option to choose the new monitoring view. The following example shows
the banner to choose the new monitoring view.

5. Choose Go to new monitoring view to open the Performance Insights dashboard with Performance
Insights and CloudWatch metrics for your DB instance.
6. (Optional) If Performance Insights is turned off for your DB instance, a banner appears with the
option to modify your DB cluster and turn on Performance Insights.

The following example shows the banner to modify the DB cluster in the Monitoring tab.


Choose Modify to modify your DB cluster and turn on Performance Insights. For more information
about turning on Performance Insights, see Turning Performance Insights on and off (p. 727).

Choosing the new monitoring view with Performance Insights in the navigation pane

To choose the new monitoring view with Performance Insights in the navigation pane:

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the left navigation pane, choose Performance Insights.
3. Choose a DB instance to open a window that has the monitoring view options.

The following example shows the window with the monitoring view options.

4. Choose the Performance Insights and CloudWatch metrics view (New) option, and then choose
Continue.

You can now view the Performance Insights dashboard that shows both Performance Insights and
CloudWatch metrics for your DB instance. The following example shows the Performance Insights
and CloudWatch metrics in the dashboard.


Choosing the legacy view with Performance Insights in the navigation pane

You can choose the legacy monitoring view to view only the Performance Insights metrics for your DB
instance.
Note
This view will be discontinued on December 15, 2023.

To choose the legacy monitoring view with Performance Insights in the navigation pane:

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the left navigation pane, choose Performance Insights.
3. Choose a DB instance.
4. Choose the settings icon on the Performance Insights dashboard.

You can now see the Settings window that shows the option to choose the legacy Performance
Insights view.

The following example shows the window with the option for the legacy monitoring view.


5. Select the Performance Insights view option and choose Continue.

A warning message appears. Any dashboard configurations that you saved won't be available in this
view.
6. Choose Confirm to continue to the legacy Performance Insights view.

You can now view the Performance Insights dashboard that shows only Performance Insights
metrics for the DB instance.

Creating a custom dashboard with Performance Insights in the navigation pane

In the new monitoring view, you can create a custom dashboard with the metrics you need to meet your
analysis requirements.

You can create a custom dashboard by selecting Performance Insights and CloudWatch metrics for your
DB instance. You can use this custom dashboard for other DB instances of the same database engine
type in your AWS account.
Note
The customized dashboard supports up to 50 metrics.

Use the widget settings menu to edit or delete the dashboard, and move or resize the widget window.

To create a custom dashboard with Performance Insights in the navigation pane:

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the left navigation pane, choose Performance Insights.


3. Choose a DB instance.
4. Scroll down to the Metrics tab in the window.
5. Select the custom dashboard from the drop-down list. The following example shows the creation of a
custom dashboard.

6. Choose Add widget to open the Add widget window. You can open and view the available operating
system (OS) metrics, database metrics, and CloudWatch metrics in the window.

The following example shows the Add widget window with the metrics.


7. Select the metrics that you want to view in the dashboard and choose Add widget. You can use the
search field to find a specific metric.

The selected metrics appear on your dashboard.


8. (Optional) If you want to modify or delete your dashboard, choose the settings icon on the upper
right of the widget, and then select one of the following actions in the menu.

• Edit – Modify the metrics list in the window. Choose Update widget after you select the metrics
for your dashboard.
• Delete – Deletes the widget. Choose Delete in the confirmation window.

Choosing the preconfigured dashboard with Performance Insights in the navigation pane

You can view the most commonly used metrics with the preconfigured dashboard. This dashboard helps
diagnose performance issues with a database engine and reduce the average recovery time from hours
to minutes.
Note
This dashboard can't be edited.

To choose the preconfigured dashboard with Performance Insights in the navigation pane:

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the left navigation pane, choose Performance Insights.
3. Choose a DB instance.
4. Scroll down to the Metrics tab in the window.
5. Select a preconfigured dashboard from the drop-down list.

You can view the metrics for the DB instance in the dashboard. The following example shows a
preconfigured metrics dashboard.


Monitoring Amazon RDS metrics with Amazon CloudWatch

Amazon CloudWatch is a metrics repository. The repository collects and processes raw data from
Amazon RDS into readable, near real-time metrics. For a complete list of Amazon RDS metrics sent to
CloudWatch, see Metrics reference for Amazon RDS.

Topics
• Overview of Amazon RDS and Amazon CloudWatch (p. 707)
• Viewing DB instance metrics in the CloudWatch console and AWS CLI (p. 708)
• Creating CloudWatch alarms to monitor Amazon RDS (p. 713)
• Tutorial: Creating an Amazon CloudWatch alarm for Multi-AZ DB cluster replica lag (p. 713)


Overview of Amazon RDS and Amazon CloudWatch


By default, Amazon RDS automatically sends metric data to CloudWatch in 1-minute periods. For
example, the CPUUtilization metric records the percentage of CPU utilization for a DB instance over
time. Data points with a period of 60 seconds (1 minute) are available for 15 days. This means that you
can access historical information and see how your web application or service is performing.

As shown in the following diagram, you can set up alarms for your CloudWatch metrics. For example,
you might create an alarm that signals when the CPU utilization for an instance is over 70%. You can
configure Amazon Simple Notification Service to email you when the threshold is passed.
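
As a sketch of that example, the following command creates such an alarm with the AWS CLI; the DB instance identifier, SNS topic ARN, and evaluation settings are placeholders that you would adjust for your environment.

aws cloudwatch put-metric-alarm \
    --alarm-name rds-high-cpu \
    --namespace AWS/RDS \
    --metric-name CPUUtilization \
    --dimensions Name=DBInstanceIdentifier,Value=mydbinstance \
    --statistic Average \
    --period 300 \
    --evaluation-periods 3 \
    --threshold 70 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:my-rds-alarms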

Amazon RDS publishes the following types of metrics to Amazon CloudWatch:

• Metrics for your RDS DB instances

For a table of these metrics, see Amazon CloudWatch metrics for Amazon RDS (p. 806).
• Performance Insights metrics

For a table of these metrics, see Amazon CloudWatch metrics for Performance Insights (p. 813) and
Performance Insights counter metrics (p. 814).


• Enhanced Monitoring metrics (published to Amazon CloudWatch Logs)

For a table of these metrics, see OS metrics in Enhanced Monitoring (p. 837).
• Usage metrics for the Amazon RDS service quotas in your AWS account

For a table of these metrics, see Amazon CloudWatch usage metrics for Amazon RDS (p. 812). For
more information about Amazon RDS quotas, see Quotas and constraints for Amazon RDS (p. 2720).

For more information about CloudWatch, see What is Amazon CloudWatch? in the Amazon CloudWatch
User Guide. For more information about CloudWatch metrics retention, see Metrics retention.

Viewing DB instance metrics in the CloudWatch console and AWS CLI

Following, you can find details about how to view metrics for your DB instance using CloudWatch.
For information on monitoring metrics for your DB instance's operating system in real time using
CloudWatch Logs, see Monitoring OS metrics with Enhanced Monitoring (p. 797).

When you use Amazon RDS resources, Amazon RDS sends metrics and dimensions to Amazon
CloudWatch every minute. You can use the following procedures to view the metrics for Amazon RDS in
the CloudWatch console and CLI.

Console

To view metrics using the Amazon CloudWatch console

Metrics are grouped first by the service namespace, and then by the various dimension combinations
within each namespace.

1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.

The CloudWatch overview home page appears.


2. If necessary, change the AWS Region. From the navigation bar, choose the AWS Region where your
AWS resources are. For more information, see Regions and endpoints.
3. In the navigation pane, choose Metrics and then All metrics.

4. Scroll down and choose the RDS metric namespace.

The page displays the Amazon RDS dimensions. For descriptions of these dimensions, see Amazon
CloudWatch dimensions for Amazon RDS (p. 813).

5. Choose a metric dimension, for example By Database Class.


6. Do any of the following actions:

• To sort the metrics, use the column heading.


• To graph a metric, select the check box next to the metric.
• To filter by resource, choose the resource ID, and then choose Add to search.
• To filter by metric, choose the metric name, and then choose Add to search.

The following example filters on the db.t3.medium class and graphs the CPUUtilization metric.


AWS CLI
To obtain metric information by using the AWS CLI, use the CloudWatch command list-metrics. In
the following example, you list all metrics in the AWS/RDS namespace.

aws cloudwatch list-metrics --namespace AWS/RDS

To obtain metric statistics, use the command get-metric-statistics. The following command gets
CPUUtilization statistics for instance my-instance over a specified 24-hour period, with 6-minute
(360-second) granularity.

Example

For Linux, macOS, or Unix:

aws cloudwatch get-metric-statistics --namespace AWS/RDS \
    --metric-name CPUUtilization \
    --start-time 2021-12-15T00:00:00Z \
    --end-time 2021-12-16T00:00:00Z \
    --period 360 \
    --statistics Minimum \
    --dimensions Name=DBInstanceIdentifier,Value=my-instance

For Windows:

aws cloudwatch get-metric-statistics --namespace AWS/RDS ^
    --metric-name CPUUtilization ^
    --start-time 2021-12-15T00:00:00Z ^
    --end-time 2021-12-16T00:00:00Z ^
    --period 360 ^
    --statistics Minimum ^
    --dimensions Name=DBInstanceIdentifier,Value=my-instance

Sample output appears as follows:

{
"Datapoints": [
{
"Timestamp": "2021-12-15T18:00:00Z",
"Minimum": 8.7,
"Unit": "Percent"
},
{
"Timestamp": "2021-12-15T23:54:00Z",
"Minimum": 8.12486458559024,
"Unit": "Percent"
},
{
"Timestamp": "2021-12-15T17:24:00Z",
"Minimum": 8.841666666666667,
"Unit": "Percent"
}, ...
{
"Timestamp": "2021-12-15T22:48:00Z",
"Minimum": 8.366248354248954,
"Unit": "Percent"
}
],
"Label": "CPUUtilization"


For more information, see Getting statistics for a metric in the Amazon CloudWatch User Guide.

Creating CloudWatch alarms to monitor Amazon RDS


You can create a CloudWatch alarm that sends an Amazon SNS message when the alarm changes state.
An alarm watches a single metric over a time period that you specify. The alarm can also perform one
or more actions based on the value of the metric relative to a given threshold over a number of time
periods. The action is a notification sent to an Amazon SNS topic or Amazon EC2 Auto Scaling policy.

Alarms invoke actions for sustained state changes only. CloudWatch alarms don't invoke actions simply
because they are in a particular state. The state must have changed and have been maintained for a
specified number of time periods.

You can use the DB_PERF_INSIGHTS metric math function in the CloudWatch console to query Amazon
RDS for Performance Insights counter metrics. The DB_PERF_INSIGHTS function also includes the
DBLoad metric at sub-minute intervals. You can set CloudWatch alarms on these metrics.

For more details on how to create an alarm, see Create an alarm on Performance Insights counter
metrics from an AWS database.

To set an alarm using the AWS CLI

• Call put-metric-alarm. For more information, see AWS CLI Command Reference.

To set an alarm using the CloudWatch API

• Call PutMetricAlarm. For more information, see Amazon CloudWatch API Reference

For more information about setting up Amazon SNS topics and creating alarms, see Using Amazon
CloudWatch alarms.

Tutorial: Creating an Amazon CloudWatch alarm for Multi-AZ DB cluster replica lag

You can create an Amazon CloudWatch alarm that sends an Amazon SNS message when replica lag for
a Multi-AZ DB cluster has exceeded a threshold. An alarm watches the ReplicaLag metric over a time
period that you specify. The action is a notification sent to an Amazon SNS topic or Amazon EC2 Auto
Scaling policy.
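
If you prefer the AWS CLI to the console steps that follow, a roughly equivalent alarm can be sketched with put-metric-alarm and a metric math expression. The DB instance identifiers, SNS topic ARN, and threshold are placeholders that you would replace with your own values.

aws cloudwatch put-metric-alarm \
    --alarm-name multi-az-cluster-replica-lag \
    --comparison-operator GreaterThanThreshold \
    --threshold 1200 \
    --evaluation-periods 1 \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:my-rds-alarms \
    --metrics '[
      {"Id":"m1","ReturnData":false,"MetricStat":{"Metric":{"Namespace":"AWS/RDS","MetricName":"ReplicaLag","Dimensions":[{"Name":"DBInstanceIdentifier","Value":"rds-cluster-instance-1"}]},"Period":60,"Stat":"Average"}},
      {"Id":"m2","ReturnData":false,"MetricStat":{"Metric":{"Namespace":"AWS/RDS","MetricName":"ReplicaLag","Dimensions":[{"Name":"DBInstanceIdentifier","Value":"rds-cluster-instance-2"}]},"Period":60,"Stat":"Average"}},
      {"Id":"m3","ReturnData":false,"MetricStat":{"Metric":{"Namespace":"AWS/RDS","MetricName":"ReplicaLag","Dimensions":[{"Name":"DBInstanceIdentifier","Value":"rds-cluster-instance-3"}]},"Period":60,"Stat":"Average"}},
      {"Id":"e1","Label":"ClusterReplicaLag","Expression":"MAX([m1,m2,m3])","ReturnData":true}
    ]'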

To set a CloudWatch alarm for Multi-AZ DB cluster replica lag

1. Sign in to the AWS Management Console and open the CloudWatch console at https://
console.aws.amazon.com/cloudwatch/.
2. In the navigation pane, choose Alarms, All alarms.
3. Choose Create alarm.
4. On the Specify metric and conditions page, choose Select metric.
5. In the search box, enter the name of your Multi-AZ DB cluster and press Enter.

The following image shows the Select metric page with a Multi-AZ DB cluster named rds-cluster
entered.


6. Choose RDS, Per-Database Metrics.


7. In the search box, enter ReplicaLag and press Enter, then select each DB instance in the DB cluster.

The following image shows the Select metric page with the DB instances selected for the
ReplicaLag metric.

This alarm considers the replica lag for all three of the DB instances in the Multi-AZ DB cluster. The
alarm responds when any DB instance exceeds the threshold. It uses a math expression that returns


the maximum value of the three metrics. Start by sorting by metric name, and then choose all three
ReplicaLag metrics.
8. From Add math, choose All functions, MAX.

9. Choose the Graphed metrics tab, and edit the details for Expression1 to MAX([m1,m2,m3]).
10. For all three ReplicaLag metrics, change the Period to 1 minute.
11. Clear selection from all metrics except for Expression1.

The Select metric page should look similar to the following image.


12. Choose Select metric.


13. On the Specify metric and conditions page, change the label to a meaningful name, such as
ClusterReplicaLag, and enter a number of seconds in Define the threshold value. For this
tutorial, enter 1200 seconds (20 minutes). You can adjust this value for your workload requirements.

The Specify metric and conditions page should look similar to the following image.


14. Choose Next, and the Configure actions page appears.


15. Keep In alarm selected, choose Create new topic, and enter the topic name and a valid email
address.


16. Choose Create topic, and then choose Next.


17. On the Add name and description page, enter the Alarm name and Alarm description, and then
choose Next.


18. Preview the alarm that you're about to create on the Preview and create page, and then choose
Create alarm.
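
If you prefer to script the same alarm, the following put-metric-alarm sketch approximates steps 5–17
by combining the three ReplicaLag metrics with a MAX metric math expression. The DB instance
identifiers and SNS topic ARN are placeholders; replace them with the DB instances in your Multi-AZ DB
cluster and your own topic.

# Alarm when the maximum ReplicaLag across the three DB instances exceeds 1200 seconds.
aws cloudwatch put-metric-alarm \
    --alarm-name ClusterReplicaLag \
    --evaluation-periods 1 \
    --comparison-operator GreaterThanThreshold \
    --threshold 1200 \
    --alarm-actions arn:aws:sns:us-west-2:123456789012:my-rds-alarms \
    --metrics '[
      {"Id":"m1","ReturnData":false,"MetricStat":{"Stat":"Average","Period":60,
        "Metric":{"Namespace":"AWS/RDS","MetricName":"ReplicaLag",
          "Dimensions":[{"Name":"DBInstanceIdentifier","Value":"rds-cluster-instance-1"}]}}},
      {"Id":"m2","ReturnData":false,"MetricStat":{"Stat":"Average","Period":60,
        "Metric":{"Namespace":"AWS/RDS","MetricName":"ReplicaLag",
          "Dimensions":[{"Name":"DBInstanceIdentifier","Value":"rds-cluster-instance-2"}]}}},
      {"Id":"m3","ReturnData":false,"MetricStat":{"Stat":"Average","Period":60,
        "Metric":{"Namespace":"AWS/RDS","MetricName":"ReplicaLag",
          "Dimensions":[{"Name":"DBInstanceIdentifier","Value":"rds-cluster-instance-3"}]}}},
      {"Id":"e1","Expression":"MAX([m1,m2,m3])","Label":"ClusterReplicaLag","ReturnData":true}
    ]'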


Monitoring DB load with Performance Insights on Amazon RDS

Performance Insights expands on existing Amazon RDS monitoring features to illustrate and help you
analyze your database performance. With the Performance Insights dashboard, you can visualize the
database load on your Amazon RDS DB instance and filter the load by waits, SQL statements, hosts,
or users. For information about using Performance Insights with Amazon DocumentDB, see the Amazon
DocumentDB Developer Guide.

Topics
• Overview of Performance Insights on Amazon RDS (p. 720)
• Turning Performance Insights on and off (p. 727)
• Turning on the Performance Schema for Performance Insights on Amazon RDS for MariaDB or
MySQL (p. 731)
• Configuring access policies for Performance Insights (p. 734)
• Analyzing metrics with the Performance Insights dashboard (p. 738)
• Retrieving metrics with the Performance Insights API (p. 769)
• Logging Performance Insights calls using AWS CloudTrail (p. 786)

Overview of Performance Insights on Amazon RDS


By default, Performance Insights is turned on in the console create wizard for all Amazon RDS engines. If
you have more than one database on a DB instance, Performance Insights aggregates performance data.

You can find an overview of Performance Insights for Amazon RDS in the following video.

Using Performance Insights to Analyze Performance of Amazon Aurora PostgreSQL


Important
The following topics describe using Amazon RDS Performance Insights with non-Aurora DB
engines. For information about using Amazon RDS Performance Insights with Amazon Aurora,
see Using Amazon RDS Performance Insights in the Amazon Aurora User Guide.

Topics
• Database load (p. 720)
• Maximum CPU (p. 724)
• Amazon RDS DB engine, Region, and instance class support for Performance Insights (p. 724)
• Pricing and data retention for Performance Insights (p. 726)

Database load
Database load (DB load) measures the level of session activity in your database. The key metric in
Performance Insights is DBLoad, which is collected every second.

Topics
• Active sessions (p. 721)
• Average active sessions (p. 721)
• Average active executions (p. 721)
• Dimensions (p. 722)


Active sessions
A database session represents an application's dialogue with a relational database. An active session is a
connection that has submitted work to the DB engine and is waiting for a response.

A session is active when it's either running on CPU or waiting for a resource to become available so that it
can proceed. For example, an active session might wait for a page (or block) to be read into memory, and
then consume CPU while it reads data from the page.

Average active sessions


The average active sessions (AAS) is the unit for the DBLoad metric in Performance Insights. It measures
how many sessions are concurrently active on the database.

Every second, Performance Insights samples the number of sessions concurrently running a query. For
each active session, Performance Insights collects the following data:

• SQL statement
• Session state (running on CPU or waiting)
• Host
• User running the SQL

Performance Insights calculates the AAS by dividing the total number of sessions by the number of
samples for a specific time period. For example, the following table shows 5 consecutive samples of a
running query taken at 1-second intervals.

Sample   Number of sessions running query   AAS   Calculation

1        2                                  2     2 total sessions / 1 sample
2        0                                  1     2 total sessions / 2 samples
3        4                                  2     6 total sessions / 3 samples
4        0                                  1.5   6 total sessions / 4 samples
5        4                                  2     10 total sessions / 5 samples

In the preceding example, the DB load for the time interval was 2 AAS. This measurement means that, on
average, 2 sessions were active at any given time during the interval when the 5 samples were taken.

An analogy for DB load is worker activity in a warehouse. Suppose that the warehouse employs 100
workers. If 1 order comes in, 1 worker fulfills the order while 99 workers are idle. If 100 orders come
in, all 100 workers fulfill orders simultaneously. If every 15 minutes a manager writes down how many
workers are simultaneously active, adds these numbers at the end of the day, and then divides the total
by the number of samples, the manager calculates the average number of workers active at any given
time. If the average was 50 workers yesterday and 75 workers today, then the average activity level in
the warehouse increased. Similarly, DB load increases as database session activity increases.

Average active executions


The average active executions (AAE) per second is related to AAS. To calculate the AAE, Performance
Insights divides the total execution time of a query by the time interval. The following table shows the
AAE calculation for the same query in the preceding table.


Elapsed time (sec)   Total execution time (sec)   AAE    Calculation

60                   120                          2      120 execution seconds / 60 elapsed seconds
120                  120                          1      120 execution seconds / 120 elapsed seconds
180                  380                          2.11   380 execution seconds / 180 elapsed seconds
240                  380                          1.58   380 execution seconds / 240 elapsed seconds
300                  600                          2      600 execution seconds / 300 elapsed seconds

In most cases, the AAS and AAE for a query are approximately the same. However, because the inputs to
the calculations are different data sources, the calculations often vary slightly.
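
Stated compactly, the two calculations described in this section are:

\[
\mathrm{AAS} = \frac{\text{total sampled active sessions}}{\text{number of samples}}
\qquad
\mathrm{AAE} = \frac{\text{total execution time}}{\text{elapsed time}}
\]

For example, in the preceding tables, 10 total sessions over 5 samples gives an AAS of 2, and 600
execution seconds over 300 elapsed seconds gives an AAE of 2.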

Dimensions
The db.load metric is different from the other time-series metrics because you can break it into
subcomponents called dimensions. You can think of dimensions as "slice by" categories for the different
characteristics of the DBLoad metric.

When you are diagnosing performance issues, the following dimensions are often the most useful:

Topics
• Wait events (p. 722)
• Top SQL (p. 723)
• Plans (p. 723)

For a complete list of dimensions for the Amazon RDS engines, see DB load sliced by
dimensions (p. 743).

Wait events

A wait event causes a SQL statement to wait for a specific event to happen before it can continue
running. Wait events are an important dimension, or category, for DB load because they indicate where
work is impeded.

Every active session is either running on the CPU or waiting. For example, sessions consume CPU when
they search memory for a buffer, perform a calculation, or run procedural code. When sessions aren't
consuming CPU, they might be waiting for a memory buffer to become free, a data file to be read, or a
log to be written to. The more time that a session waits for resources, the less time it runs on the CPU.

When you tune a database, you often try to find out the resources that sessions are waiting for. For
example, two or three wait events might account for 90 percent of DB load. This measure means that, on
average, active sessions are spending most of their time waiting for a small number of resources. If you
can find out the cause of these waits, you can attempt a solution.


Consider the analogy of a warehouse worker. An order comes in for a book. The worker might be delayed
in fulfilling the order. For example, a different worker might be currently restocking the shelves, or a
trolley might not be available. Or the system used to enter the order status might be slow. The longer the
worker waits, the longer it takes to fulfill the order. Waiting is a natural part of the warehouse workflow,
but if wait time becomes excessive, productivity decreases. In the same way, repeated or lengthy session
waits can degrade database performance. For more information, see Tuning with wait events for Aurora
PostgreSQL and Tuning with wait events for Aurora MySQL in the Amazon Aurora User Guide.

Wait events vary by DB engine:

• For information about all MariaDB and MySQL wait events, see Wait Event Summary Tables in the
MySQL documentation.
• For information about all PostgreSQL wait events, see The Statistics Collector > Wait Event tables in
the PostgreSQL documentation.
• For information about all Oracle wait events, see Descriptions of Wait Events in the Oracle
documentation.
• For information about all SQL Server wait events, see Types of Waits in the SQL Server
documentation.

Note
For Oracle, background processes sometimes do work without an associated SQL statement. In
these cases, Performance Insights reports the type of background process concatenated with a
colon and the wait class associated with that background process. Types of background process
include LGWR, ARC0, PMON, and so on.
For example, when the archiver is performing I/O, the Performance Insights report for it is
similar to ARC1:System I/O. Occasionally, the background process type is also missing, and
Performance Insights only reports the wait class, for example :System I/O.

Top SQL

Where wait events show bottlenecks, top SQL shows which queries are contributing the most to DB
load. For example, many queries might be currently running on the database, but a single query might
consume 99 percent of the DB load. In this case, the high load might indicate a problem with the query.

By default, the Performance Insights console displays top SQL queries that are contributing to the
database load. The console also shows relevant statistics for each statement. To diagnose performance
problems for a specific statement, you can examine its execution plan.

Plans

An execution plan, also called simply a plan, is a sequence of steps that access data. For example, a plan
for joining tables t1 and t2 might loop through all rows in t1 and compare each row to a row in t2. In a
relational database, an optimizer is built-in code that determines the most efficient plan for a SQL query.

For Oracle DB instances, Performance Insights collects execution plans automatically. To diagnose SQL
performance problems, examine the captured plans for high-resource Oracle SQL queries. The plans
show how Oracle Database has parsed and run queries.

To learn how to analyze DB load using plans, see Analyzing Oracle execution plans using the
Performance Insights dashboard (p. 766).

Plan capture

Every five minutes, Performance Insights identifies the most resource-intensive Oracle queries and
captures their plans. Thus, you don't need to manually collect and manage a huge number of plans.
Instead, you can use the Top SQL tab to focus on the plans for the most problematic queries.


Note
Performance Insights doesn't capture plans for queries whose text exceeds the maximum
collectable query text limit. For more information, see Accessing more SQL text in the
Performance Insights dashboard (p. 761).

The retention period for execution plans is the same as for your Performance Insights data. The retention
setting in the free tier is Default (7 days). To retain your performance data for longer, specify 1–24
months. For more information about retention periods, see Pricing and data retention for Performance
Insights (p. 726).

Digest queries
The Top SQL tab shows digest queries by default. A digest query doesn't itself have a plan,
but all queries that use literal values have plans. For example, a digest query might include
the text WHERE `email`=?. The digest might contain two queries, one with the text WHERE
email='user1@example.com' and another with WHERE email='user2@example.com'. Each of these
literal queries might include multiple plans.

If you select a digest query, the console shows all plans for child statements of the selected digest. Thus,
you don't need to look through all the child statements to find the plan. You might see plans that aren’t
in the displayed list of top 10 child statements. The console shows plans for all child queries for which
plans have been collected, regardless of whether the queries are in the top 10.

Maximum CPU
In the dashboard, the Database load chart collects, aggregates, and displays session information. To see
whether active sessions are exceeding the maximum CPU, look at their relationship to the Max vCPU line.
The Max vCPU value is determined by the number of vCPU (virtual CPU) cores for your DB instance.

One process can run on a vCPU at a time. If the number of processes exceeds the number of vCPUs, the
processes start queuing. When queuing increases, performance is impacted. If the DB load is
often above the Max vCPU line, and the primary wait state is CPU, the CPU is overloaded. In this case,
you might want to throttle connections to the instance, tune any SQL queries with a high CPU load, or
consider a larger instance class. High and consistent instances of any wait state indicate that there might
be bottlenecks or resource contention issues to resolve. This can be true even if the DB load doesn't cross
the Max vCPU line.

Amazon RDS DB engine, Region, and instance class support for Performance Insights

The following list shows the Amazon RDS DB engines that support Performance Insights, along with any
instance class restrictions.
Note
For Amazon Aurora, see Amazon Aurora DB engine support for Performance Insights in the Amazon
Aurora User Guide.

Amazon RDS for MariaDB
• Supported engine versions and Regions – For more information on version and Region availability of
  Performance Insights with RDS for MariaDB, see Performance Insights (p. 150).
• Instance class restrictions – Performance Insights isn't supported for the following instance classes:
  db.t2.micro, db.t2.small, db.t3.micro, db.t3.small, db.t4g.micro, and db.t4g.small.

RDS for MySQL
• Supported engine versions and Regions – For more information on version and Region availability of
  Performance Insights with RDS for MySQL, see Performance Insights (p. 150).
• Instance class restrictions – Performance Insights isn't supported for the following instance classes:
  db.t2.micro, db.t2.small, db.t3.micro, db.t3.small, db.t4g.micro, and db.t4g.small.

Amazon RDS for Microsoft SQL Server
• Supported engine versions and Regions – For more information on version and Region availability of
  Performance Insights with RDS for SQL Server, see Performance Insights (p. 150).
• Instance class restrictions – N/A

Amazon RDS for PostgreSQL
• Supported engine versions and Regions – For more information on version and Region availability of
  Performance Insights with RDS for PostgreSQL, see Performance Insights (p. 150).
• Instance class restrictions – N/A

Amazon RDS for Oracle
• Supported engine versions and Regions – For more information on version and Region availability of
  Performance Insights with RDS for Oracle, see Performance Insights (p. 150).
• Instance class restrictions – N/A

Amazon RDS DB engine, Region, and instance class support for Performance Insights features

The following list shows the Regions, DB engines, and instance classes that support specific
Performance Insights features.

SQL statistics for Performance Insights
• Supported Regions – All
• Supported DB engines – All except RDS for Microsoft SQL Server
• Instance classes – All

Analyzing execution plans using Performance Insights
• Supported Regions – All
• Supported DB engines – RDS for Oracle
• Instance classes – All

Analyzing performance for a period of time
• Supported Regions – US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon),
  Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney),
  Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London),
  Europe (Paris), and Europe (Stockholm)
• Supported DB engines – RDS for PostgreSQL
• Instance classes – Provisioned

Pricing and data retention for Performance Insights


By default, Performance Insights offers a free tier that includes 7 days of performance data history and
1 million API requests per month. You can also purchase longer retention periods. For complete pricing
information, see Performance Insights Pricing.

In the RDS console, you can choose any of the following retention periods for your Performance Insights
data:

• Default (7 days)
• n months, where n is a number from 1–24


To learn how to set a retention period using the AWS CLI, see AWS CLI (p. 729).

Turning Performance Insights on and off


You can turn on Performance Insights for your DB instance or Multi-AZ DB cluster when you create it.
If needed, you can turn it off later. Turning Performance Insights on and off doesn't cause downtime, a
reboot, or a failover.
Note
Performance Schema is an optional performance tool used by Amazon RDS for MariaDB or
MySQL. If you turn Performance Schema on or off, you need to reboot. If you turn Performance
Insights on or off, however, you don't need to reboot. For more information, see Turning
on the Performance Schema for Performance Insights on Amazon RDS for MariaDB or
MySQL (p. 731).


The Performance Insights agent consumes limited CPU and memory on the DB host. When the DB load is
high, the agent limits the performance impact by collecting data less frequently.

Console
In the console, you can turn Performance Insights on or off when you create or modify a DB instance or
Multi-AZ DB cluster.

Turning Performance Insights on or off when creating a DB instance or Multi-AZ DB cluster


When you create a new DB instance or Multi-AZ DB cluster, turn on Performance Insights by choosing
Enable Performance Insights in the Performance Insights section. Or choose Disable Performance
Insights. For more information, see the following topics:

• To create a DB instance, follow the instructions for your DB engine in Creating an Amazon RDS DB
instance (p. 300).
• To create a Multi-AZ DB cluster, follow the instructions for your DB engine in Creating a Multi-AZ DB
cluster (p. 508).

The following screenshot shows the Performance Insights section.

If you choose Enable Performance Insights, you have the following options:

• Retention – The amount of time to retain Performance Insights data. The retention setting in the
free tier is Default (7 days). To retain your performance data for longer, specify 1–24 months.
For more information about retention periods, see Pricing and data retention for Performance
Insights (p. 726).
• AWS KMS key – Specify your AWS KMS key. Performance Insights encrypts all potentially sensitive
data using your KMS key. Data is encrypted in flight and at rest. For more information, see Configuring
an AWS KMS policy for Performance Insights (p. 736).

Turning Performance Insights on or off when modifying a DB instance or Multi-AZ DB cluster


In the console, you can modify a DB instance or Multi-AZ DB cluster to turn Performance Insights on or
off.

To turn Performance Insights on or off for a DB instance or Multi-AZ DB cluster using the console

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Choose Databases.
3. Choose a DB instance or Multi-AZ DB cluster, and choose Modify.
4. In the Performance Insights section, choose either Enable Performance Insights or Disable
Performance Insights.


If you choose Enable Performance Insights, you have the following options:

• Retention – The amount of time to retain Performance Insights data. The retention setting in the
free tier is Default (7 days). To retain your performance data for longer, specify 1–24 months.
For more information about retention periods, see Pricing and data retention for Performance
Insights (p. 726).
• AWS KMS key – Specify your KMS key. Performance Insights encrypts all potentially sensitive data
using your KMS key. Data is encrypted in flight and at rest. For more information, see Encrypting
Amazon RDS resources (p. 2586).
5. Choose Continue.
6. For Scheduling of Modifications, choose Apply immediately. If you choose Apply during the next
scheduled maintenance window, your instance ignores this setting and turns on Performance
Insights immediately.
7. Choose Modify instance.

AWS CLI
When you use the create-db-instance AWS CLI command, turn on Performance Insights by specifying
--enable-performance-insights. Or turn off Performance Insights by specifying --no-enable-
performance-insights.
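
For example, the following create-db-instance sketch turns on Performance Insights when creating a
new MySQL DB instance. The identifier, instance class, credentials, and storage values are placeholders
for illustration only; adjust them for your environment.

# Create a MySQL DB instance with Performance Insights turned on (7-day retention).
aws rds create-db-instance \
    --db-instance-identifier sample-db-instance \
    --engine mysql \
    --db-instance-class db.m5.large \
    --allocated-storage 20 \
    --master-username admin \
    --master-user-password ChangeMe123 \
    --enable-performance-insights \
    --performance-insights-retention-period 7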

You can also specify these values using the following AWS CLI commands:

• create-db-instance-read-replica
• modify-db-instance
• restore-db-instance-from-s3
• create-db-cluster (Multi-AZ DB cluster)
• modify-db-cluster (Multi-AZ DB cluster)

The following procedure describes how to turn Performance Insights on or off for an existing DB instance
using the AWS CLI.

To turn Performance Insights on or off for a DB instance using the AWS CLI

• Call the modify-db-instance AWS CLI command and supply the following values:

• --db-instance-identifier – The name of the DB instance.


• --enable-performance-insights to turn on or --no-enable-performance-insights to
turn off

The following example turns on Performance Insights for sample-db-instance.

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier sample-db-instance \
    --enable-performance-insights

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier sample-db-instance ^
    --enable-performance-insights

When you turn on Performance Insights in the CLI, you can optionally specify the number of days to
retain Performance Insights data with the --performance-insights-retention-period option.
You can specify 7, month * 31 (where month is a number from 1–23), or 731. For example, if you want to
retain your performance data for 3 months, specify 93, which is 3 * 31. The default is 7 days. For more
information about retention periods, see Pricing and data retention for Performance Insights (p. 726).

The following example turns on Performance Insights for sample-db-instance and specifies that
Performance Insights data is retained for 93 days (3 months).

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier sample-db-instance \
    --enable-performance-insights \
    --performance-insights-retention-period 93

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier sample-db-instance ^
    --enable-performance-insights ^
    --performance-insights-retention-period 93

If you specify a retention period such as 94 days, which isn't a valid value, RDS issues an error.

An error occurred (InvalidParameterValue) when calling the CreateDBInstance operation:
Invalid Performance Insights retention period. Valid values are: [7, 31, 62, 93, 124, 155, 186, 217,
248, 279, 310, 341, 372, 403, 434, 465, 496, 527, 558, 589, 620, 651, 682, 713, 731]
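
To confirm the change afterward, one option is to query the instance with describe-db-instances; the
instance identifier here is a placeholder.

# Show whether Performance Insights is turned on and the configured retention period.
aws rds describe-db-instances \
    --db-instance-identifier sample-db-instance \
    --query 'DBInstances[0].[PerformanceInsightsEnabled,PerformanceInsightsRetentionPeriod]'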

RDS API
When you create a new DB instance using the CreateDBInstance Amazon RDS API operation,
turn on Performance Insights by setting EnablePerformanceInsights to True. To turn off
Performance Insights, set EnablePerformanceInsights to False.

You can also specify the EnablePerformanceInsights value using the following API operations:

• ModifyDBInstance
• CreateDBInstanceReadReplica
• RestoreDBInstanceFromS3
• CreateDBCluster (Multi-AZ DB cluster)
• ModifyDBCluster (Multi-AZ DB cluster)

When you turn on Performance Insights, you can optionally specify the amount of time, in days, to
retain Performance Insights data with the PerformanceInsightsRetentionPeriod parameter. You
can specify 7, month * 31 (where month is a number from 1–23), or 731. For example, if you want to
retain your performance data for 3 months, specify 93, which is 3 * 31. The default is 7 days. For more
information about retention periods, see Pricing and data retention for Performance Insights (p. 726).


Turning on the Performance Schema for Performance Insights on Amazon RDS for MariaDB or MySQL

The Performance Schema is an optional feature for monitoring Amazon RDS for MariaDB or MySQL
runtime performance at a low level of detail. The Performance Schema is designed to have minimal
impact on database performance. Performance Insights is a separate feature that you can use with or
without the Performance Schema.

Topics
• Overview of the Performance Schema (p. 731)
• Performance Insights and the Performance Schema (p. 731)
• Automatic management of the Performance Schema by Performance Insights (p. 732)
• Effect of a reboot on the Performance Schema (p. 732)
• Determining whether Performance Insights is managing the Performance Schema (p. 733)
• Configuring the Performance Schema for automatic management (p. 733)

Overview of the Performance Schema


The Performance Schema monitors events in MariaDB and MySQL databases. An event is a database
server action that consumes time and has been instrumented so that timing information can be
collected. Examples of events include the following:

• Function calls
• Waits for the operating system
• Stages of SQL execution
• Groups of SQL statements

The PERFORMANCE_SCHEMA storage engine is a mechanism for implementing the Performance


Schema feature. This engine collects event data using instrumentation in the database source code.
The engine stores events in memory-only tables in the performance_schema database. You can
query performance_schema just as you can query any other tables. For more information, see MySQL
Performance Schema in the MySQL Reference Manual.

Performance Insights and the Performance Schema


Performance Insights and the Performance Schema are separate features, but they are connected. The
behavior of Performance Insights for Amazon RDS for MariaDB or MySQL depends on whether the
Performance Schema is turned on, and if so, whether Performance Insights manages the Performance
Schema automatically. The following describes the behavior for each combination.

Performance Schema turned on, with automatic management by Performance Insights
• Collects detailed, low-level monitoring information
• Collects active session metrics every second
• Displays DB load categorized by detailed wait events, which you can use to identify bottlenecks

Performance Schema turned on, with manual management
• Collects wait events and per-SQL metrics
• Collects active session metrics every five seconds instead of every second
• Reports user states such as inserting and sending, which don't help you identify bottlenecks

Performance Schema turned off
• Doesn't collect wait events, per-SQL metrics, or other detailed, low-level monitoring information
• Collects active session metrics every five seconds instead of every second
• Reports user states such as inserting and sending, which don't help you identify bottlenecks

Automatic management of the Performance Schema by Performance Insights

When you create an Amazon RDS for MariaDB or MySQL DB instance with Performance Insights turned
on, the Performance Schema is also turned on. In this case, Performance Insights automatically manages
your Performance Schema parameters. This is the recommended configuration.
Note
Automatic management of the Performance Schema isn't supported for the t4g.medium
instance class.

For automatic management of the Performance Schema, the following conditions must be true:

• The performance_schema parameter is set to 0.


• The Source is set to system, which is the default.

If you change the performance_schema parameter value manually, and then later want to
change to automatic management, see Configuring the Performance Schema for automatic
management (p. 733).
Important
When Performance Insights turns on the Performance Schema, it doesn't change the parameter
group values. However, the values are changed on the DB instances that are running. The only
way to see the changed values is to run the SHOW GLOBAL VARIABLES command.
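
For example, assuming you can connect to the instance with the mysql client, a check along these lines
shows the value that is actually in effect; the endpoint and user name are placeholders.

# Query the running value of performance_schema directly on the DB instance.
mysql -h sample-db-instance.abcdefg12345.us-west-2.rds.amazonaws.com -u admin -p \
    -e "SHOW GLOBAL VARIABLES LIKE 'performance_schema';"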

Effect of a reboot on the Performance Schema


Performance Insights and the Performance Schema differ in their requirements for DB instance reboots:

Performance Schema

To turn this feature on or off, you must reboot the DB instance.


Performance Insights

To turn this feature on or off, you don't need to reboot the DB instance.

If the Performance Schema isn't currently turned on, and you turn on Performance Insights without
rebooting the DB instance, the Performance Schema won't be turned on.


Determining whether Performance Insights is managing the Performance Schema

To find out whether Performance Insights is currently managing the Performance Schema for major
engine versions 5.6, 5.7, and 8.0, review the following table.

Setting of performance_schema parameter   Setting of the Source column   Performance Insights is managing the Performance Schema?

0                                         system                         Yes

0 or 1                                    user                           No

To determine whether Performance Insights is managing the Performance Schema automatically

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Choose Parameter groups.
3. Select the name of the parameter group for your DB instance.
4. Enter performance_schema in the search bar.
5. Check whether Source is the system default and Values is 0. If so, Performance Insights is
managing the Performance Schema automatically. If not, Performance Insights isn't managing the
Performance Schema automatically.
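
You can make the same check from the AWS CLI by inspecting the parameter group, for example as in
the following sketch; the parameter group name is a placeholder.

# Show the value and source of the performance_schema parameter.
aws rds describe-db-parameters \
    --db-parameter-group-name sample-parameter-group \
    --query "Parameters[?ParameterName=='performance_schema'].[ParameterValue,Source]"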

Configuring the Performance Schema for automatic management

Assume that Performance Insights is turned on for your DB instance or Multi-AZ DB cluster but isn't
currently managing the Performance Schema. If you want to allow Performance Insights to manage the
Performance Schema automatically, complete the following steps.

To configure the Performance Schema for automatic management

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Choose Parameter groups.
3. Select the name of the parameter group for your DB instance or Multi-AZ DB cluster.
4. Enter performance_schema in the search bar.
5. Select the performance_schema parameter.
6. Choose Edit parameters.
7. Select the performance_schema parameter.


8. In Values, choose 0.
9. Choose Reset and then Reset parameters.
10. Reboot the DB instance or Multi-AZ DB cluster.
Important
Whenever you turn the Performance Schema on or off, make sure to reboot the DB instance
or Multi-AZ DB cluster.

For more information about modifying instance parameters, see Modifying parameters in a DB
parameter group (p. 352). For more information about the dashboard, see Analyzing metrics with
the Performance Insights dashboard (p. 738). For more information about the MySQL performance
schema, see MySQL 8.0 Reference Manual.

Configuring access policies for Performance Insights


To access Performance Insights, a principal must have the appropriate permissions from AWS Identity
and Access Management (IAM). You can grant access in the following ways:

• Attach the AmazonRDSPerformanceInsightsReadOnly managed policy to a permission set or role
  to access all read-only operations of the Performance Insights API.
• Attach the AmazonRDSPerformanceInsightsFullAccess managed policy to a permission set or
role to access all operations of the Performance Insights API.
• Create a custom IAM policy and attach it to a permission set or role.

If you specified a customer managed key when you turned on Performance Insights, make sure that users
in your account have the kms:Decrypt and kms:GenerateDataKey permissions on the KMS key.

Attaching the AmazonRDSPerformanceInsightsReadOnly policy to an IAM principal

AmazonRDSPerformanceInsightsReadOnly is an AWS-managed policy that grants access to all read-
only operations of the Amazon RDS Performance Insights API.

If you attach AmazonRDSPerformanceInsightsReadOnly to a permission set or role, the recipient
can use Performance Insights with other console features.

For more information, see AWS managed policy: AmazonRDSPerformanceInsightsReadOnly (p. 2630).
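
As one hedged example, you might attach the managed policy to an existing role with the AWS CLI; the
role name here is a placeholder.

# Attach the read-only Performance Insights managed policy to a role.
aws iam attach-role-policy \
    --role-name sample-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonRDSPerformanceInsightsReadOnly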

Attaching the AmazonRDSPerformanceInsightsFullAccess policy to an IAM principal

AmazonRDSPerformanceInsightsFullAccess is an AWS-managed policy that grants access to all
operations of the Amazon RDS Performance Insights API.

If you attach AmazonRDSPerformanceInsightsFullAccess to a permission set or role, the recipient
can use Performance Insights with other console features.

For more information, see AWS managed policy: AmazonRDSPerformanceInsightsFullAccess (p. 2630).

Creating a custom IAM policy for Performance Insights


For users who don't have the AmazonRDSPerformanceInsightsReadOnly or
AmazonRDSPerformanceInsightsFullAccess policy, you can grant access to Performance Insights


by creating or modifying a user-managed IAM policy. When you attach the policy to an IAM permission
set or role, the recipient can use Performance Insights.

To create a custom policy

1. Open the IAM console at https://console.aws.amazon.com/iam/.


2. In the navigation pane, choose Policies.
3. Choose Create policy.
4. On the Create Policy page, choose the JSON tab.
5. Copy and paste the text provided in the JSON policy document section in the AWS
Managed Policy Reference Guide for AmazonRDSPerformanceInsightsReadOnly or
AmazonRDSPerformanceInsightsFullAccess policy.
6. Choose Review policy.
7. Provide a name for the policy and optionally a description, and then choose Create policy.

You can now attach the policy to a permission set or role. The following procedure assumes that you
already have a user available for this purpose.

To attach the policy to a user

1. Open the IAM console at https://console.aws.amazon.com/iam/.


2. In the navigation pane, choose Users.
3. Choose an existing user from the list.
Important
To use Performance Insights, make sure that you have access to Amazon RDS in addition
to the custom policy. For example, the AmazonRDSPerformanceInsightsReadOnly
predefined policy provides read-only access to Amazon RDS. For more information, see
Managing access using policies (p. 2609).
4. On the Summary page, choose Add permissions.
5. Choose Attach existing policies directly. For Search, type the first few characters of your policy
name, as shown following.

6. Choose your policy, and then choose Next: Review.


7. Choose Add permissions.


Configuring an AWS KMS policy for Performance Insights


Performance Insights uses an AWS KMS key to encrypt sensitive data. When you enable Performance
Insights through the API or the console, you can do either of the following:

• Choose the default AWS managed key.

Amazon RDS uses the AWS managed key for your new DB instance. Amazon RDS creates an AWS
managed key for your AWS account. Your AWS account has a different AWS managed key for Amazon
RDS for each AWS Region.
• Choose a customer managed key.

If you specify a customer managed key, users in your account that call the Performance Insights API
need the kms:Decrypt and kms:GenerateDataKey permissions on the KMS key. You can configure
these permissions through IAM policies. However, we recommend that you manage these permissions
through your KMS key policy. For more information, see Using key policies in AWS KMS.

Example

The following example shows how to add statements to your KMS key policy. These statements allow
access to Performance Insights. Depending on how you use the KMS key, you might want to change
some restrictions. Before adding statements to your policy, remove all comments.

{
"Version" : "2012-10-17",
"Id" : "your-policy",
"Statement" : [ {
//This represents a statement that currently exists in your policy.
}
....,
//Starting here, add new statement to your policy for Performance Insights.
//We recommend that you add one new statement for every RDS instance
{
"Sid" : "Allow viewing RDS Performance Insights",
"Effect": "Allow",
"Principal": {
"AWS": [
//One or more principals allowed to access Performance Insights
"arn:aws:iam::444455556666:role/Role1"
]
},
"Action": [
"kms:Decrypt",
"kms:GenerateDataKey"
],
"Resource": "*",
"Condition" : {
"StringEquals" : {
//Restrict access to only RDS APIs (including Performance Insights).
//Replace region with your AWS Region.
//For example, specify us-west-2.
"kms:ViaService" : "rds.region.amazonaws.com"
},
"ForAnyValue:StringEquals": {
//Restrict access to only data encrypted by Performance Insights.
"kms:EncryptionContext:aws:pi:service": "rds",
"kms:EncryptionContext:service": "pi",

//Restrict access to a specific RDS instance.
//The value is a DbiResourceId.
"kms:EncryptionContext:aws:rds:db-id": "db-AAAAABBBBBCCCCDDDDDEEEEE"
}
}
}
]
}

How Performance Insights uses AWS KMS customer managed key


Performance Insights uses customer managed keys to encrypt sensitive data. When you turn on
Performance Insights, you can provide an AWS KMS key through the API. Performance Insights creates
KMS permissions on this key. It uses the key and performs the necessary operations to process sensitive
data. Sensitive data includes fields such as user, database, application, and SQL query text. Performance
Insights ensures that the data remains encrypted both at rest and in-flight.

How Performance Insights IAM works with AWS KMS


IAM gives permissions to specific APIs. Performance Insights has the following public APIs, which you can
restrict using IAM policies:

• DescribeDimensionKeys
• GetDimensionKeyDetails
• GetResourceMetadata
• GetResourceMetrics
• ListAvailableResourceDimensions
• ListAvailableResourceMetrics

You can use the following API requests to get sensitive data.

• DescribeDimensionKeys
• GetDimensionKeyDetails
• GetResourceMetrics

When you use the API to get sensitive data, Performance Insights leverages the caller's credentials. This
check ensures that access to sensitive data is limited to those with access to the KMS key.

When calling these APIs, you need permissions to call the API through the IAM policy and permissions to
invoke the kms:decrypt action through the AWS KMS key policy.

The GetResourceMetrics API can return both sensitive and non-sensitive data. The request
parameters determine whether the response should include sensitive data. The API returns sensitive data
when the request includes a sensitive dimension in either the filter or group-by parameters.

For more information about the dimensions that you can use with the GetResourceMetrics API, see
DimensionGroup.

Example
The following example requests the sensitive data for the db.user group:

POST / HTTP/1.1
Host: <Hostname>
Accept-Encoding: identity
X-Amz-Target: PerformanceInsightsv20180227.GetResourceMetrics
Content-Type: application/x-amz-json-1.1
User-Agent: <UserAgentString>
X-Amz-Date: <Date>
Authorization: AWS4-HMAC-SHA256 Credential=<Credential>, SignedHeaders=<Headers>,
Signature=<Signature>

Content-Length: <PayloadSizeBytes>
{
"ServiceType": "RDS",
"Identifier": "db-ABC1DEFGHIJKL2MNOPQRSTUV3W",
"MetricQueries": [
{
"Metric": "db.load.avg",
"GroupBy": {
"Group": "db.user",
"Limit": 2
}
}
],
"StartTime": 1693872000,
"EndTime": 1694044800,
"PeriodInSeconds": 86400
}

Example

The following example requests the non-sensitive data for the db.load.avg metric:

POST / HTTP/1.1
Host: <Hostname>
Accept-Encoding: identity
X-Amz-Target: PerformanceInsightsv20180227.GetResourceMetrics
Content-Type: application/x-amz-json-1.1
User-Agent: <UserAgentString>
X-Amz-Date: <Date>
Authorization: AWS4-HMAC-SHA256 Credential=<Credential>, SignedHeaders=<Headers>,
Signature=<Signature>
Content-Length: <PayloadSizeBytes>
{
"ServiceType": "RDS",
"Identifier": "db-ABC1DEFGHIJKL2MNOPQRSTUV3W",
"MetricQueries": [
{
"Metric": "db.load.avg"
}
],
"StartTime": 1693872000,
"EndTime": 1694044800,
"PeriodInSeconds": 86400
}
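
The equivalent call with the AWS CLI looks roughly like the following; the resource identifier and time
range mirror the placeholder values used in the requests above.

# Query db.load.avg grouped by db.user (a sensitive dimension) for the given time range.
aws pi get-resource-metrics \
    --service-type RDS \
    --identifier db-ABC1DEFGHIJKL2MNOPQRSTUV3W \
    --metric-queries '[{"Metric":"db.load.avg","GroupBy":{"Group":"db.user","Limit":2}}]' \
    --start-time 1693872000 \
    --end-time 1694044800 \
    --period-in-seconds 86400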

Analyzing metrics with the Performance Insights dashboard

The Performance Insights dashboard contains database performance information to help you analyze
and troubleshoot performance issues. On the main dashboard page, you can view information about the
database load. You can "slice" DB load by dimensions such as wait events or SQL.

Performance Insights dashboard


• Overview of the Performance Insights dashboard (p. 739)
• Accessing the Performance Insights dashboard (p. 746)
• Analyzing DB load by wait events (p. 749)


• Analyzing database performance for a period of time (p. 750)


• Analyzing queries in the Performance Insights dashboard (p. 756)
• Analyzing Oracle execution plans using the Performance Insights dashboard (p. 766)

Overview of the Performance Insights dashboard


The dashboard is the easiest way to interact with Performance Insights. The following example shows
the dashboard for a MySQL DB instance.

Topics
• Time range filter (p. 739)
• Counter metrics chart (p. 741)
• Database load chart (p. 743)
• Top dimensions table (p. 745)

Time range filter


By default, the Performance Insights dashboard shows DB load for the last hour. You can adjust this
range to be as short as 5 minutes or as long as 2 years. You can also select a custom relative range.


You can select an absolute range with a beginning and ending date and time. The following example
shows the time range beginning at midnight on 4/11/22 and ending at 11:59 PM on 4/14/22.


Counter metrics chart


With counter metrics, you can customize the Performance Insights dashboard to include up to 10
additional graphs. These graphs show a selection of dozens of operating system and database
performance metrics. You can correlate this information with DB load to help identify and analyze
performance problems.

The Counter metrics chart displays data for performance counters. The default metrics depend on the
DB engine:

• MySQL and MariaDB – db.SQL.Innodb_rows_read.avg


• Oracle – db.User.user calls.avg
• Microsoft SQL Server – db.Databases.Active Transactions(_Total).avg
• PostgreSQL – db.Transactions.xact_commit.avg


To change the performance counters, choose Manage Metrics. You can select multiple OS metrics or
Database metrics, as shown in the following screenshot. To see details for any metric, hover over the
metric name.

For descriptions of the counter metrics that you can add for each DB engine, see Performance Insights
counter metrics (p. 814).


Database load chart


The Database load chart shows how the database activity compares to DB instance capacity as
represented by the Max vCPU line. By default, the stacked line chart represents DB load as average active
sessions per unit of time. The DB load is sliced (grouped) by wait states.

DB load sliced by dimensions

You can choose to display load as active sessions grouped by any supported dimensions. The following
table shows which dimensions are supported for the different engines.

Dimension      Oracle   SQL Server   PostgreSQL   MySQL

Host           Yes      Yes          Yes          Yes
SQL            Yes      Yes          Yes          Yes
User           Yes      Yes          Yes          Yes
Waits          Yes      Yes          Yes          Yes
Plans          Yes      No           No           No
Application    No       No           Yes          No
Database       No       No           Yes          Yes
Session type   No       No           Yes          No

The following image shows the dimensions for a PostgreSQL DB instance.


DB load details for a dimension item

To see details about a DB load item within a dimension, hover over the item name. The following image
shows details for a SQL statement.

To see details for any item for the selected time period in the legend, hover over that item.


Top dimensions table


The Top dimensions table slices DB load by different dimensions. A dimension is a category or "slice by"
for different characteristics of DB load. If the dimension is SQL, Top SQL shows the SQL statements that
contribute the most to DB load.

Choose any of the following dimension tabs.

• Top SQL – The SQL statements that are currently running. Supported engines: all.
• Top waits – The event for which the database backend is waiting. Supported engines: all.
• Top hosts – The host name of the connected client. Supported engines: all.
• Top users – The user logged in to the database. Supported engines: all.
• Top databases – The name of the database to which the client is connected. Supported engines:
  PostgreSQL, MySQL, and MariaDB only.
• Top applications – The name of the application that is connected to the database. Supported engines:
  PostgreSQL only.
• Top session types – The type of the current session. Supported engines: PostgreSQL only.
To learn how to analyze queries by using the Top SQL tab, see Overview of the Top SQL tab (p. 756).

Accessing the Performance Insights dashboard


Amazon RDS provides a consolidated view of Performance Insights and CloudWatch metrics in the
Performance Insights dashboard.

To access the Performance Insights dashboard, use the following procedure.

To view the Performance Insights dashboard in the AWS Management Console

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the left navigation pane, choose Performance Insights.
3. Choose a DB instance.
4. Choose the default monitoring view in the displayed window.

• Select the Performance Insights and CloudWatch metrics view (New) option and choose
Continue to view Performance Insights and CloudWatch metrics.
• Select the Performance Insights view option and choose Continue for the legacy monitoring
view. Then, continue with this procedure.
Note
This view will be discontinued on December 15, 2023.

The Performance Insights dashboard appears for the DB instance.

For DB instances with Performance Insights turned on, you can also access the dashboard by
choosing the Sessions item in the list of DB instances. Under Current activity, the Sessions item
shows the database load in average active sessions over the last five minutes. The bar graphically
shows the load. When the bar is empty, the DB instance is idle. As the load increases, the bar fills
with blue. When the load passes the number of virtual CPUs (vCPUs) on the DB instance class, the
bar turns red, indicating a potential bottleneck.


5. (Optional) Choose the date or time range in the upper right and specify a different relative or
absolute time interval. You can now specify a time period, and generate a database performance
analysis report. The report provides the identified insights and recommendations. For more
information, see Analyzing database performance for a period of time (p. 750).

In the following screenshot, the DB load interval is 5 hours.


6. (Optional) To zoom in on a portion of the DB load chart, choose the start time and drag to the end
of the time period you want.

The selected area is highlighted in the DB load chart.

When you release the mouse, the DB load chart zooms in on the selected area, and the Top
dimensions table is recalculated.


7. (Optional) To refresh your data automatically, select Auto refresh.

The Performance Insights dashboard automatically refreshes with new data. The refresh rate
depends on the amount of data displayed:

• 5 minutes refreshes every 10 seconds.


• 1 hour refreshes every 5 minutes.
• 5 hours refreshes every 5 minutes.
• 24 hours refreshes every 30 minutes.
• 1 week refreshes every day.
• 1 month refreshes every day.

Analyzing DB load by wait events


If the Database load chart shows a bottleneck, you can find out where the load is coming from. To do
so, look at the top load items table below the Database load chart. Choose a particular item, like a SQL
query or a user, to drill down into that item and see details about it.

DB load grouped by waits and top SQL queries is the default Performance Insights dashboard view.
This combination typically provides the most insight into performance issues. DB load grouped by waits


shows if there are any resource or concurrency bottlenecks in the database. In this case, the SQL tab of
the top load items table shows which queries are driving that load.

Your typical workflow for diagnosing performance issues is as follows:

1. Review the Database load chart and see if there are any incidents of database load exceeding the Max
CPU line.
2. If there is, look at the Database load chart and identify which wait state or states are primarily
responsible.
3. Identify the digest queries causing the load by seeing which of the queries the SQL tab on the top
load items table are contributing most to those wait states. You can identify these by the DB Load by
Wait column.
4. Choose one of these digest queries in the SQL tab to expand it and see the child queries that it is
composed of.

For example, in the dashboard following, log file sync waits account for most of the DB load. The LGWR
all worker groups wait is also high. The Top SQL chart shows what is causing the log file sync waits:
frequent COMMIT statements. In this case, committing less frequently will reduce DB load.

Analyzing database performance for a period of time


You can create a performance analysis report for a period of time and find out any performance issues
such as resource bottlenecks or changes in a query in your DB instance. The Performance Insights
dashboard allows you to select a time period and create a performance analysis report. You can also add
one or more tags to the report.

To use this feature, you must be using the paid tier retention period. For more information, see Pricing
and data retention for Performance Insights (p. 726).


The report is available on the Performance analysis reports - new tab to select and view. The report
contains the insights, related metrics, and recommendations to resolve the performance issue. The
report is available to view for the duration of the Performance Insights retention period.

The report is deleted if the start time of the report analysis period is outside of the retention period. You
can also delete the report before the retention period ends.

To detect the performance issues and generate the analysis report for your DB instance, you must turn
on Performance Insights. For more information about turning on Performance Insights, see Turning
Performance Insights on and off (p. 727).

For the Region, DB engine, and instance class support information for this feature, see Amazon RDS DB
engine, Region, and instance class support for Performance Insights features (p. 725).

Creating a performance analysis report


You can create a performance analysis report for a specific period in the Performance Insights
dashboard. You can select a time period and add one or more tags to the analysis report.

The analysis period can range from 5 minutes to 6 days. There must be at least 24 hours of performance
data before the analysis start time.

To create a performance analysis report for a time period

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the left navigation pane, choose Performance Insights.
3. Choose a DB instance.

The Performance Insights dashboard appears for the DB instance.


4. Choose Analyze performance in Database load section on the dashboard.

The fields to set the time period and add one or more tags to the performance analysis report are
displayed.

5. Choose the time period. If you set a time period in the Relative range or Absolute range in the
upper right, you can only enter or select the analysis report date and time within this time period. If
you select the analysis period outside of this time period, an error message displays.

To set the time period, you can do any of the following:

• Press and drag any of the sliders on the DB load chart.

The Performance analysis period box displays the selected time period and DB load chart
highlights the selected time period.
• Choose the Start date, Start time, End date, and End time in the Performance analysis period
box.


6. (Optional) Enter Key and Value-optional to add a tag for the report.

7. Choose Analyze performance.

A banner displays a message whether the report generation is successful or failed. The message also
provides the link to view the report.

The following example shows the banner with the report creation successful message.


The report is available to view in Performance analysis reports - new tab.
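
If you prefer to script report creation, the Performance Insights API also exposes this feature, for
example through the create-performance-analysis-report, list-performance-analysis-reports, and
delete-performance-analysis-report commands in recent AWS CLI versions. The following sketch
assumes those commands are available in your CLI version; the resource identifier and epoch
timestamps are placeholders.

# Create a performance analysis report for a time period (Unix epoch seconds).
aws pi create-performance-analysis-report \
    --service-type RDS \
    --identifier db-ABC1DEFGHIJKL2MNOPQRSTUV3W \
    --start-time 1693872000 \
    --end-time 1693958400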

Viewing a performance analysis report


The Performance analysis reports - new tab lists all the reports that are created for the DB instance.
The following are displayed for each report:

• ID: Unique identifier of the report.


• Name: Tag key added to the report.
• Report creation time: Time you created the report.
• Analysis start time: Start time of the analysis in the report.
• Analysis end time: End time of the analysis in the report.

To view a performance analysis report

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the left navigation pane, choose Performance Insights.
3. Choose a DB instance for which you want to view the analysis report.

The Performance Insights dashboard appears for the DB instance.


4. Scroll down and choose Performance analysis reports - new tab.

All the analysis reports for the different time periods are displayed.
5. Choose ID of the report you want to view.

If the report identifies more than one insight, the DB load chart displays the entire analysis period by default.
If the report identifies only one insight, the DB load chart displays that insight by default.

The dashboard also lists the tags for the report in the Tags section.

The following example shows the entire analysis period for the report.

6. If more than one insight is identified in the report, choose the insight that you want to view in the
Database load insights list.

The dashboard displays the insight message, the DB load chart highlighting the time period of the
insight, the analysis and recommendations, and the list of report tags.

The following example shows the DB load insight in the report.
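
If you prefer the AWS CLI, you can list and retrieve the same reports with the list-performance-analysis-reports and get-performance-analysis-report commands, which are covered in Retrieving a performance analysis report (p. 784). The following sketch uses placeholder values for the DB instance resource ID and the report ID.

# List all analysis reports for the DB instance.
aws pi list-performance-analysis-reports \
    --service-type RDS \
    --identifier db-EXAMPLE123

# Retrieve one report, including its insights and recommendations.
aws pi get-performance-analysis-report \
    --service-type RDS \
    --identifier db-EXAMPLE123 \
    --analysis-report-id report-0123456789abcdef0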


Adding tags to a performance analysis report


You can add a tag when you create or view a report. You can add up to 50 tags for a report.

You need permissions to add tags. For more information about the access policies for Performance
Insights, see Configuring access policies for Performance Insights (p. 734).

To add one or more tags while creating a report, see step 6 in the procedure Analyzing database
performance for a period of time (p. 751).

To add one or more tags when viewing a report

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the left navigation pane, choose Performance Insights.
3. Choose a DB instance.

The Performance Insights dashboard appears for the DB instance.


4. Scroll down and choose Performance analysis reports - new tab.
5. Choose the report for which you want to add the tags.

The dashboard displays the report.


6. Scroll down to Tags and choose Manage tags.
7. Choose Add new tag.
8. Enter the Key and Value - optional, and choose Add new tag.

The following example provides the option to add a new tag for the selected report.


A new tag is created for the report.

The list of tags for the report is displayed in the Tags section on the dashboard. If you want to
remove a tag from the report, choose Remove next to the tag.
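
You can manage the same tags from the AWS CLI with the tag-resource and list-tags-for-resource commands, which are covered in Adding a tag to a performance analysis report (p. 785). In the following sketch, the AWS Region, account ID, DB instance resource name, and report ID in the resource ARN are placeholders.

# Add a tag to an existing analysis report.
aws pi tag-resource \
    --service-type RDS \
    --resource-arn arn:aws:pi:us-east-1:123456789012:perf-reports/RDS/db-EXAMPLE123/report-0123456789abcdef0 \
    --tags Key=name,Value=test-tag

# Confirm the tags that are now on the report.
aws pi list-tags-for-resource \
    --service-type RDS \
    --resource-arn arn:aws:pi:us-east-1:123456789012:perf-reports/RDS/db-EXAMPLE123/report-0123456789abcdef0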

Deleting a performance analysis report


You can delete a report from the list of reports displayed in the Performance analysis reports tab or
while viewing a report.

To delete a report

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the left navigation pane, choose Performance Insights.
3. Choose a DB instance.

The Performance Insights dashboard appears for the DB instance.


4. Scroll down and choose Performance analysis reports - new tab.
5. Select the report you want to delete and choose Delete in the upper right.

A confirmation window is displayed. The report is deleted after you choose confirm.
6. (Optional) Choose the ID of the report that you want to delete.

In the report page, choose Delete in the upper right.


A confirmation window is displayed. The report is deleted after you choose confirm.
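
You can also delete a report from the AWS CLI with the delete-performance-analysis-report command, which is covered in Deleting a performance analysis report (p. 785). The DB instance resource ID and report ID below are placeholders.

# Delete the report identified by its report ID.
aws pi delete-performance-analysis-report \
    --service-type RDS \
    --identifier db-EXAMPLE123 \
    --analysis-report-id report-0123456789abcdef0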

Analyzing queries in the Performance Insights dashboard


In the Amazon RDS Performance Insights dashboard, you can find information about running and recent
queries in the Top SQL tab in the Top dimensions table. You can use this information to tune your
queries.
Note
RDS for SQL Server doesn't show SQL-level statistics.

Topics
• Overview of the Top SQL tab (p. 756)
• Accessing more SQL text in the Performance Insights dashboard (p. 761)
• Viewing SQL statistics in the Performance Insights dashboard (p. 763)

Overview of the Top SQL tab


By default, the Top SQL tab shows the 25 queries that are contributing the most to DB load. To help
tune your queries, you can analyze information such as the query text and SQL statistics. You can also
choose the statistics that you want to appear in the Top SQL tab.

Topics
• SQL text (p. 756)
• SQL statistics (p. 757)
• Load by waits (AAS) (p. 758)
• SQL information (p. 759)
• Preferences (p. 759)

SQL text

By default, each row in the Top SQL table shows 500 bytes of text for each statement.

To learn how to see more than the default 500 bytes of SQL text, see Accessing more SQL text in the
Performance Insights dashboard (p. 761).


A SQL digest is a composite of multiple actual queries that are structurally similar but might have
different literal values. The digest replaces hardcoded values with a question mark. For example, a
digest might be SELECT * FROM emp WHERE lname= ?. This digest might include the following child
queries:

SELECT * FROM emp WHERE lname = 'Sanchez'


SELECT * FROM emp WHERE lname = 'Olagappan'
SELECT * FROM emp WHERE lname = 'Wu'

To see the literal SQL statements in a digest, select the query, and then choose the plus symbol (+). In
the following example, the selected query is a digest.

Note
A SQL digest groups similar SQL statements, but doesn't redact sensitive information.

Performance Insights can show Oracle SQL text as Unknown. The text has this status in the following
situations:

• An Oracle database user other than SYS is active but not currently executing SQL. For example, when
a parallel query completes, the query coordinator waits for helper processes to send their session
statistics. For the duration of the wait, the query text shows Unknown.
• For an RDS for Oracle instance on Standard Edition 2, Oracle Resource Manager limits the number of
parallel threads. The background process doing this work causes the query text to show as Unknown.

SQL statistics

SQL statistics are performance-related metrics about SQL queries. For example, Performance Insights
might show executions per second or rows processed per second. Performance Insights collects statistics
for only the most common queries. Typically, these match the top queries by load shown in the
Performance Insights dashboard.

Every line in the Top SQL table shows relevant statistics for the SQL statement or digest, as shown in the
following example.


Performance Insights can report 0.00 and - (unknown) for SQL statistics. This situation occurs under the
following conditions:

• Only one sample exists. For example, Performance Insights calculates rates of change for RDS
PostgreSQL queries based on multiple samples from the pg_stats_statements view. When a
workload runs for a short time, Performance Insights might collect only one sample, which means that
it can't calculate a rate of change. The unknown value is represented with a dash (-).
• Two samples have the same values. Performance Insights can't calculate a rate of change because no
change has occurred, so it reports the rate as 0.00.
• An RDS PostgreSQL statement lacks a valid identifier. PostgreSQL creates an identifier for a statement
only after parsing and analysis. Thus, a statement can exist in the PostgreSQL internal in-memory
structures with no identifier. Because Performance Insights samples internal in-memory structures
once per second, low-latency queries might appear for only a single sample. If the query identifier isn't
available for this sample, Performance Insights can't associate this statement with its statistics. The
unknown value is represented with a dash (-).

For a description of the SQL statistics for the Amazon RDS engines, see SQL statistics for Performance
Insights (p. 830).

Load by waits (AAS)

In Top SQL, the Load by waits (AAS) column illustrates the percentage of the database load associated
with each top load item. This column reflects the load for that item by whatever grouping is currently
selected in the DB Load Chart. For more information about Average active sessions (AAS), see Average
active sessions (p. 721).

For example, you might group the DB load chart by wait states. You examine SQL queries in the top load
items table. In this case, the DB Load by Waits bar is sized, segmented, and color-coded to show how
much of a given wait state that query is contributing to. It also shows which wait states are affecting the
selected query.


SQL information

In the Top SQL table, you can open a statement to view its information. The information appears in the
bottom pane.

The following types of identifiers (IDs) are associated with SQL statements:

• Support SQL ID – A hash value of the SQL ID. This value is only for referencing a SQL ID when you are
working with AWS Support. AWS Support doesn't have access to your actual SQL IDs and SQL text.
• Support Digest ID – A hash value of the digest ID. This value is only for referencing a digest ID when
you are working with AWS Support. AWS Support doesn't have access to your actual digest IDs and
SQL text.

Preferences

You can control the statistics displayed in the Top SQL tab by choosing the Preferences icon.


When you choose the Preferences icon, the Preferences window opens. The following screenshot is an
example of the Preferences window.

To enable the statistics that you want to appear in the Top SQL tab, use your mouse to scroll to the
bottom of the window, and then choose Continue.


For more information about per-second or per-call statistics for the Amazon RDS engines, see the
engine-specific SQL statistics section in SQL statistics for Performance Insights (p. 830).

Accessing more SQL text in the Performance Insights dashboard


By default, each row in the Top SQL table shows 500 bytes of SQL text for each SQL statement.

When a SQL statement exceeds 500 bytes, you can view more text in the SQL text section below the
Top SQL table. In this case, the maximum length for the text displayed in SQL text is 4 KB. This limit is
introduced by the console and is subject to the limits set by the database engine. To save the text shown
in SQL text, choose Download.

Topics
• Text size limits for Amazon RDS engines (p. 761)
• Setting the SQL text limit for Amazon RDS for PostgreSQL DB instances (p. 761)
• Viewing and downloading SQL text in the Performance Insights dashboard (p. 762)

Text size limits for Amazon RDS engines

When you download SQL text, the database engine determines its maximum length. You can download
SQL text up to the following per-engine limits.

DB engine Maximum length of downloaded text

Amazon RDS for MySQL and MariaDB 1,024 bytes

Amazon RDS for Microsoft SQL Server 4,096 characters

Amazon RDS for Oracle 1,000 bytes

The SQL text section of the Performance Insights console displays up to the maximum that the engine
returns. For example, if MySQL returns at most 1 KB to Performance Insights, it can only collect and
show 1 KB, even if the original query is larger. Thus, when you view the query in SQL text or download it,
Performance Insights returns the same number of bytes.

If you use the AWS CLI or API, Performance Insights doesn't have the 4 KB limit enforced by the
console. DescribeDimensionKeys and GetResourceMetrics return at most 500 bytes.
GetDimensionKeyDetails returns the full query, but the size is subject to the engine limit.

Setting the SQL text limit for Amazon RDS for PostgreSQL DB instances

Amazon RDS for PostgreSQL handles text differently. You can set the text size limit with the DB instance
parameter track_activity_query_size. This parameter has the following characteristics:

Default text size

On Amazon RDS for PostgreSQL version 9.6, the default setting for the
track_activity_query_size parameter is 1,024 bytes. On Amazon RDS for PostgreSQL version
10 or higher, the default is 4,096 bytes.
Maximum text size

The limit for track_activity_query_size is 102,400 bytes for Amazon RDS for PostgreSQL
version 12 and lower. The maximum is 1 MB for version 13 and higher.


If the engine returns 1 MB to Performance Insights, the console displays only the first 4 KB. If you
download the query, you get the full 1 MB. In this case, viewing and downloading return different
numbers of bytes. For more information about the track_activity_query_size DB instance
parameter, see Run-time Statistics in the PostgreSQL documentation.

To increase the SQL text size, increase the track_activity_query_size limit. To modify the
parameter, change the parameter setting in the parameter group that is associated with the Amazon RDS
for PostgreSQL DB instance.

To change the setting when the instance uses the default parameter group

1. Create a new DB instance parameter group for the appropriate DB engine and DB engine version.
2. Set the parameter in the new parameter group.
3. Associate the new parameter group with the DB instance.

For information about setting a DB instance parameter, see Modifying parameters in a DB parameter
group (p. 352).
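
The following AWS CLI sketch shows one way to make this change. It assumes an RDS for PostgreSQL 14 instance; the parameter group name, DB instance identifier, and parameter value are placeholders. Because track_activity_query_size is a static parameter, the new value takes effect only after the DB instance is rebooted.

# 1. Create a custom DB parameter group (the family must match your engine version).
aws rds create-db-parameter-group \
    --db-parameter-group-name my-pg-pi-params \
    --db-parameter-group-family postgres14 \
    --description "Larger SQL text for Performance Insights"

# 2. Increase track_activity_query_size. Static parameters require ApplyMethod=pending-reboot.
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-pg-pi-params \
    --parameters "ParameterName=track_activity_query_size,ParameterValue=102400,ApplyMethod=pending-reboot"

# 3. Associate the parameter group with the DB instance, then reboot the instance.
aws rds modify-db-instance \
    --db-instance-identifier my-postgres-instance \
    --db-parameter-group-name my-pg-pi-params

aws rds reboot-db-instance \
    --db-instance-identifier my-postgres-instance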

Viewing and downloading SQL text in the Performance Insights dashboard

In the Performance Insights dashboard, you can view or download SQL text.

To view more SQL text in the Performance Insights dashboard

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the navigation pane, choose Performance Insights.
3. Choose a DB instance.

The Performance Insights dashboard is displayed for your DB instance.


4. Scroll down to the Top SQL tab.
5. Choose a SQL statement.

SQL statements with text larger than 500 bytes look similar to the following image.

6. Scroll down to the SQL text tab.


The Performance Insights dashboard can display up to 4,096 bytes for each SQL statement.
7. (Optional) Choose Copy to copy the displayed SQL statement, or choose Download to download the
SQL statement to view the SQL text up to the DB engine limit.
Note
To copy or download the SQL statement, disable pop-up blockers.

Viewing SQL statistics in the Performance Insights dashboard


In the Performance Insights dashboard, SQL statistics are available in the Top SQL tab of the Database
load chart.

To view SQL statistics

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the left navigation pane, choose Performance Insights.
3. At the top of the page, choose the database whose SQL statistics you want to see.
4. Scroll to the bottom of the page and choose the Top SQL tab.
5. Choose an individual statement or digest query.


6. Choose which statistics to display by choosing the gear icon in the upper-right corner of the
chart. For descriptions of the SQL statistics for the Amazon RDS engines, see SQL statistics for
Performance Insights (p. 830).

The following example shows the statistics preferences for Oracle DB instances.


The following example shows the preferences for MariaDB and MySQL DB instances.


7. Choose Save to save your preferences.

The Top SQL table refreshes.

The following example shows statistics for an Oracle SQL query.

Analyzing Oracle execution plans using the Performance Insights dashboard

When analyzing DB load on an Oracle Database, you might want to know which plans are contributing
the most to DB load. For example, the top SQL statements at a given time might be using the plans
shown in the following table.


Top SQL Plan

SELECT SUM(amount_sold) FROM sales WHERE prod_id = 10 Plan A

SELECT SUM(amount_sold) FROM sales WHERE prod_id = 521 Plan B

SELECT SUM(s_total) FROM sales WHERE region = 10 Plan A

SELECT * FROM emp WHERE emp_id = 1000 Plan C

SELECT SUM(amount_sold) FROM sales WHERE prod_id = 72 Plan A

With the plan feature of Performance Insights, you can do the following:

• Find out which plans are used by the top SQL queries.

For example, you might find out that most of the DB load is generated by queries using plan A and
plan B, with only a small percentage using plan C.
• Compare different plans for the same query.

In the preceding example, three queries are identical except for the product ID. Two queries use plan A,
but one query uses plan B. To see the difference in the two plans, you can use Performance Insights.
• Find out when a query switched to a new plan.

You might see that a query used plan A and then switched to plan B at a certain time. Was there a
change in the database at this point? For example, if a table is empty, the optimizer might choose a
full table scan. If the table is loaded with a million rows, the optimizer might switch to an index range
scan.
• Drill down to the specific steps of a plan with the highest cost.

For example, the plan for a long-running query might show a missing join condition in an equijoin. This
missing condition forces a Cartesian join, which joins all rows of two tables.

You can perform the preceding tasks by using the plan capture feature of Performance Insights. Just as
you can slice Oracle queries by wait events and top SQL, you can slice them by the plan dimension.

For the region, DB engine, and instance class support information for this feature, see Amazon RDS DB
engine, Region, and instance class support for Performance Insights features (p. 725).

To analyze Oracle execution plans using the console

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the navigation pane, choose Performance Insights.
3. Choose an Oracle DB instance. The Performance Insights dashboard is displayed for that DB
instance.
4. In the Database load (DB load) section, choose Plans next to Slice by.

The Average active sessions chart shows the plans used by your top SQL statements. The plan hash
values appear to the right of the color-coded squares. Each hash value uniquely identifies a plan.


5. Scroll down to the Top SQL tab.

In the following example, the top SQL digest has two plans. You can tell that it's a digest by the
question mark in the statement.

6. Choose the digest to expand it into its component statements.

In the following example, the SELECT statement is a digest query. The component queries in the
digest use two different plans. The colors of the plans correspond to the database load chart. The
total number of plans in the digest is shown in the second column.

7. Scroll down and choose two plans to compare from the Plans for digest query list.


You can view either one or two plans for a query at a time. The following screenshot compares the
two plans in the digest, with hash 2032253151 and hash 1117438016. In the following example,
62% of the average active sessions running this digest query are using the plan on the left, whereas
38% are using the plan on the right.

In this example, the plans differ in an important way. Step 2 in plan 2032253151 uses an index scan,
whereas plan 1117438016 uses a full table scan. For a table with a large number of rows, a query of
a single row is almost always faster with an index scan.

8. (Optional) Choose Copy to copy the plan to the clipboard, or Download to save the plan to your
hard drive.

Retrieving metrics with the Performance Insights API


When Performance Insights is turned on, the API provides visibility into instance performance. Amazon
CloudWatch Logs provides the authoritative source for vended monitoring metrics for AWS services.

Performance Insights offers a domain-specific view of database load measured as average active
sessions (AAS). This metric appears to API consumers as a two-dimensional time-series dataset. The time
dimension of the data provides DB load data for each time point in the queried time range. Each time
point decomposes overall load in relation to the requested dimensions, such as SQL, Wait-event, User,
or Host, measured at that time point.

Amazon RDS Performance Insights monitors your Amazon RDS DB instance so that you can analyze
and troubleshoot database performance. One way to view Performance Insights data is in the AWS
Management Console. Performance Insights also provides a public API so that you can query your own
data. You can use the API to do the following:


• Offload data into a database


• Add Performance Insights data to existing monitoring dashboards
• Build monitoring tools

To use the Performance Insights API, enable Performance Insights on one of your Amazon RDS DB
instances. For information about enabling Performance Insights, see Turning Performance Insights
on and off (p. 727). For more information about the Performance Insights API, see the Amazon RDS
Performance Insights API Reference.

The Performance Insights API provides the following operations.

Each entry shows the Performance Insights action, the corresponding AWS CLI command, and a description of the operation.

• CreatePerformanceAnalysisReport (aws pi create-performance-analysis-report) – Creates a performance analysis report for a specific time period for the DB instance. The result is AnalysisReportId, which is the unique identifier of the report.
• DeletePerformanceAnalysisReport (aws pi delete-performance-analysis-report) – Deletes a performance analysis report.
• DescribeDimensionKeys (aws pi describe-dimension-keys) – Retrieves the top N dimension keys for a metric for a specific time period.
• GetDimensionKeyDetails (aws pi get-dimension-key-details) – Retrieves the attributes of the specified dimension group for a DB instance or data source. For example, if you specify a SQL ID, and if the dimension details are available, GetDimensionKeyDetails retrieves the full text of the dimension db.sql.statement associated with this ID. This operation is useful because GetResourceMetrics and DescribeDimensionKeys don't support retrieval of large SQL statement text.
• GetPerformanceAnalysisReport (aws pi get-performance-analysis-report) – Retrieves the report, including the insights for the report. The result includes the report status, report ID, report time details, insights, and recommendations.
• GetResourceMetadata (aws pi get-resource-metadata) – Retrieves the metadata for different features. For example, the metadata might indicate that a feature is turned on or off on a specific DB instance.
• GetResourceMetrics (aws pi get-resource-metrics) – Retrieves Performance Insights metrics for a set of data sources over a time period. You can provide specific dimension groups and dimensions, and provide aggregation and filtering criteria for each group.
• ListAvailableResourceDimensions (aws pi list-available-resource-dimensions) – Retrieves the dimensions that can be queried for each specified metric type on a specified instance.
• ListAvailableResourceMetrics (aws pi list-available-resource-metrics) – Retrieves all available metrics of the specified metric types that can be queried for a specified DB instance.
• ListPerformanceAnalysisReports (aws pi list-performance-analysis-reports) – Retrieves all the analysis reports available for the DB instance. The reports are listed based on the start time of each report.
• ListTagsForResource (aws pi list-tags-for-resource) – Lists all the metadata tags added to the resource. The list includes the name and value of the tag.
• TagResource (aws pi tag-resource) – Adds metadata tags to the Amazon RDS resource. The tag includes a name and a value.
• UntagResource (aws pi untag-resource) – Removes the metadata tag from the resource.

Topics
• AWS CLI for Performance Insights (p. 771)
• Retrieving time-series metrics (p. 771)
• AWS CLI examples for Performance Insights (p. 773)

AWS CLI for Performance Insights


You can view Performance Insights data using the AWS CLI. You can view help for the AWS CLI
commands for Performance Insights by entering the following on the command line.

aws pi help

If you don't have the AWS CLI installed, see Installing the AWS Command Line Interface in the AWS CLI
User Guide for information about installing it.

Retrieving time-series metrics


The GetResourceMetrics operation retrieves one or more time-series metrics from the Performance
Insights data. GetResourceMetrics requires a metric and time period, and returns a response with a
list of data points.

For example, the AWS Management Console uses GetResourceMetrics to populate the Counter
Metrics chart and the Database Load chart, as seen in the following image.


All metrics returned by GetResourceMetrics are standard time-series metrics, with the exception of
db.load. This metric is displayed in the Database Load chart. The db.load metric is different from
the other time-series metrics because you can break it into subcomponents called dimensions. In the
previous image, db.load is broken down and grouped by the wait states that make up db.load.
Note
GetResourceMetrics can also return the db.sampleload metric, but the db.load metric is
appropriate in most cases.

For information about the counter metrics returned by GetResourceMetrics, see Performance
Insights counter metrics (p. 814).

The following calculations are supported for the metrics:

• Average – The average value for the metric over a period of time. Append .avg to the metric name.
• Minimum – The minimum value for the metric over a period of time. Append .min to the metric name.
• Maximum – The maximum value for the metric over a period of time. Append .max to the metric
name.
• Sum – The sum of the metric values over a period of time. Append .sum to the metric name.
• Sample count – The number of times the metric was collected over a period of time. Append
.sample_count to the metric name.

For example, assume that a metric is collected for 300 seconds (5 minutes), and that the metric is
collected one time each minute. The values for each minute are 1, 2, 3, 4, and 5. In this case, the
following calculations are returned:

• Average – 3
• Minimum – 1
• Maximum – 5
• Sum – 15
• Sample count – 5

For information about using the get-resource-metrics AWS CLI command, see get-resource-
metrics.


For the --metric-queries option, specify one or more queries that you want to get results for. Each
query consists of a mandatory Metric and optional GroupBy and Filter parameters. The following is
an example of a --metric-queries option specification.

{
    "Metric": "string",
    "GroupBy": {
        "Group": "string",
        "Dimensions": ["string", ...],
        "Limit": integer
    },
    "Filter": {"string": "string"
    ...}
}

AWS CLI examples for Performance Insights


The following examples show how to use the AWS CLI for Performance Insights.

Topics
• Retrieving counter metrics (p. 773)
• Retrieving the DB load average for top wait events (p. 776)
• Retrieving the DB load average for top SQL (p. 777)
• Retrieving the DB load average filtered by SQL (p. 780)
• Retrieving the full text of a SQL statement (p. 783)
• Creating a performance analysis report for a time period (p. 783)
• Retrieving a performance analysis report (p. 784)
• Listing all the performance analysis reports for the DB instance (p. 784)
• Deleting a performance analysis report (p. 785)
• Adding a tag to a performance analysis report (p. 785)
• Listing all the tags for a performance analysis report (p. 785)
• Deleting tags from a performance analysis report (p. 786)

Retrieving counter metrics


The following screenshot shows two counter metrics charts in the AWS Management Console.


The following example shows how to gather the same data that the AWS Management Console uses to
generate the two counter metric charts.

For Linux, macOS, or Unix:

aws pi get-resource-metrics \
--service-type RDS \
--identifier db-ID \
--start-time 2018-10-30T00:00:00Z \
--end-time 2018-10-30T01:00:00Z \
--period-in-seconds 60 \
--metric-queries '[{"Metric": "os.cpuUtilization.user.avg" },
{"Metric": "os.cpuUtilization.idle.avg"}]'

For Windows:

aws pi get-resource-metrics ^
--service-type RDS ^
--identifier db-ID ^
--start-time 2018-10-30T00:00:00Z ^
--end-time 2018-10-30T01:00:00Z ^
--period-in-seconds 60 ^
--metric-queries '[{"Metric": "os.cpuUtilization.user.avg" },
{"Metric": "os.cpuUtilization.idle.avg"}]'

You can also make a command easier to read by specifying a file for the --metric-queries option. The
following example uses a file called query.json for the option. The file has the following contents.

[
{
"Metric": "os.cpuUtilization.user.avg"
},
{
"Metric": "os.cpuUtilization.idle.avg"
}
]

Run the following command to use the file.

For Linux, macOS, or Unix:

aws pi get-resource-metrics \
--service-type RDS \
--identifier db-ID \
--start-time 2018-10-30T00:00:00Z \
--end-time 2018-10-30T01:00:00Z \
--period-in-seconds 60 \
--metric-queries file://query.json

For Windows:

aws pi get-resource-metrics ^
--service-type RDS ^
--identifier db-ID ^
--start-time 2018-10-30T00:00:00Z ^
--end-time 2018-10-30T01:00:00Z ^
--period-in-seconds 60 ^
--metric-queries file://query.json


The preceding example specifies the following values for the options:

• --service-type – RDS for Amazon RDS


• --identifier – The resource ID for the DB instance
• --start-time and --end-time – The ISO 8601 DateTime values for the period to query, with
multiple supported formats

It queries for a one-hour time range:

• --period-in-seconds – 60 for a per-minute query


• --metric-queries – An array of two queries, each just for one metric.

The metric name uses dots to classify the metric in a useful category, with the final element being
a function. In the example, the function is avg for each query. As with Amazon CloudWatch, the
supported functions are min, max, total, and avg.

The response looks similar to the following.

{
"Identifier": "db-XXX",
"AlignedStartTime": 1540857600.0,
"AlignedEndTime": 1540861200.0,
"MetricList": [
{ //A list of key/datapoints
"Key": {
"Metric": "os.cpuUtilization.user.avg" //Metric1
},
"DataPoints": [
//Each list of datapoints has the same timestamps and same number of items
{
"Timestamp": 1540857660.0, //Minute1
"Value": 4.0
},
{
"Timestamp": 1540857720.0, //Minute2
"Value": 4.0
},
{
"Timestamp": 1540857780.0, //Minute 3
"Value": 10.0
}
//... 60 datapoints for the os.cpuUtilization.user.avg metric
]
},
{
"Key": {
"Metric": "os.cpuUtilization.idle.avg" //Metric2
},
"DataPoints": [
{
"Timestamp": 1540857660.0, //Minute1
"Value": 12.0
},
{
"Timestamp": 1540857720.0, //Minute2
"Value": 13.5
},
//... 60 datapoints for the os.cpuUtilization.idle.avg metric
]
}
] //end of MetricList


} //end of response

The response has an Identifier, AlignedStartTime, and AlignedEndTime. Because the --period-in-seconds
value was 60, the start and end times have been aligned to the minute. If the --period-in-seconds
was 3600, the start and end times would have been aligned to the hour.

The MetricList in the response has a number of entries, each with a Key and a DataPoints entry.
Each DataPoint has a Timestamp and a Value. Each Datapoints list has 60 data points because the
queries are for per-minute data over an hour, with Timestamp1/Minute1, Timestamp2/Minute2, and
so on, up to Timestamp60/Minute60.

Because the query is for two different counter metrics, there are two elements in the response
MetricList.

Retrieving the DB load average for top wait events


The following example is the same query that the AWS Management Console uses to generate a
stacked area line graph. This example retrieves the db.load.avg for the last hour with load divided
according to the top seven wait events. The command is the same as the command in Retrieving counter
metrics (p. 773). However, the query.json file has the following contents.

[
{
"Metric": "db.load.avg",
"GroupBy": { "Group": "db.wait_event", "Limit": 7 }
}
]

Run the following command.

For Linux, macOS, or Unix:

aws pi get-resource-metrics \
--service-type RDS \
--identifier db-ID \
--start-time 2018-10-30T00:00:00Z \
--end-time 2018-10-30T01:00:00Z \
--period-in-seconds 60 \
--metric-queries file://query.json

For Windows:

aws pi get-resource-metrics ^
--service-type RDS ^
--identifier db-ID ^
--start-time 2018-10-30T00:00:00Z ^
--end-time 2018-10-30T01:00:00Z ^
--period-in-seconds 60 ^
--metric-queries file://query.json

The example specifies the metric of db.load.avg and a GroupBy of the top seven wait events.
For details about valid values for this example, see DimensionGroup in the Performance Insights API
Reference.

The response looks similar to the following.

{
"Identifier": "db-XXX",


"AlignedStartTime": 1540857600.0,
"AlignedEndTime": 1540861200.0,
"MetricList": [
{ //A list of key/datapoints
"Key": {
//A Metric with no dimensions. This is the total db.load.avg
"Metric": "db.load.avg"
},
"DataPoints": [
//Each list of datapoints has the same timestamps and same number of items
{
"Timestamp": 1540857660.0, //Minute1
"Value": 0.5166666666666667
},
{
"Timestamp": 1540857720.0, //Minute2
"Value": 0.38333333333333336
},
{
"Timestamp": 1540857780.0, //Minute 3
"Value": 0.26666666666666666
}
//... 60 datapoints for the total db.load.avg key
]
},
{
"Key": {
//Another key. This is db.load.avg broken down by CPU
"Metric": "db.load.avg",
"Dimensions": {
"db.wait_event.name": "CPU",
"db.wait_event.type": "CPU"
}
},
"DataPoints": [
{
"Timestamp": 1540857660.0, //Minute1
"Value": 0.35
},
{
"Timestamp": 1540857720.0, //Minute2
"Value": 0.15
},
//... 60 datapoints for the CPU key
]
},
//... In total we have 8 key/datapoints entries, 1) total, 2-8) Top Wait Events
] //end of MetricList
} //end of response

In this response, there are eight entries in the MetricList. There is one entry for the total
db.load.avg, and seven entries each for the db.load.avg divided according to one of the top seven
wait events. Unlike in the first example, because there was a grouping dimension, there must be one
key for each grouping of the metric. There can't be only one key for each metric, as in the basic counter
metric use case.

Retrieving the DB load average for top SQL


The following example groups db.wait_events by the top 10 SQL statements. There are two different
groups for SQL statements:

• db.sql – The full SQL statement, such as select * from customers where customer_id =
123


• db.sql_tokenized – The tokenized SQL statement, such as select * from customers where
customer_id = ?

When analyzing database performance, it can be useful to consider SQL statements that only differ
by their parameters as one logical item. So, you can use db.sql_tokenized when querying. However,
especially when you're interested in explain plans, sometimes it's more useful to examine full SQL
statements with parameters, and query grouping by db.sql. There is a parent-child relationship
between tokenized and full SQL, with multiple full SQL (children) grouped under the same tokenized
SQL (parent).

The command in this example is similar to the command in Retrieving the DB load average for top
wait events (p. 776). However, the query.json file has the following contents.

[
{
"Metric": "db.load.avg",
"GroupBy": { "Group": "db.sql_tokenized", "Limit": 10 }
}
]

The following example uses db.sql_tokenized.

For Linux, macOS, or Unix:

aws pi get-resource-metrics \
--service-type RDS \
--identifier db-ID \
--start-time 2018-10-29T00:00:00Z \
--end-time 2018-10-30T00:00:00Z \
--period-in-seconds 3600 \
--metric-queries file://query.json

For Windows:

aws pi get-resource-metrics ^
--service-type RDS ^
--identifier db-ID ^
--start-time 2018-10-29T00:00:00Z ^
--end-time 2018-10-30T00:00:00Z ^
--period-in-seconds 3600 ^
--metric-queries file://query.json

This example queries over 24 hours, with a one hour period-in-seconds.

The example specifies the metric of db.load.avg and a GroupBy of the top 10 tokenized SQL statements.
For details about valid values for this example, see DimensionGroup in the Performance Insights API
Reference.

The response looks similar to the following.

{
"AlignedStartTime": 1540771200.0,
"AlignedEndTime": 1540857600.0,
"Identifier": "db-XXX",

"MetricList": [ //11 entries in the MetricList


{


"Key": { //First key is total


"Metric": "db.load.avg"
}
"DataPoints": [ //Each DataPoints list has 24 per-hour Timestamps and a value
{
"Value": 1.6964980544747081,
"Timestamp": 1540774800.0
},
//... 24 datapoints
]
},
{
"Key": { //Next key is the top tokenized SQL
"Dimensions": {
"db.sql_tokenized.statement": "INSERT INTO authors (id,name,email)
VALUES\n( nextval(?) ,?,?)",
"db.sql_tokenized.db_id": "pi-2372568224",
"db.sql_tokenized.id": "AKIAIOSFODNN7EXAMPLE"
},
"Metric": "db.load.avg"
},
"DataPoints": [ //... 24 datapoints
]
},
// In total 11 entries, 10 Keys of top tokenized SQL, 1 total key
] //End of MetricList
} //End of response

This response has 11 entries in the MetricList (1 total, 10 top tokenized SQL), with each entry having
24 per-hour DataPoints.

For tokenized SQL, there are three entries in each dimensions list:

• db.sql_tokenized.statement – The tokenized SQL statement.


• db.sql_tokenized.db_id – Either the native database ID used to refer to the SQL, or a synthetic
ID that Performance Insights generates for you if the native database ID isn't available. This example
returns the pi-2372568224 synthetic ID.
• db.sql_tokenized.id – The ID of the query inside Performance Insights.

In the AWS Management Console, this ID is called the Support ID. It's named this because the ID is
data that AWS Support can examine to help you troubleshoot an issue with your database. AWS takes
the security and privacy of your data extremely seriously, and almost all data is stored encrypted with
your AWS KMS customer master key (CMK). Therefore, nobody inside AWS can look at this data. In
the example preceding, both the tokenized.statement and the tokenized.db_id are stored
encrypted. If you have an issue with your database, AWS Support can help you by referencing the
Support ID.

When querying, it might be convenient to specify a Group in GroupBy. However, for finer-grained
control over the data that's returned, specify the list of dimensions. For example, if all that is needed is
the db.sql_tokenized.statement, then a Dimensions attribute can be added to the query.json file.

[
{
"Metric": "db.load.avg",
"GroupBy": {
"Group": "db.sql_tokenized",
"Dimensions":["db.sql_tokenized.statement"],
"Limit": 10
}
}
]


Retrieving the DB load average filtered by SQL

The preceding image shows that a particular query is selected, and the top average active sessions
stacked area line graph is scoped to that query. Although the query is still for the top seven overall wait
events, the value of the response is filtered. The filter causes it to take into account only sessions that are
a match for the particular filter.

The corresponding API query in this example is similar to the command in Retrieving the DB load average
for top SQL (p. 777). However, the query.json file has the following contents.

[
{
"Metric": "db.load.avg",
"GroupBy": { "Group": "db.wait_event", "Limit": 5 },
"Filter": { "db.sql_tokenized.id": "AKIAIOSFODNN7EXAMPLE" }
}
]

For Linux, macOS, or Unix:

aws pi get-resource-metrics \
--service-type RDS \
--identifier db-ID \
--start-time 2018-10-30T00:00:00Z \
--end-time 2018-10-30T01:00:00Z \
--period-in-seconds 60 \
--metric-queries file://query.json

For Windows:

aws pi get-resource-metrics ^
--service-type RDS ^
--identifier db-ID ^
--start-time 2018-10-30T00:00:00Z ^
--end-time 2018-10-30T01:00:00Z ^
--period-in-seconds 60 ^
--metric-queries file://query.json

The response looks similar to the following.

{
"Identifier": "db-XXX",
"AlignedStartTime": 1556215200.0,
"MetricList": [
{
"Key": {
"Metric": "db.load.avg"
},
"DataPoints": [
{
"Timestamp": 1556218800.0,
"Value": 1.4878117913832196
},
{
"Timestamp": 1556222400.0,
"Value": 1.192823803967328
}
]
},
{
"Key": {
"Metric": "db.load.avg",
"Dimensions": {
"db.wait_event.type": "io",
"db.wait_event.name": "wait/io/aurora_redo_log_flush"
}
},
"DataPoints": [
{
"Timestamp": 1556218800.0,
"Value": 1.1360544217687074
},
{
"Timestamp": 1556222400.0,
"Value": 1.058051341890315
}
]
},
{
"Key": {
"Metric": "db.load.avg",
"Dimensions": {
"db.wait_event.type": "io",
"db.wait_event.name": "wait/io/table/sql/handler"
}
},
"DataPoints": [
{
"Timestamp": 1556218800.0,
"Value": 0.16241496598639457
},
{
"Timestamp": 1556222400.0,
"Value": 0.05163360560093349
}
]
},
{


"Key": {
"Metric": "db.load.avg",
"Dimensions": {
"db.wait_event.type": "synch",
"db.wait_event.name": "wait/synch/mutex/innodb/
aurora_lock_thread_slot_futex"
}
},
"DataPoints": [
{
"Timestamp": 1556218800.0,
"Value": 0.11479591836734694
},
{
"Timestamp": 1556222400.0,
"Value": 0.013127187864644107
}
]
},
{
"Key": {
"Metric": "db.load.avg",
"Dimensions": {
"db.wait_event.type": "CPU",
"db.wait_event.name": "CPU"
}
},
"DataPoints": [
{
"Timestamp": 1556218800.0,
"Value": 0.05215419501133787
},
{
"Timestamp": 1556222400.0,
"Value": 0.05805134189031505
}
]
},
{
"Key": {
"Metric": "db.load.avg",
"Dimensions": {
"db.wait_event.type": "synch",
"db.wait_event.name": "wait/synch/mutex/innodb/lock_wait_mutex"
}
},
"DataPoints": [
{
"Timestamp": 1556218800.0,
"Value": 0.017573696145124718
},
{
"Timestamp": 1556222400.0,
"Value": 0.002333722287047841
}
]
}
],
"AlignedEndTime": 1556222400.0
} //end of response

In this response, all values are filtered according to the contribution of tokenized SQL
AKIAIOSFODNN7EXAMPLE specified in the query.json file. The keys also might follow a different order
than a query without a filter, because it's the top five wait events that affected the filtered SQL.


Retrieving the full text of a SQL statement


The following example retrieves the full text of a SQL statement for DB instance
db-10BCD2EFGHIJ3KL4M5NO6PQRS5. The --group is db.sql, and the --group-identifier is
db.sql.id. In this example, my-sql-id represents a SQL ID retrieved by invoking pi get-resource-
metrics or pi describe-dimension-keys.

Run the following command.

For Linux, macOS, or Unix:

aws pi get-dimension-key-details \
--service-type RDS \
--identifier db-10BCD2EFGHIJ3KL4M5NO6PQRS5 \
--group db.sql \
--group-identifier my-sql-id \
--requested-dimensions statement

For Windows:

aws pi get-dimension-key-details ^
--service-type RDS ^
--identifier db-10BCD2EFGHIJ3KL4M5NO6PQRS5 ^
--group db.sql ^
--group-identifier my-sql-id ^
--requested-dimensions statement

In this example, the dimensions details are available. Thus, Performance Insights retrieves the full text of
the SQL statement, without truncating it.

{
"Dimensions":[
{
"Value": "SELECT e.last_name, d.department_name FROM employees e, departments d
WHERE e.department_id=d.department_id",
"Dimension": "db.sql.statement",
"Status": "AVAILABLE"
},
...
]
}

Creating a performance analysis report for a time period


The following example creates a performance analysis report with the 1682969503 start time and
1682979503 end time for the db-loadtest-0 database.

aws pi create-performance-analysis-report \
--service-type RDS \
--identifier db-loadtest-0 \
--start-time 1682969503 \
--end-time 1682979503 \
--region us-west-2

The response is the unique identifier report-0234d3ed98e28fb17 for the report.


{
"AnalysisReportId": "report-0234d3ed98e28fb17"
}

Retrieving a performance analysis report


The following example retrieves the analysis report details for the report-0d99cc91c4422ee61
report.

aws pi get-performance-analysis-report \
--service-type RDS \
--identifier db-loadtest-0 \
--analysis-report-id report-0d99cc91c4422ee61 \
--region us-west-2

The response provides the report status, ID, time details, and insights.

{
"AnalysisReport": {
"Status": "Succeeded",
"ServiceType": "RDS",
"Identifier": "db-loadtest-0",
"StartTime": 1680583486.584,
"AnalysisReportId": "report-0d99cc91c4422ee61",
"EndTime": 1680587086.584,
"CreateTime": 1680587087.139,
"Insights": [
... (Condensed for space)
]
}
}

Listing all the performance analysis reports for the DB instance


The following example lists all the available performance analysis reports for the db-loadtest-0
database.

aws pi list-performance-analysis-reports \
--service-type RDS \
--identifier db-loadtest-0 \
--region us-west-2

The response lists all the reports with the report ID, status, and time period details.

{
"AnalysisReports": [
{
"Status": "Succeeded",
"EndTime": 1680587086.584,
"CreationTime": 1680587087.139,


"StartTime": 1680583486.584,
"AnalysisReportId": "report-0d99cc91c4422ee61"
},
{
"Status": "Succeeded",
"EndTime": 1681491137.914,
"CreationTime": 1681491145.973,
"StartTime": 1681487537.914,
"AnalysisReportId": "report-002633115cc002233"
},
{
"Status": "Succeeded",
"EndTime": 1681493499.849,
"CreationTime": 1681493507.762,
"StartTime": 1681489899.849,
"AnalysisReportId": "report-043b1e006b47246f9"
},
{
"Status": "InProgress",
"EndTime": 1682979503.0,
"CreationTime": 1682979618.994,
"StartTime": 1682969503.0,
"AnalysisReportId": "report-01ad15f9b88bcbd56"
}
]
}

Deleting a performance analysis report


The following example deletes the analysis report for the db-loadtest-0 database.

aws pi delete-performance-analysis-report \
--service-type RDS \
--identifier db-loadtest-0 \
--analysis-report-id report-0d99cc91c4422ee61 \
--region us-west-2

Adding a tag to a performance analysis report


The following example adds a tag with a key name and value test-tag to the
report-01ad15f9b88bcbd56 report.

aws pi tag-resource \
--service-type RDS \
--resource-arn arn:aws:pi:us-west-2:356798100956:perf-reports/RDS/db-loadtest-0/report-01ad15f9b88bcbd56 \
--tags Key=name,Value=test-tag \
--region us-west-2

Listing all the tags for a performance analysis report


The following example lists all the tags for the report-01ad15f9b88bcbd56 report.


aws pi list-tags-for-resource \
--service-type RDS \
--resource-arn arn:aws:pi:us-west-2:356798100956:perf-reports/RDS/db-loadtest-0/report-01ad15f9b88bcbd56 \
--region us-west-2

The response lists the value and key for all the tags added to the report:

{
"Tags": [
{
"Value": "test-tag",
"Key": "name"
}
]
}

Deleting tags from a performance analysis report


The following example deletes the name tag from the report-01ad15f9b88bcbd56 report.

aws pi untag-resource \
--service-type RDS \
--resource-arn arn:aws:pi:us-west-2:356798100956:perf-reports/RDS/db-loadtest-0/report-01ad15f9b88bcbd56 \
--tag-keys name \
--region us-west-2

After the tag is deleted, calling the list-tags-for-resource API doesn't list this tag.

Logging Performance Insights calls using AWS CloudTrail

Performance Insights is integrated with AWS CloudTrail, a service that provides a record of actions taken by a
user, role, or an AWS service in Performance Insights. CloudTrail captures all API calls for Performance
Insights as events. This capture includes calls from the Amazon RDS console and from code calls to the
Performance Insights API operations.

If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket,
including events for Performance Insights. If you don't configure a trail, you can still view the most
recent events in the CloudTrail console in Event history. Using the data collected by CloudTrail, you can
determine certain information. This information includes the request that was made to Performance
Insights, the IP address the request was made from, who made the request, and when it was made. It
also includes additional details.

To learn more about CloudTrail, see the AWS CloudTrail User Guide.

Working with Performance Insights information in CloudTrail


CloudTrail is enabled on your AWS account when you create the account. When activity occurs in
Performance Insights, that activity is recorded in a CloudTrail event along with other AWS service events


in the CloudTrail console in Event history. You can view, search, and download recent events in your AWS
account. For more information, see Viewing Events with CloudTrail Event History in AWS CloudTrail User
Guide.

For an ongoing record of events in your AWS account, including events for Performance Insights, create a
trail. A trail enables CloudTrail to deliver log files to an Amazon S3 bucket. By default, when you create a
trail in the console, the trail applies to all AWS Regions. The trail logs events from all AWS Regions in the
AWS partition and delivers the log files to the Amazon S3 bucket that you specify. Additionally, you can
configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs.
For more information, see the following topics in AWS CloudTrail User Guide:

• Overview for Creating a Trail


• CloudTrail Supported Services and Integrations
• Configuring Amazon SNS Notifications for CloudTrail
• Receiving CloudTrail Log Files from Multiple Regions and Receiving CloudTrail Log Files from Multiple
Accounts

All Performance Insights operations are logged by CloudTrail and are documented in the Performance
Insights API Reference. For example, calls to the DescribeDimensionKeys and GetResourceMetrics
operations generate entries in the CloudTrail log files.
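
For example, you can list recent Performance Insights API events from the AWS CLI by filtering CloudTrail event history on the pi.amazonaws.com event source. The following is a minimal sketch; the time range values are placeholders.

# Look up recent Performance Insights API events in CloudTrail event history.
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventSource,AttributeValue=pi.amazonaws.com \
    --start-time 2019-12-18T00:00:00Z \
    --end-time 2019-12-19T00:00:00Z \
    --max-results 10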

Every event or log entry contains information about who generated the request. The identity
information helps you determine the following:

• Whether the request was made with root or IAM user credentials.
• Whether the request was made with temporary security credentials for a role or federated user.
• Whether the request was made by another AWS service.

For more information, see the CloudTrail userIdentity Element.

Performance Insights log file entries


A trail is a configuration that enables delivery of events as log files to an Amazon S3 bucket that you
specify. CloudTrail log files contain one or more log entries. An event represents a single request from
any source. Each event includes information about the requested operation, the date and time of the
operation, request parameters, and so on. CloudTrail log files aren't an ordered stack trace of the public
API calls, so they don't appear in any specific order.

The following example shows a CloudTrail log entry that demonstrates the GetResourceMetrics
operation.

{
"eventVersion": "1.05",
"userIdentity": {
"type": "IAMUser",
"principalId": "AKIAIOSFODNN7EXAMPLE",
"arn": "arn:aws:iam::123456789012:user/johndoe",
"accountId": "123456789012",
"accessKeyId": "AKIAI44QH8DHBEXAMPLE",
"userName": "johndoe"
},
"eventTime": "2019-12-18T19:28:46Z",
"eventSource": "pi.amazonaws.com",
"eventName": "GetResourceMetrics",
"awsRegion": "us-east-1",
"sourceIPAddress": "72.21.198.67",
"userAgent": "aws-cli/1.16.240 Python/3.7.4 Darwin/18.7.0 botocore/1.12.230",


"requestParameters": {
"identifier": "db-YTDU5J5V66X7CXSCVDFD2V3SZM",
"metricQueries": [
{
"metric": "os.cpuUtilization.user.avg"
},
{
"metric": "os.cpuUtilization.idle.avg"
}
],
"startTime": "Dec 18, 2019 5:28:46 PM",
"periodInSeconds": 60,
"endTime": "Dec 18, 2019 7:28:46 PM",
"serviceType": "RDS"
},
"responseElements": null,
"requestID": "9ffbe15c-96b5-4fe6-bed9-9fccff1a0525",
"eventID": "08908de0-2431-4e2e-ba7b-f5424f908433",
"eventType": "AwsApiCall",
"recipientAccountId": "123456789012"
}


Analyzing performance anomalies with Amazon DevOps Guru for Amazon RDS

Amazon DevOps Guru is a fully managed operations service that helps developers and operators improve
the performance and availability of their applications. DevOps Guru offloads the tasks associated with
identifying operational issues so that you can quickly implement recommendations to improve your
application. For more information, see What is Amazon DevOps Guru? in the Amazon DevOps Guru User
Guide.

DevOps Guru detects, analyzes, and makes recommendations for existing operational issues for all
Amazon RDS DB engines. DevOps Guru for RDS extends this capability by applying machine learning
to Performance Insights metrics for RDS for PostgreSQL databases. These monitoring features allow
DevOps Guru for RDS to detect and diagnose performance bottlenecks and recommend specific
corrective actions. DevOps Guru for RDS can also detect problematic conditions in your RDS for
PostgreSQL database before they occur.

The following video is an overview of DevOps Guru for RDS.

For a deep dive on this subject, see Amazon DevOps Guru for RDS under the hood.

Topics
• Benefits of DevOps Guru for RDS (p. 789)
• How DevOps Guru for RDS works (p. 790)
• Setting up DevOps Guru for RDS (p. 791)

Benefits of DevOps Guru for RDS


If you're responsible for an RDS for PostgreSQL database, you might not know that an event or regression
affecting that database is occurring. When you learn about the issue, you might not know why
it's occurring or what to do about it. Rather than turning to a database administrator (DBA) for help or
relying on third-party tools, you can follow recommendations from DevOps Guru for RDS.

You gain the following advantages from the detailed analysis of DevOps Guru for RDS:

Fast diagnosis

DevOps Guru for RDS continuously monitors and analyzes database telemetry. Performance Insights,
Enhanced Monitoring, and Amazon CloudWatch collect telemetry data for your database instance.
DevOps Guru for RDS uses statistical and machine learning techniques to mine this data and detect
anomalies. To learn more about telemetry data, see Monitoring DB load with Performance Insights
on Amazon RDS and Monitoring OS metrics with Enhanced Monitoring in the Amazon RDS User
Guide.
Fast resolution

Each anomaly identifies the performance issue and suggests avenues of investigation or corrective
actions. For example, DevOps Guru for RDS might recommend that you investigate specific wait
events. Or it might recommend that you tune your application pool settings to limit the number of
database connections. Based on these recommendations, you can resolve performance issues more
quickly than by troubleshooting manually.
Proactive insights

DevOps Guru for RDS uses metrics from your resources to detect potentially problematic behavior
before it becomes a bigger problem. For example, it can detect when your database is using
an increasing number of on-disk temporary tables, which could start to impact performance. DevOps Guru
then provides recommendations to help you address issues before they become bigger problems.
Deep knowledge of Amazon engineers and machine learning

To detect performance issues and help you resolve bottlenecks, DevOps Guru for RDS relies
on machine learning (ML) and advanced mathematical formulas. Amazon database engineers
contributed to the development of the DevOps Guru for RDS findings, which encapsulate many
years of managing hundreds of thousands of databases. By drawing on this collective knowledge,
DevOps Guru for RDS can teach you best practices.

How DevOps Guru for RDS works


DevOps Guru for RDS collects data about your RDS for PostgreSQL databases from Amazon RDS
Performance Insights. The most important metric is DBLoad. DevOps Guru for RDS consumes the
Performance Insights metrics, analyzes them with machine learning, and publishes insights to the
dashboard.

An insight is a collection of related anomalies that were detected by DevOps Guru.

In DevOps Guru for RDS, an anomaly is a pattern that deviates from what is considered normal
performance for your RDS for PostgreSQL database.

Proactive insights
A proactive insight lets you know about problematic behavior before it occurs. It contains anomalies with
recommendations and related metrics to help you address issues in your RDS for PostgreSQL databases
before they become bigger problems. These insights are published in the DevOps Guru dashboard.

For example, DevOps Guru might detect that your RDS for PostgreSQL database is creating many on-
disk temporary tables. If not addressed, this trend might lead to performance issues. Each proactive
insight includes recommendations for corrective behavior and links to relevant topics in Tuning RDS for
PostgreSQL with Amazon DevOps Guru proactive insights (p. 2353). For more information, see Working
with insights in DevOps Guru in the Amazon DevOps Guru User Guide.

Reactive insights
A reactive insight identifies anomalous behavior as it occurs. If DevOps Guru for RDS finds performance
issues in your RDS for PostgreSQL DB instances, it publishes a reactive insight in the DevOps Guru
dashboard. For more information, see Working with insights in DevOps Guru in the Amazon DevOps Guru
User Guide.

Causal anomalies
A causal anomaly is a top-level anomaly within a reactive insight. Database load (DB load) is the causal
anomaly for DevOps Guru for RDS.

An anomaly measures performance impact by assigning a severity level of High, Medium, or Low. To
learn more, see Key concepts for DevOps Guru for RDS in the Amazon DevOps Guru User Guide.

If DevOps Guru detects a current anomaly on your DB instance, you're alerted in the Databases page
of the RDS console. The console also alerts you to anomalies that occurred in the past 24 hours. To go
to the anomaly page from the RDS console, choose the link in the alert message. The RDS console also
alerts you in the page for your RDS for PostgreSQL DB instance.

Contextual anomalies
A contextual anomaly is a finding within Database load (DB load) that is related to a reactive insight.
Each contextual anomaly describes a specific RDS for PostgreSQL performance issue that requires
investigation. For example, DevOps Guru for RDS might recommend that you consider increasing CPU
capacity or investigate wait events that are contributing to DB load.
Important
We recommend that you test any changes on a test instance before modifying a production
instance. In this way, you understand the impact of the change.

To learn more, see Analyzing anomalies in Amazon RDS in the Amazon DevOps Guru User Guide.

Setting up DevOps Guru for RDS


To allow DevOps Guru for Amazon RDS to publish insights for an RDS for PostgreSQL database, complete
the following tasks.

Topics
• Configuring IAM access policies for DevOps Guru for RDS (p. 791)
• Turning on Performance Insights for your RDS for PostgreSQL DB instances (p. 791)
• Turning on DevOps Guru and specifying resource coverage (p. 791)

Configuring IAM access policies for DevOps Guru for RDS


To view alerts from DevOps Guru in the RDS console, your AWS Identity and Access Management (IAM)
user or role must have either of the following policies:

• The AWS managed policy AmazonDevOpsGuruConsoleFullAccess


• The AWS managed policy AmazonDevOpsGuruConsoleReadOnlyAccess and either of the following
policies:
• The AWS managed policy AmazonRDSFullAccess
• A customer managed policy that includes pi:GetResourceMetrics and
pi:DescribeDimensionKeys

For more information, see Configuring access policies for Performance Insights (p. 734).
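A customer managed policy that grants those two Performance Insights actions might look like the following minimal sketch. The wide-open Resource element is only an illustration; you might scope it to specific resources in your environment.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "pi:GetResourceMetrics",
                "pi:DescribeDimensionKeys"
            ],
            "Resource": "*"
        }
    ]
}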

Turning on Performance Insights for your RDS for PostgreSQL DB instances
DevOps Guru for RDS relies on Performance Insights for its data. Without Performance Insights,
DevOps Guru publishes anomalies, but doesn't include the detailed analysis and recommendations.

When you create or modify an RDS for PostgreSQL DB instance, you can turn on Performance Insights. For
more information, see Turning Performance Insights on and off (p. 727).
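If you manage instances from the command line, turning on Performance Insights might look like the following sketch; mydbinstance is a placeholder DB instance identifier.

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --enable-performance-insights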

Turning on DevOps Guru and specifying resource coverage


You can turn on DevOps Guru to have it monitor your RDS for PostgreSQL databases in either of the
following ways.

Topics
• Turning on DevOps Guru in the RDS console (p. 792)
• Adding RDS for PostgreSQL resources in the DevOps Guru console (p. 795)
• Adding RDS for PostgreSQL resources using AWS CloudFormation (p. 795)


Turning on DevOps Guru in the RDS console


You can take multiple paths in the Amazon RDS console to turn on DevOps Guru.

Topics
• Turning on DevOps Guru when you create an RDS for PostgreSQL database (p. 792)
• Turning on DevOps Guru from the notification banner (p. 793)
• Responding to a permissions error when you turn on DevOps Guru (p. 794)

Turning on DevOps Guru when you create an RDS for PostgreSQL database

The creation workflow includes a setting that turns on DevOps Guru coverage for your database. This
setting is turned on by default when you choose the Production template.

To turn on DevOps Guru when you create an RDS for PostgreSQL database

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Follow the steps in Creating a DB instance (p. 303), up to but not including the step where you
choose monitoring settings.
3. In Monitoring, choose Turn on Performance Insights. For DevOps Guru for RDS to provide detailed
analysis of performance anomalies, Performance Insights must be turned on.
4. Choose Turn on DevOps Guru.


5. Create a tag for your database so that DevOps Guru can monitor it. Do the following:

• In the text field for Tag key, enter a name that begins with Devops-Guru-.
• In the text field for Tag value, enter any value. For example, if you enter rds-database-1 for
the name of your RDS for PostgreSQL database, you can also enter rds-database-1 as the tag
value.

For more information about tags, see "Use tags to identify resources in your DevOps Guru
applications" in the Amazon DevOps Guru User Guide.
6. Complete the remaining steps in Creating a DB instance (p. 303).
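
If you prefer to add the DevOps Guru tag from the AWS CLI instead of the console, a sketch like the following can be used. The Region, account ID, and the tag key suffix after Devops-Guru- are placeholders.

aws rds add-tags-to-resource \
    --resource-name arn:aws:rds:us-east-1:123456789012:db:rds-database-1 \
    --tags Key=Devops-Guru-rds,Value=rds-database-1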

Turning on DevOps Guru from the notification banner

If your resources aren't covered by DevOps Guru, Amazon RDS notifies you with a banner in the following
locations:

• The Monitoring tab of a DB cluster instance


• The Performance Insights dashboard


To turn on DevOps Guru for your RDS for PostgreSQL database

1. In the banner, choose Turn on DevOps Guru for RDS.


2. Enter a tag key name and value. For more information about tags, see "Use tags to identify resources
in your DevOps Guru applications" in the Amazon DevOps Guru User Guide.

3. Choose Turn on DevOps Guru.

Responding to a permissions error when you turn on DevOps Guru

If you turn on DevOps Guru from the RDS console when you create a database, RDS might display the
following banner about missing permissions.

To respond to a permissions error

1. Grant your IAM user or role the AWS managed policy AmazonDevOpsGuruConsoleFullAccess. For
more information, see Configuring IAM access policies for DevOps Guru for RDS (p. 791).
2. Open the RDS console.
3. In the navigation pane, choose Performance Insights.
4. Choose a DB instance in the cluster that you just created.
5. Turn on DevOps Guru for RDS.


6. Choose a tag value. For more information, see "Use tags to identify resources in your DevOps Guru
applications" in the Amazon DevOps Guru User Guide.

7. Choose Turn on DevOps Guru.

Adding RDS for PostgreSQL resources in the DevOps Guru console


You can specify your DevOps Guru resource coverage on the DevOps Guru console. Follow the step
described in Specify your DevOps Guru resource coverage in the Amazon DevOps Guru User Guide. When
you edit your analyzed resources, choose one of the following options:

• Choose All account resources to analyze all supported resources, including the RDS for PostgreSQL
databases, in your AWS account and Region.
• Choose CloudFormation stacks to analyze the RDS for PostgreSQL databases that are in stacks you
choose. For more information, see Use AWS CloudFormation stacks to identify resources in your
DevOps Guru applications in the Amazon DevOps Guru User Guide.
• Choose Tags to analyze the RDS for PostgreSQL databases that you have tagged. For more
information, see Use tags to identify resources in your DevOps Guru applications in the Amazon
DevOps Guru User Guide.

For more information, see Enable DevOps Guru in the Amazon DevOps Guru User Guide.

Adding RDS for PostgreSQL resources using AWS CloudFormation


You can use tags to add coverage for your RDS for PostgreSQL resources to your CloudFormation
templates. The following procedure assumes that you have a CloudFormation template both for your
RDS for PostgreSQL DB instance and DevOps Guru stack.

To specify an RDS for PostgreSQL DB instance using a CloudFormation tag

1. In the CloudFormation template for your DB instance, define a tag using a key/value pair.

The following example assigns the value devopsguru-my-db-instance1 to the tag key
Devops-guru-cfn-default for an RDS for PostgreSQL DB instance.


MyDBInstance1:
  Type: "AWS::RDS::DBInstance"
  Properties:
    DBInstanceIdentifier: my-db-instance1
    Tags:
      - Key: Devops-guru-cfn-default
        Value: devopsguru-my-db-instance1

2. In the CloudFormation template for your DevOps Guru stack, specify the same tag in your resource
collection filter.

The following example configures DevOps Guru to provide coverage for the resource with the tag
value devopsguru-my-db-instance1.

DevOpsGuruResourceCollection:
  Type: AWS::DevOpsGuru::ResourceCollection
  Properties:
    ResourceCollectionFilter:
      Tags:
        - AppBoundaryKey: "Devops-guru-cfn-default"
          TagValues:
            - "devopsguru-my-db-instance1"

The following example provides coverage for all resources within the application boundary
Devops-guru-cfn-default.

DevOpsGuruResourceCollection:
  Type: AWS::DevOpsGuru::ResourceCollection
  Properties:
    ResourceCollectionFilter:
      Tags:
        - AppBoundaryKey: "Devops-guru-cfn-default"
          TagValues:
            - "*"

For more information, see AWS::DevOpsGuru::ResourceCollection and AWS::RDS::DBInstance in the AWS
CloudFormation User Guide.
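
After both templates contain the tag, you deploy or update the stacks as you normally would. The following is only a generic sketch using the AWS CLI; the stack name and template file name are placeholders.

aws cloudformation deploy \
    --stack-name my-devops-guru-coverage \
    --template-file devops-guru-coverage.yaml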


Monitoring OS metrics with Enhanced Monitoring


With Enhanced Monitoring, you can monitor the operating system of your DB instance in real time. When
you want to see how different processes or threads use the CPU, Enhanced Monitoring metrics are useful.

Topics
• Overview of Enhanced Monitoring (p. 797)
• Setting up and enabling Enhanced Monitoring (p. 798)
• Viewing OS metrics in the RDS console (p. 802)
• Viewing OS metrics using CloudWatch Logs (p. 805)

Overview of Enhanced Monitoring


Amazon RDS provides metrics in real time for the operating system (OS) that your DB instance runs on.
You can view all the system metrics and process information for your RDS DB instances on the console.
You can manage which metrics you want to monitor for each instance and customize the dashboard
according to your requirements. For descriptions of the Enhanced Monitoring metrics, see OS metrics in
Enhanced Monitoring (p. 837).

RDS delivers the metrics from Enhanced Monitoring into your Amazon CloudWatch Logs account.
You can create metrics filters in CloudWatch from CloudWatch Logs and display the graphs on the
CloudWatch dashboard. You can consume the Enhanced Monitoring JSON output from CloudWatch Logs
in a monitoring system of your choice. For more information, see Enhanced Monitoring in the Amazon
RDS FAQs.

Topics
• Enhanced Monitoring availability (p. 797)
• Differences between CloudWatch and Enhanced Monitoring metrics (p. 797)
• Retention of Enhanced Monitoring metrics (p. 798)
• Cost of Enhanced Monitoring (p. 798)

Enhanced Monitoring availability


Enhanced Monitoring is available for the following database engines:

• MariaDB
• Microsoft SQL Server
• MySQL
• Oracle
• PostgreSQL

Enhanced Monitoring is available for all DB instance classes except for the db.m1.small instance class.

Differences between CloudWatch and Enhanced Monitoring metrics
A hypervisor creates and runs virtual machines (VMs). Using a hypervisor, an instance can support
multiple guest VMs by virtually sharing memory and CPU. CloudWatch gathers metrics about CPU
utilization from the hypervisor for a DB instance. In contrast, Enhanced Monitoring gathers its metrics
from an agent on the DB instance.

You might find differences between the CloudWatch and Enhanced Monitoring measurements, because
the hypervisor layer performs a small amount of work. The differences can be greater if your DB
instances use smaller instance classes. In this scenario, more virtual machines (VMs) are probably
managed by the hypervisor layer on a single physical instance.

For descriptions of the Enhanced Monitoring metrics, see OS metrics in Enhanced Monitoring (p. 837).
For more information about CloudWatch metrics, see the Amazon CloudWatch User Guide.

Retention of Enhanced Monitoring metrics


By default, Enhanced Monitoring metrics are stored for 30 days in the CloudWatch Logs. This retention
period is different from typical CloudWatch metrics.

To modify the amount of time the metrics are stored in the CloudWatch Logs, change the retention for
the RDSOSMetrics log group in the CloudWatch console. For more information, see Change log data
retention in CloudWatch logs in the Amazon CloudWatch Logs User Guide.
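
You can also change the retention period from the AWS CLI; the 14-day value in the following sketch is only an illustration.

aws logs put-retention-policy \
    --log-group-name RDSOSMetrics \
    --retention-in-days 14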

Cost of Enhanced Monitoring


Enhanced Monitoring metrics are stored in the CloudWatch Logs instead of in CloudWatch metrics. The
cost of Enhanced Monitoring depends on the following factors:

• You are charged for Enhanced Monitoring only if you exceed the free tier provided by Amazon
CloudWatch Logs. Charges are based on CloudWatch Logs data transfer and storage rates.
• The amount of information transferred for an RDS instance is directly proportional to the defined
granularity for the Enhanced Monitoring feature. A smaller monitoring interval results in more
frequent reporting of OS metrics and increases your monitoring cost. To manage costs, set different
granularities for different instances in your accounts.
• Usage costs for Enhanced Monitoring are applied for each DB instance that Enhanced Monitoring is
enabled for. Monitoring a large number of DB instances is more expensive than monitoring only a few.
• DB instances that support a more compute-intensive workload have more OS process activity to report
and higher costs for Enhanced Monitoring.

For more information about pricing, see Amazon CloudWatch pricing.

Setting up and enabling Enhanced Monitoring


To use Enhanced Monitoring, you must create an IAM role, and then enable Enhanced Monitoring.

Topics
• Creating an IAM role for Enhanced Monitoring (p. 798)
• Turning Enhanced Monitoring on and off (p. 799)
• Protecting against the confused deputy problem (p. 801)

Creating an IAM role for Enhanced Monitoring


Enhanced Monitoring requires permission to act on your behalf to send OS metric information to
CloudWatch Logs. You grant Enhanced Monitoring permissions using an AWS Identity and Access
Management (IAM) role. You can either create this role when you enable Enhanced Monitoring or create
it beforehand.


Topics
• Creating the IAM role when you enable Enhanced Monitoring (p. 799)
• Creating the IAM role before you enable Enhanced Monitoring (p. 799)

Creating the IAM role when you enable Enhanced Monitoring


When you enable Enhanced Monitoring in the RDS console, Amazon RDS can create the required IAM
role for you. The role is named rds-monitoring-role. RDS uses this role for the specified DB instance,
read replica, or Multi-AZ DB cluster.

To create the IAM role when enabling Enhanced Monitoring

1. Follow the steps in Turning Enhanced Monitoring on and off (p. 799).


2. Set Monitoring Role to Default in the step where you choose a role.

Creating the IAM role before you enable Enhanced Monitoring


You can create the required role before you enable Enhanced Monitoring. When you enable Enhanced
Monitoring, specify your new role's name. You must create this required role if you enable Enhanced
Monitoring using the AWS CLI or the RDS API.

The user that enables Enhanced Monitoring must be granted the PassRole permission. For more
information, see Example 2 in Granting a user permissions to pass a role to an AWS service in the IAM
User Guide.

To create an IAM role for Amazon RDS enhanced monitoring

1. Open the IAM console at https://console.aws.amazon.com/iam/.


2. In the navigation pane, choose Roles.
3. Choose Create role.
4. Choose the AWS service tab, and then choose RDS from the list of services.
5. Choose RDS - Enhanced Monitoring, and then choose Next.
6. Ensure that the Permissions policies shows AmazonRDSEnhancedMonitoringRole, and then
choose Next.
7. For Role name, enter a name for your role. For example, enter emaccess.

The trusted entity for your role is the AWS service monitoring.rds.amazonaws.com.
8. Choose Create role.
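
You can also create the same role with the AWS CLI. The following is a sketch that assumes a trust policy saved locally as emaccess-trust.json that allows the monitoring.rds.amazonaws.com service principal to assume the role; the role name emaccess matches the console example above.

# Create the role from a locally saved trust policy document.
aws iam create-role \
    --role-name emaccess \
    --assume-role-policy-document file://emaccess-trust.json

# Attach the AWS managed permissions policy for Enhanced Monitoring.
aws iam attach-role-policy \
    --role-name emaccess \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonRDSEnhancedMonitoringRole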

Turning Enhanced Monitoring on and off


You can turn Enhanced Monitoring on and off using the AWS Management Console, AWS CLI, or RDS
API. You choose the RDS DB instances on which you want to turn on Enhanced Monitoring. You can set
different granularities for metric collection on each DB instance.

Console

You can turn on Enhanced Monitoring when you create a DB instance, Multi-AZ DB cluster, or read
replica, or when you modify a DB instance or Multi-AZ DB cluster. If you modify a DB instance to turn on
Enhanced Monitoring, you don't need to reboot your DB instance for the change to take effect.

You can turn on Enhanced Monitoring in the RDS console when you do one of the following actions in
the Databases page:


• Create a DB instance or Multi-AZ DB cluster – Choose Create database.


• Create a read replica – Choose Actions, then Create read replica.
• Modify a DB instance or Multi-AZ DB cluster – Choose Modify.

To turn Enhanced Monitoring on or off in the RDS console

1. Scroll to Additional configuration.


2. In Monitoring, choose Enable Enhanced Monitoring for your DB instance or read replica. To turn
Enhanced Monitoring off, choose Disable Enhanced Monitoring.
3. Set the Monitoring Role property to the IAM role that you created to permit Amazon RDS to
communicate with Amazon CloudWatch Logs for you, or choose Default to have RDS create a role
for you named rds-monitoring-role.
4. Set the Granularity property to the interval, in seconds, between points when metrics are collected
for your DB instance or read replica. The Granularity property can be set to one of the following
values: 1, 5, 10, 15, 30, or 60.

The fastest that the RDS console refreshes is every 5 seconds. If you set the granularity to 1 second
in the RDS console, you still see updated metrics only every 5 seconds. You can retrieve 1-second
metric updates by using CloudWatch Logs.

AWS CLI

To turn on Enhanced Monitoring using the AWS CLI, in the following commands, set the
--monitoring-interval option to a value other than 0 and set the --monitoring-role-arn option
to the role you created in Creating an IAM role for Enhanced Monitoring (p. 798).

• create-db-instance
• create-db-instance-read-replica
• modify-db-instance
• create-db-cluster (Multi-AZ DB cluster)
• modify-db-cluster (Multi-AZ DB cluster)

The --monitoring-interval option specifies the interval, in seconds, between points when Enhanced
Monitoring metrics are collected. Valid values for the option are 0, 1, 5, 10, 15, 30, and 60.

To turn off Enhanced Monitoring using the AWS CLI, set the --monitoring-interval option to 0 in
these commands.

Example

The following example turns on Enhanced Monitoring for a DB instance:

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --monitoring-interval 30 \
    --monitoring-role-arn arn:aws:iam::123456789012:role/emaccess

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --monitoring-interval 30 ^
    --monitoring-role-arn arn:aws:iam::123456789012:role/emaccess

Example

The following example turns on Enhanced Monitoring for a Multi-AZ DB cluster:

For Linux, macOS, or Unix:

aws rds modify-db-cluster \
    --db-cluster-identifier mydbcluster \
    --monitoring-interval 30 \
    --monitoring-role-arn arn:aws:iam::123456789012:role/emaccess

For Windows:

aws rds modify-db-cluster ^
    --db-cluster-identifier mydbcluster ^
    --monitoring-interval 30 ^
    --monitoring-role-arn arn:aws:iam::123456789012:role/emaccess

RDS API

To turn on Enhanced Monitoring using the RDS API, set the MonitoringInterval parameter to a value
other than 0 and set the MonitoringRoleArn parameter to the role you created in Creating an IAM
role for Enhanced Monitoring (p. 798). Set these parameters in the following actions:

• CreateDBInstance
• CreateDBInstanceReadReplica
• ModifyDBInstance
• CreateDBCluster (Multi-AZ DB cluster)
• ModifyDBCluster (Multi-AZ DB cluster)

The MonitoringInterval parameter specifies the interval, in seconds, between points when Enhanced
Monitoring metrics are collected. Valid values are 0, 1, 5, 10, 15, 30, and 60.

To turn off Enhanced Monitoring using the RDS API, set MonitoringInterval to 0.

Protecting against the confused deputy problem


The confused deputy problem is a security issue where an entity that doesn't have permission to perform
an action can coerce a more-privileged entity to perform the action. In AWS, cross-service impersonation
can result in the confused deputy problem. Cross-service impersonation can occur when one service (the
calling service) calls another service (the called service). The calling service can be manipulated to use its
permissions to act on another customer's resources in a way it should not otherwise have permission to
access. To prevent this, AWS provides tools that help you protect your data for all services with service
principals that have been given access to resources in your account. For more information, see The
confused deputy problem.

To limit the permissions to the resource that Amazon RDS can give another service, we recommend using
the aws:SourceArn and aws:SourceAccount global condition context keys in a trust policy for your
Enhanced Monitoring role. If you use both global condition context keys, they must use the same account
ID.


The most effective way to protect against the confused deputy problem is to use the aws:SourceArn
global condition context key with the full ARN of the resource. For Amazon RDS, set aws:SourceArn to
arn:aws:rds:Region:my-account-id:db:dbname.

The following example uses the aws:SourceArn and aws:SourceAccount global condition context
keys in a trust policy to prevent the confused deputy problem.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "monitoring.rds.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringLike": {
          "aws:SourceArn": "arn:aws:rds:Region:my-account-id:db:dbname"
        },
        "StringEquals": {
          "aws:SourceAccount": "my-account-id"
        }
      }
    }
  ]
}

Viewing OS metrics in the RDS console


You can view OS metrics reported by Enhanced Monitoring in the RDS console by choosing Enhanced
monitoring for Monitoring.

The following example shows the Enhanced Monitoring page. For descriptions of the Enhanced
Monitoring metrics, see OS metrics in Enhanced Monitoring (p. 837).

Some DB instances use more than one disk for the DB instance's data storage volume. On those
DB instances, the Physical Devices graphs show metrics for each one of the disks. For example, the
following graph shows metrics for four disks.


Note
Currently, Physical Devices graphs are not available for Microsoft SQL Server DB instances.

When you are viewing aggregated Disk I/O and File system graphs, the rdsdev device relates to the
/rdsdbdata file system, where all database files and logs are stored. The filesystem device relates to the
/ file system (also known as root), where files related to the operating system are stored.

If the DB instance is a Multi-AZ deployment, you can view the OS metrics for the primary DB instance
and its Multi-AZ standby replica. In the Enhanced monitoring view, choose primary to view the OS
metrics for the primary DB instance, or choose secondary to view the OS metrics for the standby replica.


For more information about Multi-AZ deployments, see Configuring and managing a Multi-AZ
deployment (p. 492).
Note
Currently, viewing OS metrics for a Multi-AZ standby replica is not supported for MariaDB DB
instances.

If you want to see details for the processes running on your DB instance, choose OS process list for
Monitoring.

The Process List view is shown following.

The Enhanced Monitoring metrics shown in the Process list view are organized as follows:

• RDS child processes – Shows a summary of the RDS processes that support the DB instance, for
example mysqld for MySQL DB instances. Process threads appear nested beneath the parent process.
Process threads show CPU utilization only as other metrics are the same for all threads for the process.
The console displays a maximum of 100 processes and threads. The results are a combination of
the top CPU consuming and memory consuming processes and threads. If there are more than 50
processes and more than 50 threads, the console displays the top 50 consumers in each category. This
display helps you identify which processes are having the greatest impact on performance.
• RDS processes – Shows a summary of the resources used by the RDS management agent, diagnostics
monitoring processes, and other AWS processes that are required to support RDS DB instances.
• OS processes – Shows a summary of the kernel and system processes, which generally have minimal
impact on performance.


The items listed for each process are:

• VIRT – Displays the virtual size of the process.


• RES – Displays the actual physical memory being used by the process.
• CPU% – Displays the percentage of the total CPU bandwidth being used by the process.
• MEM% – Displays the percentage of the total memory being used by the process.

The monitoring data that is shown in the RDS console is retrieved from Amazon CloudWatch Logs.
You can also retrieve the metrics for a DB instance as a log stream from CloudWatch Logs. For more
information, see Viewing OS metrics using CloudWatch Logs (p. 805).

Enhanced Monitoring metrics are not returned during the following:

• A failover of the DB instance.


• Changing the instance class of the DB instance (scale compute).

Enhanced Monitoring metrics are returned during a reboot of a DB instance because only the database
engine is rebooted. Metrics for the operating system are still reported.

Viewing OS metrics using CloudWatch Logs


After you have enabled Enhanced Monitoring for your DB instance or Multi-AZ DB cluster, you can view
the metrics for it using CloudWatch Logs, with each log stream representing a single DB instance or DB
cluster being monitored. The log stream identifier is the resource identifier (DbiResourceId) for the DB
instance or DB cluster.
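
If you need to look up the resource identifier for an instance, one way is the AWS CLI; mydbinstance is a placeholder DB instance identifier.

aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query 'DBInstances[0].DbiResourceId' \
    --output text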

To view Enhanced Monitoring log data

1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.


2. If necessary, choose the AWS Region that your DB instance or Multi-AZ DB cluster is in. For more
information, see Regions and endpoints in the Amazon Web Services General Reference.
3. Choose Logs in the navigation pane.
4. Choose RDSOSMetrics from the list of log groups.

In a Multi-AZ DB instance deployment, log files with -secondary appended to the name are for the
Multi-AZ standby replica.

5. Choose the log stream that you want to view from the list of log streams.
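
You can also read the same log stream from the AWS CLI rather than the console. The following sketch assumes that db-ABCDEFGHIJKLMNOP is the resource identifier of the monitored DB instance; replace it with your own value.

aws logs get-log-events \
    --log-group-name RDSOSMetrics \
    --log-stream-name db-ABCDEFGHIJKLMNOP \
    --limit 5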


Metrics reference for Amazon RDS


In this reference, you can find descriptions of Amazon RDS metrics for Amazon CloudWatch,
Performance Insights, and Enhanced Monitoring.

Topics
• Amazon CloudWatch metrics for Amazon RDS (p. 806)
• Amazon CloudWatch dimensions for Amazon RDS (p. 813)
• Amazon CloudWatch metrics for Performance Insights (p. 813)
• Performance Insights counter metrics (p. 814)
• SQL statistics for Performance Insights (p. 830)
• OS metrics in Enhanced Monitoring (p. 837)

Amazon CloudWatch metrics for Amazon RDS


Amazon RDS publishes metrics to Amazon CloudWatch in the AWS/RDS and AWS/Usage namespaces.

Topics
• Amazon CloudWatch instance-level metrics for Amazon RDS (p. 806)
• Amazon CloudWatch usage metrics for Amazon RDS (p. 812)

Amazon CloudWatch instance-level metrics for Amazon RDS


The AWS/RDS namespace in Amazon CloudWatch includes the following instance-level metrics.
Note
The Amazon RDS console might display metrics in units that are different from the units sent
to Amazon CloudWatch. For example, the Amazon RDS console might display a metric in
megabytes (MB), while the metric is sent to Amazon CloudWatch in bytes.

For each metric, the following list shows the console name, a description, the engines the metric applies to (where listed), and the units.

• BinLogDiskUsage (Binary Log Disk Usage (MB)) – The amount of disk space occupied by binary logs. If automatic backups are enabled for MySQL and MariaDB instances, including read replicas, binary logs are created. Applies to: MariaDB, MySQL. Units: Bytes.
• BurstBalance (Burst Balance (Percent)) – The percent of General Purpose SSD (gp2) burst-bucket I/O credits available. Applies to: All. Units: Percent.
• CheckpointLag (Checkpoint Lag (Seconds)) – The amount of time since the most recent checkpoint. Units: Seconds.
• ConnectionAttempts (Connection Attempts (Count)) – The number of attempts to connect to an instance, whether successful or not. Units: Count.
• CPUUtilization (CPU Utilization (Percent)) – The percentage of CPU utilization. Applies to: All. Units: Percentage.
• CPUCreditUsage (CPU Credit Usage (Count)) – (T2 instances) The number of CPU credits spent by the instance for CPU utilization. One CPU credit equals one vCPU running at 100 percent utilization for one minute or an equivalent combination of vCPUs, utilization, and time. For example, you might have one vCPU running at 50 percent utilization for two minutes or two vCPUs running at 25 percent utilization for two minutes. CPU credit metrics are available at a five-minute frequency only. If you specify a period greater than five minutes, use the Sum statistic instead of the Average statistic. Units: Credits (vCPU-minutes).
• CPUCreditBalance (CPU Credit Balance (Count)) – (T2 instances) The number of earned CPU credits that an instance has accrued since it was launched or started. For T2 Standard, the CPUCreditBalance also includes the number of launch credits that have been accrued. Credits are accrued in the credit balance after they are earned, and removed from the credit balance when they are spent. The credit balance has a maximum limit, determined by the instance size. After the limit is reached, any new credits that are earned are discarded. For T2 Standard, launch credits don't count towards the limit. The credits in the CPUCreditBalance are available for the instance to spend to burst beyond its baseline CPU utilization. When an instance is running, credits in the CPUCreditBalance don't expire. When the instance stops, the CPUCreditBalance does not persist, and all accrued credits are lost. CPU credit metrics are available at a five-minute frequency only. Launch credits work the same way in Amazon RDS as they do in Amazon EC2. For more information, see Launch credits in the Amazon Elastic Compute Cloud User Guide for Linux Instances. Units: Credits (vCPU-minutes).
• DatabaseConnections (DB Connections (Count)) – The number of client network connections to the database instance. The number of database sessions can be higher than the metric value because the metric value doesn't include the following: sessions that no longer have a network connection but which the database hasn't cleaned up; sessions created by the database engine for its own purposes; sessions created by the database engine's parallel execution capabilities; sessions created by the database engine job scheduler; and Amazon RDS connections. Applies to: All. Units: Count.
• DiskQueueDepth (Queue Depth (Count)) – The number of outstanding I/Os (read/write requests) waiting to access the disk. Applies to: All. Units: Count.
• EBSByteBalance% (EBS Byte Balance (Percent)) – The percentage of throughput credits remaining in the burst bucket of your RDS database. This metric is available for basic monitoring only. The metric value is based on the throughput and IOPS of all volumes, including the root volume, rather than on only those volumes containing database files. To find the instance sizes that support this metric, see the instance sizes with an asterisk (*) in the EBS optimized by default table in the Amazon EC2 User Guide for Linux Instances. The Sum statistic is not applicable to this metric. Applies to: All. Units: Percentage.
• EBSIOBalance% (EBS IO Balance (Percent)) – The percentage of I/O credits remaining in the burst bucket of your RDS database. This metric is available for basic monitoring only. The metric value is based on the throughput and IOPS of all volumes, including the root volume, rather than on only those volumes containing database files. To find the instance sizes that support this metric, see the instance sizes with an asterisk (*) in the EBS optimized by default table in the Amazon EC2 User Guide for Linux Instances. The Sum statistic is not applicable to this metric. This metric is different from BurstBalance. To learn how to use this metric, see Improving application performance and reducing costs with Amazon EBS-Optimized Instance burst capability. Applies to: All. Units: Percentage.
• FailedSQLServerAgentJobsCount (Failed SQL Server Agent Jobs Count (Count/Minute)) – The number of failed Microsoft SQL Server Agent jobs during the last minute. Applies to: Microsoft SQL Server. Units: Count per minute.
• FreeableMemory (Freeable Memory (MB)) – The amount of available random access memory. For MariaDB, MySQL, Oracle, and PostgreSQL DB instances, this metric reports the value of the MemAvailable field of /proc/meminfo. Applies to: All. Units: Bytes.
• FreeStorageSpace (Free Storage Space (MB)) – The amount of available storage space. Applies to: All. Units: Bytes.
• MaximumUsedTransactionIDs (Maximum Used Transaction IDs (Count)) – The maximum transaction IDs that have been used. Applies to: PostgreSQL. Units: Count.
• NetworkReceiveThroughput (Network Receive Throughput (MB/Second)) – The incoming (receive) network traffic on the DB instance, including both customer database traffic and Amazon RDS traffic used for monitoring and replication. Applies to: All. Units: Bytes per second.
• NetworkTransmitThroughput (Network Transmit Throughput (MB/Second)) – The outgoing (transmit) network traffic on the DB instance, including both customer database traffic and Amazon RDS traffic used for monitoring and replication. Applies to: All. Units: Bytes per second.
• OldestReplicationSlotLag (Oldest Replication Slot Lag (MB)) – The lagging size of the replica lagging the most in terms of write-ahead log (WAL) data received. Applies to: PostgreSQL. Units: Bytes.
• ReadIOPS (Read IOPS (Count/Second)) – The average number of disk read I/O operations per second. Applies to: All. Units: Count per second.
• ReadLatency (Read Latency (Seconds)) – The average amount of time taken per disk I/O operation. Applies to: All. Units: Seconds.
• ReadThroughput (Read Throughput (MB/Second)) – The average number of bytes read from disk per second. Applies to: All. Units: Bytes per second.
• ReplicaLag (Replica Lag (Seconds)) – For read replica configurations, the amount of time a read replica DB instance lags behind the source DB instance. Applies to MariaDB, Microsoft SQL Server, MySQL, Oracle, and PostgreSQL read replicas. For Multi-AZ DB clusters, the difference in time between the latest transaction on the writer DB instance and the latest applied transaction on a reader DB instance. Units: Seconds.
• ReplicationSlotDiskUsage (Replica Slot Disk Usage (MB)) – The disk space used by replication slot files. Applies to: PostgreSQL. Units: Bytes.
• SwapUsage (Swap Usage (MB)) – The amount of swap space used on the DB instance. Applies to: MariaDB, MySQL, Oracle, PostgreSQL. Units: Bytes.
• TransactionLogsDiskUsage (Transaction Logs Disk Usage (MB)) – The disk space used by transaction logs. Applies to: PostgreSQL. Units: Bytes.
• TransactionLogsGeneration (Transaction Logs Generation (MB/Second)) – The size of transaction logs generated per second. Applies to: PostgreSQL. Units: Bytes per second.
• WriteIOPS (Write IOPS (Count/Second)) – The average number of disk write I/O operations per second. Applies to: All. Units: Count per second.
• WriteLatency (Write Latency (Seconds)) – The average amount of time taken per disk I/O operation. Applies to: All. Units: Seconds.
• WriteThroughput (Write Throughput (MB/Second)) – The average number of bytes written to disk per second. Applies to: All. Units: Bytes per second.

Amazon CloudWatch usage metrics for Amazon RDS


The AWS/Usage namespace in Amazon CloudWatch includes account-level usage metrics for your
Amazon RDS service quotas. CloudWatch collects usage metrics automatically for all AWS Regions.

For more information, see CloudWatch usage metrics in the Amazon CloudWatch User Guide. For more
information about quotas, see Quotas and constraints for Amazon RDS (p. 2720) and Requesting a quota
increase in the Service Quotas User Guide.

For each metric, the description is followed by the units*.

• AllocatedStorage – The total storage for all DB instances. The sum excludes temporary migration instances. Units: Gigabytes.
• DBClusterParameterGroups – The number of DB cluster parameter groups in your AWS account. The count excludes default parameter groups. Units: Count.
• DBClusters – The number of Amazon Aurora DB clusters in your AWS account. Units: Count.
• DBInstances – The number of DB instances in your AWS account. Units: Count.
• DBParameterGroups – The number of DB parameter groups in your AWS account. The count excludes the default DB parameter groups. Units: Count.
• DBSecurityGroups – The number of security groups in your AWS account. The count excludes the default security group and the default VPC security group. Units: Count.
• DBSubnetGroups – The number of DB subnet groups in your AWS account. The count excludes the default subnet group. Units: Count.
• ManualClusterSnapshots – The number of manually created DB cluster snapshots in your AWS account. The count excludes invalid snapshots. Units: Count.
• ManualSnapshots – The number of manually created DB snapshots in your AWS account. The count excludes invalid snapshots. Units: Count.
• OptionGroups – The number of option groups in your AWS account. The count excludes the default option groups. Units: Count.
• ReservedDBInstances – The number of reserved DB instances in your AWS account. The count excludes retired or declined instances. Units: Count.


* Amazon RDS doesn't publish units for usage metrics to CloudWatch. The units only appear in the
documentation.

Amazon CloudWatch dimensions for Amazon RDS


You can filter Amazon RDS metrics data by using any dimension in the following table.

• DBInstanceIdentifier – Filters the requested data for a specific DB instance.
• DatabaseClass – Filters the requested data for all instances in a database class. For example, you can aggregate metrics for all instances that belong to the database class db.r5.large.
• EngineName – Filters the requested data for the identified engine name only. For example, you can aggregate metrics for all instances that have the engine name postgres.
• SourceRegion – Filters the requested data for the specified Region only. For example, you can aggregate metrics for all DB instances in the us-east-1 Region.

Amazon CloudWatch metrics for Performance Insights
Performance Insights automatically publishes metrics to Amazon CloudWatch. The same data can
be queried from Performance Insights, but having the metrics in CloudWatch makes it easy to add
CloudWatch alarms. It also makes it easy to add the metrics to existing CloudWatch Dashboards.

• DBLoad – The number of active sessions for the DB engine. Typically, you want the data for the average number of active sessions. In Performance Insights, this data is queried as db.load.avg.
• DBLoadCPU – The number of active sessions where the wait event type is CPU. In Performance Insights, this data is queried as db.load.avg, filtered by the wait event type CPU.
• DBLoadNonCPU – The number of active sessions where the wait event type is not CPU.

Note
These metrics are published to CloudWatch only if there is load on the DB instance.

You can examine these metrics using the CloudWatch console, the AWS CLI, or the CloudWatch API.

For example, you can get the statistics for the DBLoad metric by running the get-metric-statistics
command.

aws cloudwatch get-metric-statistics \
    --region us-west-2 \
    --namespace AWS/RDS \
    --metric-name DBLoad \
    --period 60 \
    --statistics Average \
    --start-time 1532035185 \
    --end-time 1532036185 \
    --dimensions Name=DBInstanceIdentifier,Value=db-loadtest-0

This example generates output similar to the following.

{
"Datapoints": [
{
"Timestamp": "2021-07-19T21:30:00Z",
"Unit": "None",
"Average": 2.1
},
{
"Timestamp": "2021-07-19T21:34:00Z",
"Unit": "None",
"Average": 1.7
},
{
"Timestamp": "2021-07-19T21:35:00Z",
"Unit": "None",
"Average": 2.8
},
{
"Timestamp": "2021-07-19T21:31:00Z",
"Unit": "None",
"Average": 1.5
},
{
"Timestamp": "2021-07-19T21:32:00Z",
"Unit": "None",
"Average": 1.8
},
{
"Timestamp": "2021-07-19T21:29:00Z",
"Unit": "None",
"Average": 3.0
},
{
"Timestamp": "2021-07-19T21:33:00Z",
"Unit": "None",
"Average": 2.4
}
],
"Label": "DBLoad"
}

For more information about CloudWatch, see What is Amazon CloudWatch? in the Amazon CloudWatch
User Guide.

Performance Insights counter metrics


Counter metrics are operating system and database performance metrics in the Performance Insights
dashboard. To help identify and analyze performance problems, you can correlate counter metrics
with DB load. You can add a statistic function to the metric to get the metric values. For example, the
supported functions for os.memory.active metric are .avg, .min, .max, .sum, and .sample_count.


The counter metrics are collected one time each minute. The OS metrics collection depends on whether
Enhanced Monitoring is turned on or off. If Enhanced Monitoring is turned off, the OS metrics are
collected one time each minute. If Enhanced Monitoring is turned on, the OS metrics are collected
for the selected time period. For more information about turning Enhanced Monitoring on or off, see
Turning Enhanced Monitoring on and off (p. 799).

Topics
• Performance Insights operating system counters (p. 815)
• Performance Insights counters for Amazon RDS for MariaDB and MySQL (p. 820)
• Performance Insights counters for Amazon RDS for Microsoft SQL Server (p. 825)
• Performance Insights counters for Amazon RDS for Oracle (p. 826)
• Performance Insights counters for Amazon RDS for PostgreSQL (p. 828)

Performance Insights operating system counters


The following operating system counters, which are prefixed with os, are available with Performance
Insights for all RDS engines except RDS for SQL Server.

You can use the ListAvailableResourceMetrics API to get the list of available counter metrics for your
DB instance. For more information, see ListAvailableResourceMetrics in the Amazon RDS Performance
Insights API Reference.
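
As a hedged illustration, a call to that API from the AWS CLI could look like the following sketch; the resource identifier and the metric type filters are placeholders that you would adjust for your instance.

aws pi list-available-resource-metrics \
    --service-type RDS \
    --identifier db-ABCDEFGHIJKLMNOP \
    --metric-types "os" "db"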

Each entry lists the counter name, its type, the Performance Insights metric, and a description.

• Active (Memory) – os.memory.active – The amount of assigned memory, in kilobytes.
• Buffers (Memory) – os.memory.buffers – The amount of memory used for buffering I/O requests prior to writing to the storage device, in kilobytes.
• Cached (Memory) – os.memory.cached – The amount of memory used for caching file system–based I/O, in kilobytes.
• DB Cache (Memory) – os.memory.db.cache – The amount of memory used for page cache by the database process, including tmpfs (shmem), in bytes.
• DB Resident Set Size (Memory) – os.memory.db.residentSetSize – The amount of memory used for anonymous and swap cache by the database process, not including tmpfs (shmem), in bytes.
• DB Swap (Memory) – os.memory.db.swap – The amount of memory used for swap by the database process, in bytes.
• Dirty (Memory) – os.memory.dirty – The amount of memory pages in RAM that have been modified but not written to their related data block in storage, in kilobytes.
• Free (Memory) – os.memory.free – The amount of unassigned memory, in kilobytes.
• Huge Pages Free (Memory) – os.memory.hugePagesFree – The number of free huge pages. Huge pages are a feature of the Linux kernel.
• Huge Pages Rsvd (Memory) – os.memory.hugePagesRsvd – The number of committed huge pages.
• Huge Pages Size (Memory) – os.memory.hugePagesSize – The size for each huge pages unit, in kilobytes.
• Huge Pages Surp (Memory) – os.memory.hugePagesSurp – The number of available surplus huge pages over the total.
• Huge Pages Total (Memory) – os.memory.hugePagesTotal – The total number of huge pages.
• Inactive (Memory) – os.memory.inactive – The amount of least-frequently used memory pages, in kilobytes.
• Mapped (Memory) – os.memory.mapped – The total amount of file-system contents that is memory mapped inside a process address space, in kilobytes.
• Out of Memory Kill Count (Memory) – os.memory.outOfMemoryKillCount – The number of OOM kills that happened over the last collection interval.
• Page Tables (Memory) – os.memory.pageTables – The amount of memory used by page tables, in kilobytes.
• Slab (Memory) – os.memory.slab – The amount of reusable kernel data structures, in kilobytes.
• Total (Memory) – os.memory.total – The total amount of memory, in kilobytes.
• Writeback (Memory) – os.memory.writeback – The amount of dirty pages in RAM that are still being written to the backing storage, in kilobytes.
• Guest (Cpu Utilization) – os.cpuUtilization.guest – The percentage of CPU in use by guest programs.
• Idle (Cpu Utilization) – os.cpuUtilization.idle – The percentage of CPU that is idle.
• Irq (Cpu Utilization) – os.cpuUtilization.irq – The percentage of CPU in use by software interrupts.
• Nice (Cpu Utilization) – os.cpuUtilization.nice – The percentage of CPU in use by programs running at lowest priority.
• Steal (Cpu Utilization) – os.cpuUtilization.steal – The percentage of CPU in use by other virtual machines.
• System (Cpu Utilization) – os.cpuUtilization.system – The percentage of CPU in use by the kernel.
• Total (Cpu Utilization) – os.cpuUtilization.total – The total percentage of the CPU in use. This value includes the nice value.
• User (Cpu Utilization) – os.cpuUtilization.user – The percentage of CPU in use by user programs.
• Wait (Cpu Utilization) – os.cpuUtilization.wait – The percentage of CPU unused while waiting for I/O access.
• Avg Queue Len (Disk IO) – os.diskIO.<devicename>.avgQueueLen – The number of requests waiting in the I/O device's queue.
• Avg Req Sz (Disk IO) – os.diskIO.<devicename>.avgReqSz – The average request size, in kilobytes.
• Await (Disk IO) – os.diskIO.<devicename>.await – The number of milliseconds required to respond to requests, including queue time and service time.
• Read IOs PS (Disk IO) – os.diskIO.<devicename>.readIOsPS – The number of read operations per second.
• Read KB (Disk IO) – os.diskIO.<devicename>.readKb – The total number of kilobytes read.
• Read KB PS (Disk IO) – os.diskIO.<devicename>.readKbPS – The number of kilobytes read per second.
• Rrqm PS (Disk IO) – os.diskIO.<devicename>.rrqmPS – The number of merged read requests queued per second.
• TPS (Disk IO) – os.diskIO.<devicename>.tps – The number of I/O transactions per second.
• Util (Disk IO) – os.diskIO.<devicename>.util – The percentage of CPU time during which requests were issued.
• Write IOs PS (Disk IO) – os.diskIO.<devicename>.writeIOsPS – The number of write operations per second.
• Write KB (Disk IO) – os.diskIO.<devicename>.writeKb – The total number of kilobytes written.
• Write KB PS (Disk IO) – os.diskIO.<devicename>.writeKbPS – The number of kilobytes written per second.
• Wrqm PS (Disk IO) – os.diskIO.<devicename>.wrqmPS – The number of merged write requests queued per second.
• Blocked (Tasks) – os.tasks.blocked – The number of tasks that are blocked.
• Running (Tasks) – os.tasks.running – The number of tasks that are running.
• Sleeping (Tasks) – os.tasks.sleeping – The number of tasks that are sleeping.
• Stopped (Tasks) – os.tasks.stopped – The number of tasks that are stopped.
• Total (Tasks) – os.tasks.total – The total number of tasks.
• Zombie (Tasks) – os.tasks.zombie – The number of child tasks that are inactive with an active parent task.
• One (Load Average Minute) – os.loadAverageMinute.one – The number of processes requesting CPU time over the last minute.
• Five (Load Average Minute) – os.loadAverageMinute.five – The number of processes requesting CPU time over the last 5 minutes.
• Fifteen (Load Average Minute) – os.loadAverageMinute.fifteen – The number of processes requesting CPU time over the last 15 minutes.
• Cached (Swap) – os.swap.cached – The amount of swap memory, in kilobytes, used as cache memory.
• Free (Swap) – os.swap.free – The amount of swap memory free, in kilobytes.
• In (Swap) – os.swap.in – The amount of memory, in kilobytes, swapped in from disk.
• Out (Swap) – os.swap.out – The amount of memory, in kilobytes, swapped out to disk.
• Total (Swap) – os.swap.total – The total amount of swap memory available, in kilobytes.
• Max Files (File Sys) – os.fileSys.maxFiles – The maximum number of files that can be created for the file system.
• Used Files (File Sys) – os.fileSys.usedFiles – The number of files in the file system.
• Used File Percent (File Sys) – os.fileSys.usedFilePercent – The percentage of available files in use.
• Used Percent (File Sys) – os.fileSys.usedPercent – The percentage of the file-system disk space in use.
• Used (File Sys) – os.fileSys.used – The amount of disk space used by files in the file system, in kilobytes.
• Total (File Sys) – os.fileSys.total – The total amount of disk space available for the file system, in kilobytes.
• Rx (Network) – os.network.rx – The number of bytes received per second.
• Tx (Network) – os.network.tx – The number of bytes uploaded per second.
• Acu Utilization (General) – os.general.acuUtilization – The percentage of current capacity out of the maximum configured capacity.
• Max Configured Acu (General) – os.general.maxConfiguredAcu – The maximum capacity configured by the user, in ACUs.
• Min Configured Acu (General) – os.general.minConfiguredAcu – The minimum capacity configured by the user, in ACUs.
• Num VCPUs (General) – os.general.numVCPUs – The number of virtual CPUs for the DB instance.
• Serverless Database Capacity (General) – os.general.serverlessDatabaseCapacity – The current capacity of the instance, in ACUs.

Performance Insights counters for Amazon RDS for MariaDB and MySQL
The following database counters are available with Performance Insights for Amazon RDS for MariaDB
and MySQL.

Topics
• Native counters for RDS for MariaDB and RDS for MySQL (p. 820)
• Non-native counters for Amazon RDS for MariaDB and MySQL (p. 823)

Native counters for RDS for MariaDB and RDS for MySQL
Native metrics are defined by the database engine and not by Amazon RDS. For definitions of these
native metrics, see Server status variables in the MySQL documentation.


Each entry lists the counter, its type, the unit, and the Performance Insights metric.

• Com_analyze (SQL) – Unit: Queries per second – Metric: db.SQL.Com_analyze
• Com_optimize (SQL) – Unit: Queries per second – Metric: db.SQL.Com_optimize
• Com_select (SQL) – Unit: Queries per second – Metric: db.SQL.Com_select
• Connections (SQL) – Unit: The number of connection attempts per minute (successful or not) to the MySQL server – Metric: db.Users.Connections
• Innodb_rows_deleted (SQL) – Unit: Rows per second – Metric: db.SQL.Innodb_rows_deleted
• Innodb_rows_inserted (SQL) – Unit: Rows per second – Metric: db.SQL.Innodb_rows_inserted
• Innodb_rows_read (SQL) – Unit: Rows per second – Metric: db.SQL.Innodb_rows_read
• Innodb_rows_updated (SQL) – Unit: Rows per second – Metric: db.SQL.Innodb_rows_updated
• Select_full_join (SQL) – Unit: Queries per second – Metric: db.SQL.Select_full_join
• Select_full_range_join (SQL) – Unit: Queries per second – Metric: db.SQL.Select_full_range_join
• Select_range (SQL) – Unit: Queries per second – Metric: db.SQL.Select_range
• Select_range_check (SQL) – Unit: Queries per second – Metric: db.SQL.Select_range_check
• Select_scan (SQL) – Unit: Queries per second – Metric: db.SQL.Select_scan
• Slow_queries (SQL) – Unit: Queries per second – Metric: db.SQL.Slow_queries
• Sort_merge_passes (SQL) – Unit: Queries per second – Metric: db.SQL.Sort_merge_passes
• Sort_range (SQL) – Unit: Queries per second – Metric: db.SQL.Sort_range
• Sort_rows (SQL) – Unit: Queries per second – Metric: db.SQL.Sort_rows
• Sort_scan (SQL) – Unit: Queries per second – Metric: db.SQL.Sort_scan
• Questions (SQL) – Unit: Queries per second – Metric: db.SQL.Questions
• Innodb_row_lock_time (Locks) – Unit: Milliseconds (average) – Metric: db.Locks.Innodb_row_lock_time
• Table_locks_immediate (Locks) – Unit: Requests per second – Metric: db.Locks.Table_locks_immediate
• Table_locks_waited (Locks) – Unit: Requests per second – Metric: db.Locks.Table_locks_waited
• Aborted_clients (Users) – Unit: Connections – Metric: db.Users.Aborted_clients
• Aborted_connects (Users) – Unit: Connections – Metric: db.Users.Aborted_connects
• Threads_created (Users) – Unit: Connections – Metric: db.Users.Threads_created
• Threads_running (Users) – Unit: Connections – Metric: db.Users.Threads_running
• Innodb_data_writes (I/O) – Unit: Operations per second – Metric: db.IO.Innodb_data_writes
• Innodb_dblwr_writes (I/O) – Unit: Operations per second – Metric: db.IO.Innodb_dblwr_writes
• Innodb_log_write_requests (I/O) – Unit: Operations per second – Metric: db.IO.Innodb_log_write_requests
• Innodb_log_writes (I/O) – Unit: Operations per second – Metric: db.IO.Innodb_log_writes
• Innodb_pages_written (I/O) – Unit: Pages per second – Metric: db.IO.Innodb_pages_written
• Created_tmp_disk_tables (Temp) – Unit: Tables per second – Metric: db.Temp.Created_tmp_disk_tables
• Created_tmp_tables (Temp) – Unit: Tables per second – Metric: db.Temp.Created_tmp_tables
• Innodb_buffer_pool_pages_data (Cache) – Unit: Pages – Metric: db.Cache.Innodb_buffer_pool_pages_data
• Innodb_buffer_pool_pages_total (Cache) – Unit: Pages – Metric: db.Cache.Innodb_buffer_pool_pages_total
• Innodb_buffer_pool_read_requests (Cache) – Unit: Pages per second – Metric: db.Cache.Innodb_buffer_pool_read_requests
• Innodb_buffer_pool_reads (Cache) – Unit: Pages per second – Metric: db.Cache.Innodb_buffer_pool_reads
• Opened_tables (Cache) – Unit: Tables – Metric: db.Cache.Opened_tables
• Opened_table_definitions (Cache) – Unit: Tables – Metric: db.Cache.Opened_table_definitions
• Qcache_hits (Cache) – Unit: Queries – Metric: db.Cache.Qcache_hits

822
Amazon Relational Database Service User Guide
Counter metrics for Performance Insights

Non-native counters for Amazon RDS for MariaDB and MySQL


Non-native counter metrics are counters defined by Amazon RDS. A non-native metric can be a metric
that you get with a specific query. A non-native metric also can be a derived metric, where two or more
native counters are used in calculations for ratios, hit rates, or latencies.
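For example, here is a minimal sketch of how a derived counter such as the buffer pool hit rate can be computed from the same native counters, assuming MySQL 5.7 or later, where the status counters are exposed in performance_schema.global_status. This is only an illustration, not the exact query that Performance Insights runs.

-- Derive the InnoDB buffer pool hit rate from two native status counters.
SELECT 100 * req.VARIABLE_VALUE / (req.VARIABLE_VALUE + rd.VARIABLE_VALUE)
       AS innodb_buffer_pool_hit_rate
FROM performance_schema.global_status AS req
CROSS JOIN performance_schema.global_status AS rd
WHERE req.VARIABLE_NAME = 'Innodb_buffer_pool_read_requests'
  AND rd.VARIABLE_NAME = 'Innodb_buffer_pool_reads';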

Counter | Type | Metric | Description | Definition
innodb_buffer_pool_hits | Cache | db.Cache.innoDB_buffer_pool_hits | The number of reads that InnoDB could satisfy from the buffer pool. | innodb_buffer_pool_read_requests - innodb_buffer_pool_reads
innodb_buffer_pool_hit_rate | Cache | db.Cache.innoDB_buffer_pool_hit_rate | The percentage of reads that InnoDB could satisfy from the buffer pool. | 100 * innodb_buffer_pool_read_requests / (innodb_buffer_pool_read_requests + innodb_buffer_pool_reads)
innodb_buffer_pool_usage | Cache | db.Cache.innoDB_buffer_pool_usage | The percentage of the InnoDB buffer pool that contains data (pages). Note: When using compressed tables, this value can vary. For more information, see the information about Innodb_buffer_pool_pages_data and Innodb_buffer_pool_pages_total in Server status variables in the MySQL documentation. | Innodb_buffer_pool_pages_data / Innodb_buffer_pool_pages_total * 100.0
query_cache_hit_rate | Cache | db.Cache.query_cache_hit_rate | MySQL result set cache (query cache) hit ratio. | Qcache_hits / (QCache_hits + Com_select) * 100
innodb_datafile_writes_to_disk | I/O | db.IO.innoDB_datafile_writes_to_disk | The number of InnoDB data file writes to disk, excluding double write and redo logging write operations. | Innodb_data_writes - Innodb_log_writes - Innodb_dblwr_writes
innodb_rows_changed | SQL | db.SQL.innodb_rows_changed | The total InnoDB row operations. | db.SQL.Innodb_rows_inserted + db.SQL.Innodb_rows_deleted + db.SQL.Innodb_rows_updated
active_transactions | Transactions | db.Transactions.active_transactions | The total active transactions. | SELECT COUNT(1) AS active_transactions FROM INFORMATION_SCHEMA.INNODB_TRX
trx_rseg_history_len | Transactions | db.Transactions.trx_rseg_history_len | A list of the undo log pages for committed transactions that is maintained by the InnoDB transaction system to implement multi-version concurrency control. For more information about undo log records, see https://dev.mysql.com/doc/refman/8.0/en/innodb-multi-versioning.html in the MySQL documentation. | SELECT COUNT AS trx_rseg_history_len FROM INFORMATION_SCHEMA.INNODB_METRICS WHERE NAME='trx_rseg_history_len'
innodb_deadlocks | Locks | db.Locks.innodb_deadlocks | The total number of deadlocks. | SELECT COUNT AS innodb_deadlocks FROM INFORMATION_SCHEMA.INNODB_METRICS WHERE NAME='lock_deadlocks'
innodb_lock_timeouts | Locks | db.Locks.innodb_lock_timeouts | The total number of locks that timed out. | SELECT COUNT AS innodb_lock_timeouts FROM INFORMATION_SCHEMA.INNODB_METRICS WHERE NAME='lock_timeouts'
innodb_row_lock_waits | Locks | db.Locks.innodb_row_lock_waits | The total number of row locks that resulted in a wait. | SELECT COUNT AS innodb_row_lock_waits FROM INFORMATION_SCHEMA.INNODB_METRICS WHERE NAME='lock_row_lock_waits'

Performance Insights counters for Amazon RDS for Microsoft SQL Server
The following database counters are available with Performance Insights for RDS for Microsoft SQL Server.

Native counters for RDS for Microsoft SQL Server


Native metrics are defined by the database engine and not by Amazon RDS. You can find definitions for
these native metrics in Use SQL Server Objects in the Microsoft SQL Server documentation.

Counter | Type | Unit | Metric
Forwarded Records | Access Methods | Records per second | db.Access Methods.Forwarded Records
Page Splits | Access Methods | Splits per second | db.Access Methods.Page Splits
Buffer cache hit ratio | Buffer Manager | Ratio | db.Buffer Manager.Buffer cache hit ratio
Page life expectancy | Buffer Manager | Expectancy in seconds | db.Buffer Manager.Page life expectancy
Page lookups | Buffer Manager | Lookups per second | db.Buffer Manager.Page lookups
Page reads | Buffer Manager | Reads per second | db.Buffer Manager.Page reads
Page writes | Buffer Manager | Writes per second | db.Buffer Manager.Page writes
Active Transactions | Databases | Transactions | db.Databases.Active Transactions (_Total)
Log Bytes Flushed | Databases | Bytes flushed per second | db.Databases.Log Bytes Flushed (_Total)
Log Flush Waits | Databases | Waits per second | db.Databases.Log Flush Waits (_Total)
Log Flushes | Databases | Flushes per second | db.Databases.Log Flushes (_Total)
Write Transactions | Databases | Transactions per second | db.Databases.Write Transactions (_Total)
Processes blocked | General Statistics | Processes blocked | db.General Statistics.Processes blocked
User Connections | General Statistics | Connections | db.General Statistics.User Connections
Latch Waits | Latches | Waits per second | db.Latches.Latch Waits
Number of Deadlocks | Locks | Deadlocks per second | db.Locks.Number of Deadlocks (_Total)
Memory Grants Pending | Memory Manager | Memory grants | db.Memory Manager.Memory Grants Pending
Batch Requests | SQL Statistics | Requests per second | db.SQL Statistics.Batch Requests
SQL Compilations | SQL Statistics | Compilations per second | db.SQL Statistics.SQL Compilations
SQL Re-Compilations | SQL Statistics | Re-compilations per second | db.SQL Statistics.SQL Re-Compilations

Performance Insights counters for Amazon RDS for Oracle
The following database counters are available with Performance Insights for RDS for Oracle.

Native counters for RDS for Oracle
Native metrics are defined by the database engine and not by Amazon RDS. You can find definitions for these native metrics in Statistics Descriptions in the Oracle documentation.
Note
For the CPU used by this session counter metric, the unit has been transformed from the native centiseconds to active sessions to make the value easier to use. For example, the CPU value in the DB Load chart represents the demand for CPU. The counter metric CPU used by this session represents the amount of CPU used by Oracle sessions. You can compare the CPU value in the DB Load chart to the CPU used by this session counter metric. When the demand for CPU is higher than the CPU used, sessions are waiting for CPU time.

Counter | Type | Unit | Metric
CPU used by this session | User | Active sessions | db.User.CPU used by this session
SQL*Net roundtrips to/from client | User | Roundtrips per second | db.User.SQL*Net roundtrips to/from client
Bytes received via SQL*Net from client | User | Bytes per second | db.User.bytes received via SQL*Net from client
User commits | User | Commits per second | db.User.user commits
Logons cumulative | User | Logons per second | db.User.logons cumulative
User calls | User | Calls per second | db.User.user calls
Bytes sent via SQL*Net to client | User | Bytes per second | db.User.bytes sent via SQL*Net to client
User rollbacks | User | Rollbacks per second | db.User.user rollbacks
Redo size | Redo | Bytes per second | db.Redo.redo size
Parse count (total) | SQL | Parses per second | db.SQL.parse count (total)
Parse count (hard) | SQL | Parses per second | db.SQL.parse count (hard)
Table scan rows gotten | SQL | Rows per second | db.SQL.table scan rows gotten
Sorts (memory) | SQL | Sorts per second | db.SQL.sorts (memory)
Sorts (disk) | SQL | Sorts per second | db.SQL.sorts (disk)
Sorts (rows) | SQL | Sorts per second | db.SQL.sorts (rows)
Physical read bytes | Cache | Bytes per second | db.Cache.physical read bytes
DB block gets | Cache | Blocks per second | db.Cache.db block gets
DBWR checkpoints | Cache | Checkpoints per minute | db.Cache.DBWR checkpoints
Physical reads | Cache | Reads per second | db.Cache.physical reads
Consistent gets from cache | Cache | Gets per second | db.Cache.consistent gets from cache
DB block gets from cache | Cache | Gets per second | db.Cache.db block gets from cache
Consistent gets | Cache | Gets per second | db.Cache.consistent gets

Performance Insights counters for Amazon RDS for PostgreSQL


The following database counters are available with Performance Insights for Amazon RDS for
PostgreSQL.

Topics
• Native counters for Amazon RDS for PostgreSQL (p. 828)
• Non-native counters for Amazon RDS for PostgreSQL (p. 829)

Native counters for Amazon RDS for PostgreSQL


Native metrics are defined by the database engine and not by Amazon RDS. You can find definitions for
these native metrics in Viewing Statistics in the PostgreSQL documentation.

Counter | Type | Unit | Metric
blks_hit | Cache | Blocks per second | db.Cache.blks_hit
buffers_alloc | Cache | Blocks per second | db.Cache.buffers_alloc
buffers_checkpoint | Checkpoint | Blocks per second | db.Checkpoint.buffers_checkpoint
checkpoint_sync_time | Checkpoint | Milliseconds per checkpoint | db.Checkpoint.checkpoint_sync_time
checkpoint_write_time | Checkpoint | Milliseconds per checkpoint | db.Checkpoint.checkpoint_write_time
checkpoints_req | Checkpoint | Checkpoints per minute | db.Checkpoint.checkpoints_req
checkpoints_timed | Checkpoint | Checkpoints per minute | db.Checkpoint.checkpoints_timed
maxwritten_clean | Checkpoint | Bgwriter clean stops per minute | db.Checkpoint.maxwritten_clean
deadlocks | Concurrency | Deadlocks per minute | db.Concurrency.deadlocks
blk_read_time | I/O | Milliseconds | db.IO.blk_read_time
blks_read | I/O | Blocks per second | db.IO.blks_read
buffers_backend | I/O | Blocks per second | db.IO.buffers_backend
buffers_backend_fsync | I/O | Blocks per second | db.IO.buffers_backend_fsync
buffers_clean | I/O | Blocks per second | db.IO.buffers_clean
tup_deleted | SQL | Tuples per second | db.SQL.tup_deleted
tup_fetched | SQL | Tuples per second | db.SQL.tup_fetched
tup_inserted | SQL | Tuples per second | db.SQL.tup_inserted
tup_returned | SQL | Tuples per second | db.SQL.tup_returned
tup_updated | SQL | Tuples per second | db.SQL.tup_updated
temp_bytes | Temp | Bytes per second | db.Temp.temp_bytes
temp_files | Temp | Files per minute | db.Temp.temp_files
active_transactions | Transactions | Transactions | db.Transactions.active_transactions
blocked_transactions | Transactions | Transactions | db.Transactions.blocked_transactions
max_used_xact_ids | Transactions | Transactions | db.Transactions.max_used_xact_ids
xact_commit | Transactions | Commits per second | db.Transactions.xact_commit
xact_rollback | Transactions | Rollbacks per second | db.Transactions.xact_rollback
numbackends | User | Connections | db.User.numbackends
archived_count | Write-ahead log (WAL) | Files per minute | db.WAL.archived_count
archive_failed_count | WAL | Files per minute | db.WAL.archive_failed_count

Non-native counters for Amazon RDS for PostgreSQL


Non-native counter metrics are counters defined by Amazon RDS. A non-native metric can be a metric
that you get with a specific query. A non-native metric also can be a derived metric, where two or more
native counters are used in calculations for ratios, hit rates, or latencies.

Counter | Type | Metric | Description | Definition
checkpoint_sync_latency | Checkpoint | db.Checkpoint.checkpoint_sync_latency | The total amount of time that has been spent in the portion of checkpoint processing where files are synchronized to disk. | checkpoint_sync_time / (checkpoints_timed + checkpoints_req)
checkpoint_write_latency | Checkpoint | db.Checkpoint.checkpoint_write_latency | The total amount of time that has been spent in the portion of checkpoint processing where files are written to disk. | checkpoint_write_time / (checkpoints_timed + checkpoints_req)
read_latency | I/O | db.IO.read_latency | The time spent reading data file blocks by backends in this instance. | blk_read_time / blks_read

SQL statistics for Performance Insights


SQL statistics are performance-related metrics about SQL queries that are collected by Performance
Insights. Performance Insights gathers statistics for each second that a query is running and for each SQL
call. The SQL statistics are an average for the selected time range.

A SQL digest is a composite of all queries having a given pattern but not necessarily having the same literal values. The digest replaces literal values with a question mark, for example, SELECT * FROM emp WHERE lname = ?. This digest might consist of the following child queries:

SELECT * FROM emp WHERE lname = 'Sanchez'


SELECT * FROM emp WHERE lname = 'Olagappan'
SELECT * FROM emp WHERE lname = 'Wu'

All engines support SQL statistics for digest queries.

For information about the AWS Regions, DB engines, and instance classes that support this feature, see Amazon RDS DB
engine, Region, and instance class support for Performance Insights features (p. 725).

Topics
• SQL statistics for MariaDB and MySQL (p. 830)
• SQL statistics for Oracle (p. 832)
• SQL statistics for SQL Server (p. 834)
• SQL statistics for RDS PostgreSQL (p. 835)

SQL statistics for MariaDB and MySQL


MariaDB and MySQL collect SQL statistics only at the digest level. No statistics are shown at the
statement level.

Topics
• Digest statistics for MariaDB and MySQL (p. 830)
• Per-second statistics for MariaDB and MySQL (p. 831)
• Per-call statistics for MariaDB and MySQL (p. 831)

Digest statistics for MariaDB and MySQL


Performance Insights collects SQL digest statistics from the
events_statements_summary_by_digest table. The events_statements_summary_by_digest
table is managed by your database.
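As an illustration, a query of the following kind shows the raw digest data that the table holds. This is a sketch, not the exact query that Performance Insights uses.

-- Top digests by total wait time; Performance Schema timer values are in picoseconds.
SELECT DIGEST_TEXT, COUNT_STAR, SUM_TIMER_WAIT
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;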

The digest table doesn't have an eviction policy. When the table is full, the AWS Management Console
shows the following message:

Performance Insights is unable to collect SQL Digest statistics on new queries because the
table events_statements_summary_by_digest is full.
Please truncate events_statements_summary_by_digest table to clear the issue. Check the
User Guide for more details.

In this situation, MariaDB and MySQL don't track SQL queries. To address this issue, Performance Insights
automatically truncates the digest table when both of the following conditions are met:

• The table is full.


• Performance Insights manages the Performance Schema automatically.

For automatic management, the performance_schema parameter must be set to 0 and the
Source must not be set to user. If Performance Insights isn't managing the Performance Schema
automatically, see Turning on the Performance Schema for Performance Insights on Amazon RDS for
MariaDB or MySQL (p. 731).
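If Performance Insights isn't managing the Performance Schema and the digest table fills up, you can clear the table yourself, as the console message suggests. The following is a minimal sketch, assuming a database user with sufficient privileges on the performance_schema tables:

-- Resets the digest summary table so that statistics for new queries can be collected again.
TRUNCATE TABLE performance_schema.events_statements_summary_by_digest;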

In the AWS CLI, check the source of a parameter value by running the describe-db-parameters command.
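For example, the following sketch returns the value and source of the performance_schema parameter. The name my-db-parameter-group is a placeholder for the parameter group attached to your DB instance.

aws rds describe-db-parameters \
    --db-parameter-group-name my-db-parameter-group \
    --query "Parameters[?ParameterName=='performance_schema'].[ParameterName,ParameterValue,Source]"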

Per-second statistics for MariaDB and MySQL
The following SQL statistics are available for MariaDB and MySQL DB instances.

Metric | Unit
db.sql_tokenized.stats.count_star_per_sec | Calls per second
db.sql_tokenized.stats.sum_timer_wait_per_sec | Average active executions per second (AAE)
db.sql_tokenized.stats.sum_select_full_join_per_sec | Select full join per second
db.sql_tokenized.stats.sum_select_range_check_per_sec | Select range check per second
db.sql_tokenized.stats.sum_select_scan_per_sec | Select scan per second
db.sql_tokenized.stats.sum_sort_merge_passes_per_sec | Sort merge passes per second
db.sql_tokenized.stats.sum_sort_scan_per_sec | Sort scans per second
db.sql_tokenized.stats.sum_sort_range_per_sec | Sort ranges per second
db.sql_tokenized.stats.sum_sort_rows_per_sec | Sort rows per second
db.sql_tokenized.stats.sum_rows_affected_per_sec | Rows affected per second
db.sql_tokenized.stats.sum_rows_examined_per_sec | Rows examined per second
db.sql_tokenized.stats.sum_rows_sent_per_sec | Rows sent per second
db.sql_tokenized.stats.sum_created_tmp_disk_tables_per_sec | Created temporary disk tables per second
db.sql_tokenized.stats.sum_created_tmp_tables_per_sec | Created temporary tables per second
db.sql_tokenized.stats.sum_lock_time_per_sec | Lock time per second (in ms)

Per-call statistics for MariaDB and MySQL
The following metrics provide per-call statistics for a SQL statement.

Metric | Unit
db.sql_tokenized.stats.sum_timer_wait_per_call | Average latency per call (in ms)
db.sql_tokenized.stats.sum_select_full_join_per_call | Select full joins per call
db.sql_tokenized.stats.sum_select_range_check_per_call | Select range check per call
db.sql_tokenized.stats.sum_select_scan_per_call | Select scans per call
db.sql_tokenized.stats.sum_sort_merge_passes_per_call | Sort merge passes per call
db.sql_tokenized.stats.sum_sort_scan_per_call | Sort scans per call
db.sql_tokenized.stats.sum_sort_range_per_call | Sort ranges per call
db.sql_tokenized.stats.sum_sort_rows_per_call | Sort rows per call
db.sql_tokenized.stats.sum_rows_affected_per_call | Rows affected per call
db.sql_tokenized.stats.sum_rows_examined_per_call | Rows examined per call
db.sql_tokenized.stats.sum_rows_sent_per_call | Rows sent per call
db.sql_tokenized.stats.sum_created_tmp_disk_tables_per_call | Created temporary disk tables per call
db.sql_tokenized.stats.sum_created_tmp_tables_per_call | Created temporary tables per call
db.sql_tokenized.stats.sum_lock_time_per_call | Lock time per call (in ms)

SQL statistics for Oracle


Amazon RDS for Oracle collects SQL statistics both at the statement and digest level. At the statement
level, the ID column represents the value of V$SQL.SQL_ID. At the digest level, the ID column shows the
value of V$SQL.FORCE_MATCHING_SIGNATURE.

If the ID is 0 at the digest level, Oracle Database has determined that this statement is not suitable for
reuse. In this case, the child SQL statements could belong to different digests. However, the statements
are grouped together under the digest_text for the first SQL statement collected.
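As an illustration, the following sketch lists both identifiers as Oracle Database exposes them, assuming Oracle Database 12c or later and SELECT access on the V$SQL view. This is not how Performance Insights collects its data.

-- SQL_ID is the statement-level ID; FORCE_MATCHING_SIGNATURE is the digest-level ID.
SELECT sql_id, force_matching_signature, executions
FROM v$sql
WHERE force_matching_signature <> 0
FETCH FIRST 10 ROWS ONLY;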

Topics
• Per-second statistics for Oracle (p. 832)
• Per-call statistics for Oracle (p. 833)

Per-second statistics for Oracle
The following metrics provide per-second statistics for an Oracle SQL query.

Metric | Unit
db.sql.stats.executions_per_sec | Number of executions per second
db.sql.stats.elapsed_time_per_sec | Average active executions (AAE)
db.sql.stats.rows_processed_per_sec | Rows processed per second
db.sql.stats.buffer_gets_per_sec | Buffer gets per second
db.sql.stats.physical_read_requests_per_sec | Physical reads per second
db.sql.stats.physical_write_requests_per_sec | Physical writes per second
db.sql.stats.total_sharable_mem_per_sec | Total shareable memory per second (in bytes)
db.sql.stats.cpu_time_per_sec | CPU time per second (in ms)

The following metrics provide per-second statistics for an Oracle SQL digest query.

Metric | Unit
db.sql_tokenized.stats.executions_per_sec | Number of executions per second
db.sql_tokenized.stats.elapsed_time_per_sec | Average active executions (AAE)
db.sql_tokenized.stats.rows_processed_per_sec | Rows processed per second
db.sql_tokenized.stats.buffer_gets_per_sec | Buffer gets per second
db.sql_tokenized.stats.physical_read_requests_per_sec | Physical reads per second
db.sql_tokenized.stats.physical_write_requests_per_sec | Physical writes per second
db.sql_tokenized.stats.total_sharable_mem_per_sec | Total shareable memory per second (in bytes)
db.sql_tokenized.stats.cpu_time_per_sec | CPU time per second (in ms)

Per-call statistics for Oracle
The following metrics provide per-call statistics for an Oracle SQL statement.

Metric | Unit
db.sql.stats.elapsed_time_per_exec | Elapsed time per execution (in ms)
db.sql.stats.rows_processed_per_exec | Rows processed per execution
db.sql.stats.buffer_gets_per_exec | Buffer gets per execution
db.sql.stats.physical_read_requests_per_exec | Physical reads per execution
db.sql.stats.physical_write_requests_per_exec | Physical writes per execution
db.sql.stats.total_sharable_mem_per_exec | Total shareable memory per execution (in bytes)
db.sql.stats.cpu_time_per_exec | CPU time per execution (in ms)

The following metrics provide per-call statistics for an Oracle SQL digest query.

Metric | Unit
db.sql_tokenized.stats.elapsed_time_per_exec | Elapsed time per execution (in ms)
db.sql_tokenized.stats.rows_processed_per_exec | Rows processed per execution
db.sql_tokenized.stats.buffer_gets_per_exec | Buffer gets per execution
db.sql_tokenized.stats.physical_read_requests_per_exec | Physical reads per execution
db.sql_tokenized.stats.physical_write_requests_per_exec | Physical writes per execution
db.sql_tokenized.stats.total_sharable_mem_per_exec | Total shareable memory per execution (in bytes)
db.sql_tokenized.stats.cpu_time_per_exec | CPU time per execution (in ms)


SQL statistics for SQL Server


Amazon RDS for SQL Server collects SQL statistics both at the statement and digest level. At the
statement level, the ID column represents the value of sql_handle. At the digest level, the ID column
shows the value of query_hash.

SQL Server returns NULL values for query_hash for some statements, for example ALTER INDEX, CHECKPOINT, UPDATE STATISTICS, COMMIT TRANSACTION, FETCH NEXT FROM Cursor, some INSERT statements, SELECT @<variable>, conditional statements, and executable stored procedures. In these cases, the sql_handle value is displayed as the ID at the digest level for that statement.
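As an illustration, the following sketch shows both identifiers as SQL Server exposes them, assuming a login with VIEW SERVER STATE permission. This is not how Performance Insights collects its data.

-- sql_handle is the statement-level ID; query_hash is the digest-level ID.
SELECT TOP (10) sql_handle, query_hash, execution_count
FROM sys.dm_exec_query_stats
ORDER BY execution_count DESC;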

Topics
• Per-second statistics for SQL Server (p. 834)
• Per-call statistics for SQL Server (p. 834)

Per-second statistics for SQL Server
The following metrics provide per-second statistics for a SQL Server SQL query.

Metric | Unit
db.sql.stats.execution_count_per_sec | Number of executions per second
db.sql.stats.total_elapsed_time_per_sec | Total elapsed time per second
db.sql.stats.total_rows_per_sec | Total rows processed per second
db.sql.stats.total_logical_reads_per_sec | Total logical reads per second
db.sql.stats.total_logical_writes_per_sec | Total logical writes per second
db.sql.stats.total_physical_reads_per_sec | Total physical reads per second
db.sql.stats.total_worker_time_per_sec | Total CPU time (in ms)

The following metrics provide per-second statistics for a SQL Server SQL digest query.

Metric | Unit
db.sql_tokenized.stats.execution_count_per_sec | Number of executions per second
db.sql_tokenized.stats.total_elapsed_time_per_sec | Total elapsed time per second
db.sql_tokenized.stats.total_rows_per_sec | Total rows processed per second
db.sql_tokenized.stats.total_logical_reads_per_sec | Total logical reads per second
db.sql_tokenized.stats.total_logical_writes_per_sec | Total logical writes per second
db.sql_tokenized.stats.total_physical_reads_per_sec | Total physical reads per second
db.sql_tokenized.stats.total_worker_time_per_sec | Total CPU time (in ms)

Per-call statistics for SQL Server
The following metrics provide per-call statistics for a SQL Server SQL statement.

Metric | Unit
db.sql.stats.total_elapsed_time_per_call | Total elapsed time per execution
db.sql.stats.total_rows_per_call | Total rows processed per execution
db.sql.stats.total_logical_reads_per_call | Total logical reads per execution
db.sql.stats.total_logical_writes_per_call | Total logical writes per execution
db.sql.stats.total_physical_reads_per_call | Total physical reads per execution
db.sql.stats.total_worker_time_per_call | Total CPU time per execution (in ms)

The following metrics provide per-call statistics for a SQL Server SQL digest query.

Metric | Unit
db.sql_tokenized.stats.total_elapsed_time_per_call | Total elapsed time per execution
db.sql_tokenized.stats.total_rows_per_call | Total rows processed per execution
db.sql_tokenized.stats.total_logical_reads_per_call | Total logical reads per execution
db.sql_tokenized.stats.total_logical_writes_per_call | Total logical writes per execution
db.sql_tokenized.stats.total_physical_reads_per_call | Total physical reads per execution
db.sql_tokenized.stats.total_worker_time_per_call | Total CPU time per execution (in ms)

SQL statistics for RDS PostgreSQL


For each SQL call and for each second that a query runs, Performance Insights collects SQL statistics.
RDS for PostgreSQL collects SQL statistics only at the digest level. No statistics are shown at the
statement level.

Following, you can find information about digest-level statistics for RDS for PostgreSQL.

Topics
• Digest statistics for RDS PostgreSQL (p. 835)
• Per-second digest statistics for RDS PostgreSQL (p. 836)
• Per-call digest statistics for RDS PostgreSQL (p. 836)

Digest statistics for RDS PostgreSQL


To view SQL digest statistics, RDS PostgreSQL must load the pg_stat_statements library. For
PostgreSQL DB instances that are compatible with PostgreSQL 11 or later, the database loads this library
by default. For PostgreSQL DB instances that are compatible with PostgreSQL 10 or earlier, enable this
library manually. To enable it manually, add pg_stat_statements to shared_preload_libraries
in the DB parameter group associated with the DB instance. Then reboot your DB instance. For more
information, see Working with parameter groups (p. 347).
Note
Performance Insights can only collect statistics for queries in pg_stat_activity that aren't
truncated. By default, PostgreSQL databases truncate queries longer than 1,024 bytes. To
increase the query size, change the track_activity_query_size parameter in the DB

parameter group associated with your DB instance. When you change this parameter, a DB
instance reboot is required.
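For example, the following sketch makes both parameter changes with the AWS CLI. The parameter group name my-db-parameter-group and the 4096-byte query size are placeholders, and both parameters are static, so the changes take effect only after a reboot.

aws rds modify-db-parameter-group \
    --db-parameter-group-name my-db-parameter-group \
    --parameters "ParameterName=shared_preload_libraries,ParameterValue=pg_stat_statements,ApplyMethod=pending-reboot" \
                 "ParameterName=track_activity_query_size,ParameterValue=4096,ApplyMethod=pending-reboot"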

Per-second digest statistics for RDS PostgreSQL
The following SQL digest statistics are available for PostgreSQL DB instances.

Metric | Unit
db.sql_tokenized.stats.calls_per_sec | Calls per second
db.sql_tokenized.stats.rows_per_sec | Rows per second
db.sql_tokenized.stats.total_time_per_sec | Average active executions per second (AAE)
db.sql_tokenized.stats.shared_blks_hit_per_sec | Block hits per second
db.sql_tokenized.stats.shared_blks_read_per_sec | Block reads per second
db.sql_tokenized.stats.shared_blks_dirtied_per_sec | Blocks dirtied per second
db.sql_tokenized.stats.shared_blks_written_per_sec | Block writes per second
db.sql_tokenized.stats.local_blks_hit_per_sec | Local block hits per second
db.sql_tokenized.stats.local_blks_read_per_sec | Local block reads per second
db.sql_tokenized.stats.local_blks_dirtied_per_sec | Local block dirty per second
db.sql_tokenized.stats.local_blks_written_per_sec | Local block writes per second
db.sql_tokenized.stats.temp_blks_written_per_sec | Temporary writes per second
db.sql_tokenized.stats.temp_blks_read_per_sec | Temporary reads per second
db.sql_tokenized.stats.blk_read_time_per_sec | Average concurrent reads per second
db.sql_tokenized.stats.blk_write_time_per_sec | Average concurrent writes per second

Per-call digest statistics for RDS PostgreSQL
The following metrics provide per-call statistics for a SQL statement.

Metric | Unit
db.sql_tokenized.stats.rows_per_call | Rows per call
db.sql_tokenized.stats.avg_latency_per_call | Average latency per call (in ms)
db.sql_tokenized.stats.shared_blks_hit_per_call | Block hits per call
db.sql_tokenized.stats.shared_blks_read_per_call | Block reads per call
db.sql_tokenized.stats.shared_blks_written_per_call | Block writes per call
db.sql_tokenized.stats.shared_blks_dirtied_per_call | Blocks dirtied per call
db.sql_tokenized.stats.local_blks_hit_per_call | Local block hits per call
db.sql_tokenized.stats.local_blks_read_per_call | Local block reads per call
db.sql_tokenized.stats.local_blks_dirtied_per_call | Local block dirty per call
db.sql_tokenized.stats.local_blks_written_per_call | Local block writes per call
db.sql_tokenized.stats.temp_blks_written_per_call | Temporary block writes per call
db.sql_tokenized.stats.temp_blks_read_per_call | Temporary block reads per call
db.sql_tokenized.stats.blk_read_time_per_call | Read time per call (in ms)
db.sql_tokenized.stats.blk_write_time_per_call | Write time per call (in ms)

For more information about these metrics, see pg_stat_statements in the PostgreSQL documentation.
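As an illustration, the following sketch queries the underlying view directly. It assumes that the pg_stat_statements extension has been created in your database and uses the PostgreSQL 13 or later column name total_exec_time (earlier versions use total_time). This is not the exact query that Performance Insights runs.

-- queryid is the digest-level identifier; times are in milliseconds.
SELECT queryid, calls, rows, total_exec_time / NULLIF(calls, 0) AS avg_latency_ms
FROM pg_stat_statements
ORDER BY calls DESC
LIMIT 10;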

OS metrics in Enhanced Monitoring


Amazon RDS provides metrics in real time for the operating system (OS) that your DB instance runs on.
RDS delivers the metrics from Enhanced Monitoring to your Amazon CloudWatch Logs account. The
following tables list the OS metrics available using Amazon CloudWatch Logs.
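For example, the following sketch reads the most recent raw metrics record with the AWS CLI. RDSOSMetrics is the log group that Enhanced Monitoring writes to, and db-EXAMPLE_RESOURCE_ID is a placeholder for your DB instance resource ID.

aws logs get-log-events \
    --log-group-name RDSOSMetrics \
    --log-stream-name db-EXAMPLE_RESOURCE_ID \
    --limit 1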

Topics
• OS metrics for MariaDB, MySQL, Oracle, and PostgreSQL (p. 837)
• OS metrics for Microsoft SQL Server (p. 842)

OS metrics for MariaDB, MySQL, Oracle, and PostgreSQL

Group | Metric | Console name | Description
General | engine | Not applicable | The database engine for the DB instance.
General | instanceID | Not applicable | The DB instance identifier.
General | instanceResourceID | Not applicable | An immutable identifier for the DB instance that is unique to an AWS Region, also used as the log stream identifier.
General | numVCPUs | Not applicable | The number of virtual CPUs for the DB instance.
General | timestamp | Not applicable | The time at which the metrics were taken.
General | uptime | Not applicable | The amount of time that the DB instance has been active.
General | version | Not applicable | The version of the OS metrics' stream JSON format.
cpuUtilization | guest | CPU Guest | The percentage of CPU in use by guest programs.
cpuUtilization | idle | CPU Idle | The percentage of CPU that is idle.
cpuUtilization | irq | CPU IRQ | The percentage of CPU in use by software interrupts.
cpuUtilization | nice | CPU Nice | The percentage of CPU in use by programs running at lowest priority.
cpuUtilization | steal | CPU Steal | The percentage of CPU in use by other virtual machines.
cpuUtilization | system | CPU System | The percentage of CPU in use by the kernel.
cpuUtilization | total | CPU Total | The total percentage of the CPU in use. This value includes the nice value.
cpuUtilization | user | CPU User | The percentage of CPU in use by user programs.
cpuUtilization | wait | CPU Wait | The percentage of CPU unused while waiting for I/O access.
diskIO | avgQueueLen | Avg Queue Size | The number of requests waiting in the I/O device's queue.
diskIO | avgReqSz | Ave Request Size | The average request size, in kilobytes.
diskIO | await | Disk I/O Await | The number of milliseconds required to respond to requests, including queue time and service time.
diskIO | device | Not applicable | The identifier of the disk device in use.
diskIO | readIOsPS | Read IO/s | The number of read operations per second.
diskIO | readKb | Read Total | The total number of kilobytes read.
diskIO | readKbPS | Read Kb/s | The number of kilobytes read per second.
diskIO | readLatency | Read Latency | The elapsed time between the submission of a read I/O request and its completion, in milliseconds. This metric is only available for Amazon Aurora.
diskIO | readThroughput | Read Throughput | The amount of network throughput used by requests to the DB cluster, in bytes per second. This metric is only available for Amazon Aurora.
diskIO | rrqmPS | Rrqms | The number of merged read requests queued per second.
diskIO | tps | TPS | The number of I/O transactions per second.
diskIO | util | Disk I/O Util | The percentage of CPU time during which requests were issued.
diskIO | writeIOsPS | Write IO/s | The number of write operations per second.
diskIO | writeKb | Write Total | The total number of kilobytes written.
diskIO | writeKbPS | Write Kb/s | The number of kilobytes written per second.
diskIO | writeLatency | Write Latency | The average elapsed time between the submission of a write I/O request and its completion, in milliseconds. This metric is only available for Amazon Aurora.
diskIO | writeThroughput | Write Throughput | The amount of network throughput used by responses from the DB cluster, in bytes per second. This metric is only available for Amazon Aurora.
diskIO | wrqmPS | Wrqms | The number of merged write requests queued per second.
physicalDeviceIO | avgQueueLen | Physical Devices Avg Queue Size | The number of requests waiting in the I/O device's queue.
physicalDeviceIO | avgReqSz | Physical Devices Ave Request Size | The average request size, in kilobytes.
physicalDeviceIO | await | Physical Devices Disk I/O Await | The number of milliseconds required to respond to requests, including queue time and service time.
physicalDeviceIO | device | Not applicable | The identifier of the disk device in use.
physicalDeviceIO | readIOsPS | Physical Devices Read IO/s | The number of read operations per second.
physicalDeviceIO | readKb | Physical Devices Read Total | The total number of kilobytes read.
physicalDeviceIO | readKbPS | Physical Devices Read Kb/s | The number of kilobytes read per second.
physicalDeviceIO | rrqmPS | Physical Devices Rrqms | The number of merged read requests queued per second.
physicalDeviceIO | tps | Physical Devices TPS | The number of I/O transactions per second.
physicalDeviceIO | util | Physical Devices Disk I/O Util | The percentage of CPU time during which requests were issued.
physicalDeviceIO | writeIOsPS | Physical Devices Write IO/s | The number of write operations per second.
physicalDeviceIO | writeKb | Physical Devices Write Total | The total number of kilobytes written.
physicalDeviceIO | writeKbPS | Physical Devices Write Kb/s | The number of kilobytes written per second.
physicalDeviceIO | wrqmPS | Physical Devices Wrqms | The number of merged write requests queued per second.
fileSys | maxFiles | Max Inodes | The maximum number of files that can be created for the file system.
fileSys | mountPoint | Not applicable | The path to the file system.
fileSys | name | Not applicable | The name of the file system.
fileSys | total | Total Filesystem | The total amount of disk space available for the file system, in kilobytes.
fileSys | used | Used Filesystem | The amount of disk space used by files in the file system, in kilobytes.
fileSys | usedFilePercent | Used Inodes | The percentage of available files in use.
fileSys | usedFiles | Used% | The number of files in the file system.
fileSys | usedPercent | Used Filesystem | The percentage of the file-system disk space in use.
loadAverageMinute | fifteen | Load Avg 15 min | The number of processes requesting CPU time over the last 15 minutes.
loadAverageMinute | five | Load Avg 5 min | The number of processes requesting CPU time over the last 5 minutes.
loadAverageMinute | one | Load Avg 1 min | The number of processes requesting CPU time over the last minute.
memory | active | Active Memory | The amount of assigned memory, in kilobytes.
memory | buffers | Buffered Memory | The amount of memory used for buffering I/O requests prior to writing to the storage device, in kilobytes.
memory | cached | Cached Memory | The amount of memory used for caching file system–based I/O.
memory | dirty | Dirty Memory | The amount of memory pages in RAM that have been modified but not written to their related data block in storage, in kilobytes.
memory | free | Free Memory | The amount of unassigned memory, in kilobytes.
memory | hugePagesFree | Huge Pages Free | The number of free huge pages. Huge pages are a feature of the Linux kernel.
memory | hugePagesRsvd | Huge Pages Rsvd | The number of committed huge pages.
memory | hugePagesSize | Huge Pages Size | The size for each huge pages unit, in kilobytes.
memory | hugePagesSurp | Huge Pages Surp | The number of available surplus huge pages over the total.
memory | hugePagesTotal | Huge Pages Total | The total number of huge pages.
memory | inactive | Inactive Memory | The amount of least-frequently used memory pages, in kilobytes.
memory | mapped | Mapped Memory | The total amount of file-system contents that is memory mapped inside a process address space, in kilobytes.
memory | pageTables | Page Tables | The amount of memory used by page tables, in kilobytes.
memory | slab | Slab Memory | The amount of reusable kernel data structures, in kilobytes.
memory | total | Total Memory | The total amount of memory, in kilobytes.
memory | writeback | Writeback Memory | The amount of dirty pages in RAM that are still being written to the backing storage, in kilobytes.
network | interface | Not applicable | The identifier for the network interface being used for the DB instance.
network | rx | RX | The number of bytes received per second.
network | tx | TX | The number of bytes uploaded per second.
processList | cpuUsedPc | CPU % | The percentage of CPU used by the process.
processList | id | Not applicable | The identifier of the process.
processList | memoryUsedPc | MEM% | The percentage of memory used by the process.
processList | name | Not applicable | The name of the process.
processList | parentID | Not applicable | The process identifier for the parent process of the process.
processList | rss | RES | The amount of RAM allocated to the process, in kilobytes.
processList | tgid | Not applicable | The thread group identifier, which is a number representing the process ID to which a thread belongs. This identifier is used to group threads from the same process.
processList | vss | VIRT | The amount of virtual memory allocated to the process, in kilobytes.
swap | swap | Swap | The amount of swap memory available, in kilobytes.
swap | swap in | Swaps in | The amount of memory, in kilobytes, swapped in from disk.
swap | swap out | Swaps out | The amount of memory, in kilobytes, swapped out to disk.
swap | free | Free Swap | The amount of swap memory free, in kilobytes.
swap | committed | Committed Swap | The amount of swap memory, in kilobytes, used as cache memory.
tasks | blocked | Tasks Blocked | The number of tasks that are blocked.
tasks | running | Tasks Running | The number of tasks that are running.
tasks | sleeping | Tasks Sleeping | The number of tasks that are sleeping.
tasks | stopped | Tasks Stopped | The number of tasks that are stopped.
tasks | total | Tasks Total | The total number of tasks.
tasks | zombie | Tasks Zombie | The number of child tasks that are inactive with an active parent task.

OS metrics for Microsoft SQL Server

Group | Metric | Console name | Description
General | engine | Not applicable | The database engine for the DB instance.
General | instanceID | Not applicable | The DB instance identifier.
General | instanceResourceID | Not applicable | An immutable identifier for the DB instance that is unique to an AWS Region, also used as the log stream identifier.
General | numVCPUs | Not applicable | The number of virtual CPUs for the DB instance.
General | timestamp | Not applicable | The time at which the metrics were taken.
General | uptime | Not applicable | The amount of time that the DB instance has been active.
General | version | Not applicable | The version of the OS metrics' stream JSON format.
cpuUtilization | idle | CPU Idle | The percentage of CPU that is idle.
cpuUtilization | kern | CPU Kernel | The percentage of CPU in use by the kernel.
cpuUtilization | user | CPU User | The percentage of CPU in use by user programs.
disks | name | Not applicable | The identifier for the disk.
disks | totalKb | Total Disk Space | The total space of the disk, in kilobytes.
disks | usedKb | Used Disk Space | The amount of space used on the disk, in kilobytes.
disks | usedPc | Used Disk Space % | The percentage of space used on the disk.
disks | availKb | Available Disk Space | The space available on the disk, in kilobytes.
disks | availPc | Available Disk Space % | The percentage of space available on the disk.
disks | rdCountPS | Reads/s | The number of read operations per second.
disks | rdBytesPS | Read Kb/s | The number of bytes read per second.
disks | wrCountPS | Write IO/s | The number of write operations per second.
disks | wrBytesPS | Write Kb/s | The amount of bytes written per second.
memory | commitTotKb | Commit Total | The amount of pagefile-backed virtual address space in use, that is, the current commit charge. This value is composed of main memory (RAM) and disk (pagefiles).
memory | commitLimitKb | Maximum Commit | The maximum possible value for the commitTotKb metric. This value is the sum of the current pagefile size plus the physical memory available for pageable contents, excluding RAM that is assigned to nonpageable areas.
memory | commitPeakKb | Commit Peak | The largest value of the commitTotKb metric since the operating system was last started.
memory | kernTotKb | Total Kernel Memory | The sum of the memory in the paged and nonpaged kernel pools, in kilobytes.
memory | kernPagedKb | Paged Kernel Memory | The amount of memory in the paged kernel pool, in kilobytes.
memory | kernNonpagedKb | Nonpaged Kernel Memory | The amount of memory in the nonpaged kernel pool, in kilobytes.
memory | pageSize | Page Size | The size of a page, in bytes.
memory | physTotKb | Total Memory | The amount of physical memory, in kilobytes.
memory | physAvailKb | Available Memory | The amount of available physical memory, in kilobytes.
memory | sqlServerTotKb | SQL Server Total Memory | The amount of memory committed to SQL Server, in kilobytes.
memory | sysCacheKb | System Cache | The amount of system cache memory, in kilobytes.
network | interface | Not applicable | The identifier for the network interface being used for the DB instance.
network | rdBytesPS | Network Read Kb/s | The number of bytes received per second.
network | wrBytesPS | Network Write Kb/s | The number of bytes sent per second.
processList | cpuUsedPc | Used % | The percentage of CPU used by the process.
processList | memUsedPc | MEM% | The percentage of total memory used by the process.
processList | name | Not applicable | The name of the process.
processList | pid | Not applicable | The identifier of the process. This value is not present for processes that are owned by Amazon RDS.
processList | ppid | Not applicable | The process identifier for the parent of this process. This value is only present for child processes.
processList | tid | Not applicable | The thread identifier. This value is only present for threads. The owning process can be identified by using the pid value.
processList | workingSetKb | Not applicable | The amount of memory in the private working set plus the amount of memory that is in use by the process and can be shared with other processes, in kilobytes.
processList | workingSetPrivKb | Not applicable | The amount of memory that is in use by a process, but can't be shared with other processes, in kilobytes.
processList | workingSetShareableKb | Not applicable | The amount of memory that is in use by a process and can be shared with other processes, in kilobytes.
processList | virtKb | Not applicable | The amount of virtual address space the process is using, in kilobytes. Use of virtual address space doesn't necessarily imply corresponding use of either disk or main memory pages.
system | handles | Handles | The number of handles that the system is using.
system | processes | Processes | The number of processes running on the system.
system | threads | Threads | The number of threads running on the system.


Monitoring events, logs, and streams in an Amazon RDS DB instance
When you monitor your Amazon RDS databases and your other AWS solutions, your goal is to maintain the following:

• Reliability
• Availability
• Performance
• Security

Monitoring metrics in an Amazon RDS instance (p. 678) explains how to monitor your instance using
metrics. A complete solution must also monitor database events, log files, and activity streams. AWS
provides you with the following monitoring tools:

• Amazon EventBridge is a serverless event bus service that makes it easy to connect your applications
with data from a variety of sources. EventBridge delivers a stream of real-time data from your own
applications, Software-as-a-Service (SaaS) applications, and AWS services. EventBridge routes that
data to targets such as AWS Lambda. This way, you can monitor events that happen in services and
build event-driven architectures. For more information, see the Amazon EventBridge User Guide.
• Amazon CloudWatch Logs provides a way to monitor, store, and access your log files from Amazon
RDS instances, AWS CloudTrail, and other sources. Amazon CloudWatch Logs can monitor information
in the log files and notify you when certain thresholds are met. You can also archive your log data in
highly durable storage. For more information, see the Amazon CloudWatch Logs User Guide.
• AWS CloudTrail captures API calls and related events made by or on behalf of your AWS account.
CloudTrail delivers the log files to an Amazon S3 bucket that you specify. You can identify which users
and accounts called AWS, the source IP address from which the calls were made, and when the calls
occurred. For more information, see the AWS CloudTrail User Guide.
• Database Activity Streams is an Amazon RDS feature that provides a near real-time stream of the
activity in your DB instance. Amazon RDS pushes activities to an Amazon Kinesis data stream. The
Kinesis stream is created automatically. From Kinesis, you can configure AWS services such as Amazon
Kinesis Data Firehose and AWS Lambda to consume the stream and store the data.

Topics
• Viewing logs, events, and streams in the Amazon RDS console (p. 846)
• Monitoring Amazon RDS events (p. 850)
• Monitoring Amazon RDS log files (p. 895)
• Monitoring Amazon RDS API calls in AWS CloudTrail (p. 940)
• Monitoring Amazon RDS with Database Activity Streams (p. 944)

Viewing logs, events, and streams in the Amazon RDS console
Amazon RDS integrates with AWS services to show information about logs, events, and database activity
streams in the RDS console.


The Logs & events tab for your RDS DB instance shows the following information:

• Amazon CloudWatch alarms – Shows any metric alarms that you have configured for the DB instance.
If you haven't configured alarms, you can create them in the RDS console. For more information, see
Monitoring Amazon RDS metrics with Amazon CloudWatch (p. 706).
• Recent events – Shows a summary of events (environment changes) for your RDS DB instance. For
more information, see Viewing Amazon RDS events (p. 852).
• Logs – Shows database log files generated by a DB instance. For more information, see Monitoring
Amazon RDS log files (p. 895).

The Configuration tab displays information about database activity streams.

To view logs, events, and streams for your DB instance in the RDS console

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the name of the DB instance that you want to monitor.

The database page appears. The following example shows an Oracle database named orclb.

4. Choose Logs & events.

The Logs & events section appears.


5. Choose Configuration.

The following example shows the status of the database activity streams for your DB instance.


Monitoring Amazon RDS events


An event indicates a change in an environment. This can be an AWS environment, an SaaS partner service
or application, or a custom application or service. For descriptions of the RDS events, see Amazon RDS
event categories and event messages (p. 874).

Topics
• Overview of events for Amazon RDS (p. 850)
• Viewing Amazon RDS events (p. 852)
• Working with Amazon RDS event notification (p. 855)
• Creating a rule that triggers on an Amazon RDS event (p. 870)
• Amazon RDS event categories and event messages (p. 874)

Overview of events for Amazon RDS


An RDS event indicates a change in the Amazon RDS environment. For example, Amazon RDS generates
an event when the state of a DB instance changes from pending to running. Amazon RDS delivers events
to CloudWatch Events and EventBridge in near-real time.
Note
Amazon RDS emits events on a best effort basis. We recommend that you avoid writing
programs that depend on the order or existence of notification events, because they might be
out of sequence or missing.

Amazon RDS records events that relate to the following resources:

• DB instances

For a list of DB instance events, see DB instance events (p. 876).


• DB parameter groups

For a list of DB parameter group events, see DB parameter group events (p. 889).
• DB security groups

For a list of DB security group events, see DB security group events (p. 890).
• DB snapshots

For a list of DB snapshot events, see DB snapshot events (p. 890).


• RDS Proxy events

For a list of RDS Proxy events, see RDS Proxy events (p. 891).
• Blue/green deployment events

For a list of blue/green deployment events, see Blue/green deployment events (p. 892).

This information includes the following:

• The date and time of the event


• The source name and source type of the event
• A message associated with the event
• Event notifications include tags from when the message was sent and may not reflect tags at the time
when the event occurred


Viewing Amazon RDS events


You can retrieve the following event information for your Amazon RDS resources:

• Resource name
• Resource type
• Time of the event
• Message summary of the event

Access the events through the AWS Management Console, which shows events from the past 24 hours.
You can also retrieve events by using the describe-events AWS CLI command, or the DescribeEvents RDS
API operation. If you use the AWS CLI or the RDS API to view events, you can retrieve events for up to the
past 14 days.
Note
If you need to store events for longer periods of time, you can send Amazon RDS events to
CloudWatch Events. For more information, see Creating a rule that triggers on an Amazon RDS
event (p. 870).

For descriptions of the Amazon RDS events, see Amazon RDS event categories and event
messages (p. 874).

To access detailed information about events using AWS CloudTrail, including request parameters, see
CloudTrail events (p. 940).

Console
To view all Amazon RDS events for the past 24 hours

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Events.

The available events appear in a list.


3. (Optional) Enter a search term to filter your results.

The following example shows a list of events filtered by the characters stopped.

AWS CLI
To view all events generated in the last hour, call describe-events with no parameters.


aws rds describe-events

The following sample output shows that a DB instance has been stopped.

{
"Events": [
{
"EventCategories": [
"notification"
],
"SourceType": "db-instance",
"SourceArn": "arn:aws:rds:us-east-1:123456789012:db:testinst",
"Date": "2022-04-22T21:31:00.681Z",
"Message": "DB instance stopped",
"SourceIdentifier": "testinst"
}
]
}

To view all Amazon RDS events for the past 10080 minutes (7 days), call the describe-events AWS CLI
command and set the --duration parameter to 10080.

aws rds describe-events --duration 10080

The following example shows the events in the specified time range for DB instance test-instance.

aws rds describe-events \


--source-identifier test-instance \
--source-type db-instance \
--start-time 2022-03-13T22:00Z \
--end-time 2022-03-13T23:59Z

The following sample output shows the status of a backup.

{
"Events": [
{
"SourceType": "db-instance",
"SourceIdentifier": "test-instance",
"EventCategories": [
"backup"
],
"Message": "Backing up DB instance",
"Date": "2022-03-13T23:09:23.983Z",
"SourceArn": "arn:aws:rds:us-east-1:123456789012:db:test-instance"
},
{
"SourceType": "db-instance",
"SourceIdentifier": "test-instance",
"EventCategories": [
"backup"
],
"Message": "Finished DB Instance backup",
"Date": "2022-03-13T23:15:13.049Z",
"SourceArn": "arn:aws:rds:us-east-1:123456789012:db:test-instance"
}
]
}


API
You can view all Amazon RDS instance events for the past 14 days by calling the DescribeEvents RDS API
operation and setting the Duration parameter to 20160.


Working with Amazon RDS event notification


Amazon RDS uses the Amazon Simple Notification Service (Amazon SNS) to provide notification when an
Amazon RDS event occurs. These notifications can be in any notification form supported by Amazon SNS
for an AWS Region, such as an email, a text message, or a call to an HTTP endpoint.

Topics
• Overview of Amazon RDS event notification (p. 855)
• Granting permissions to publish notifications to an Amazon SNS topic (p. 859)
• Subscribing to Amazon RDS event notification (p. 860)
• Amazon RDS event notification tags and attributes (p. 863)
• Listing Amazon RDS event notification subscriptions (p. 864)
• Modifying an Amazon RDS event notification subscription (p. 865)
• Adding a source identifier to an Amazon RDS event notification subscription (p. 866)
• Removing a source identifier from an Amazon RDS event notification subscription (p. 867)
• Listing the Amazon RDS event notification categories (p. 868)
• Deleting an Amazon RDS event notification subscription (p. 869)

Overview of Amazon RDS event notification


Amazon RDS groups events into categories that you can subscribe to so that you can be notified when an
event in that category occurs.

Topics
• RDS resources eligible for event subscription (p. 855)
• Basic process for subscribing to Amazon RDS event notifications (p. 856)
• Delivery of RDS event notifications (p. 856)
• Billing for Amazon RDS event notifications (p. 856)
• Examples of Amazon RDS events (p. 856)

RDS resources eligible for event subscription


You can subscribe to an event category for the following resources:

• DB instance
• DB snapshot
• DB parameter group
• DB security group
• RDS Proxy
• Custom engine version

For example, if you subscribe to the backup category for a given DB instance, you're notified whenever
a backup-related event occurs that affects the DB instance. If you subscribe to a configuration change
category for a DB instance, you're notified when the DB instance is changed. You also receive notification
when an event notification subscription changes.

You might want to create several different subscriptions. For example, you might create one subscription
that receives all event notifications for all DB instances and another subscription that includes only
critical events for a subset of the DB instances. For the second subscription, specify one or more DB
instances in the filter.


Basic process for subscribing to Amazon RDS event notifications


The process for subscribing to Amazon RDS event notification is as follows:

1. You create an Amazon RDS event notification subscription by using the Amazon RDS console, AWS CLI,
or API.

Amazon RDS uses the ARN of an Amazon SNS topic to identify each subscription. The Amazon RDS
console creates the ARN for you when you create the subscription. If you use the AWS CLI or API,
you create the ARN by using the Amazon SNS console, the AWS CLI, or the Amazon SNS API, and you
supply it when you create the subscription (an example command follows this procedure).
2. Amazon RDS sends an approval email or SMS message to the addresses you submitted with your
subscription.
3. You confirm your subscription by choosing the link in the notification you received.
4. The Amazon RDS console updates the My Event Subscriptions section with the status of your
subscription.
5. Amazon RDS begins sending the notifications to the addresses that you provided when you created
the subscription.
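For step 1 with the AWS CLI, the following is a minimal sketch of creating a topic and subscribing an
email address to it. The topic name and email address are placeholders; the create-topic command
returns the topic ARN that you use for the event subscription.

aws sns create-topic --name my-rds-events

aws sns subscribe \
    --topic-arn arn:aws:sns:us-east-1:123456789012:my-rds-events \
    --protocol email \
    --notification-endpoint user@example.com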

To learn about identity and access management when using Amazon SNS, see Identity and access
management in Amazon SNS in the Amazon Simple Notification Service Developer Guide.

You can use AWS Lambda to process event notifications from a DB instance. For more information, see
Using AWS Lambda with Amazon RDS in the AWS Lambda Developer Guide.

Delivery of RDS event notifications


Amazon RDS sends notifications to the addresses that you provide when you create the subscription.
The notification can include message attributes which provide structured metadata about the
message. For more information about message attributes, see Amazon RDS event categories and event
messages (p. 874).

Event notifications might take up to five minutes to be delivered.


Important
Amazon RDS doesn't guarantee the order of events sent in an event stream. The event order is
subject to change.

When Amazon SNS sends a notification to a subscribed HTTP or HTTPS endpoint, the POST message
sent to the endpoint has a message body that contains a JSON document. For more information, see
Amazon SNS message and JSON formats in the Amazon Simple Notification Service Developer Guide.

You can configure SNS to notify you with text messages. For more information, see Mobile text
messaging (SMS) in the Amazon Simple Notification Service Developer Guide.

To turn off notifications without deleting a subscription, choose No for Enabled in the Amazon RDS
console. Or you can set the Enabled parameter to false using the AWS CLI or Amazon RDS API.
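For example, the following AWS CLI sketch disables an existing subscription without deleting it. The
subscription name is a placeholder, and the sketch assumes the CLI's --no-enabled boolean form of the
Enabled parameter.

aws rds modify-event-subscription \
    --subscription-name myeventsubscription \
    --no-enabled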

Billing for Amazon RDS event notifications


Billing for Amazon RDS event notification is through Amazon SNS. Amazon SNS fees apply when using
event notification. For more information about Amazon SNS billing, see Amazon Simple Notification
Service pricing.

Examples of Amazon RDS events


The following examples illustrate different types of Amazon RDS events in JSON format. For a tutorial
that shows you how to capture and view events in JSON format, see Tutorial: Log DB instance state
changes using Amazon EventBridge (p. 871).


Topics
• Example of a DB instance event (p. 857)
• Example of a DB parameter group event (p. 857)
• Example of a DB snapshot event (p. 858)

Example of a DB instance event

The following is an example of a DB instance event in JSON format. The event shows that RDS
performed a Multi-AZ failover for the instance named my-db-instance. The event ID is RDS-EVENT-0049.

{
"version": "0",
"id": "68f6e973-1a0c-d37b-f2f2-94a7f62ffd4e",
"detail-type": "RDS DB Instance Event",
"source": "aws.rds",
"account": "123456789012",
"time": "2018-09-27T22:36:43Z",
"region": "us-east-1",
"resources": [
"arn:aws:rds:us-east-1:123456789012:db:my-db-instance"
],
"detail": {
"EventCategories": [
"failover"
],
"SourceType": "DB_INSTANCE",
"SourceArn": "arn:aws:rds:us-east-1:123456789012:db:my-db-instance",
"Date": "2018-09-27T22:36:43.292Z",
"Message": "A Multi-AZ failover has completed.",
"SourceIdentifier": "rds:my-db-instance",
"EventID": "RDS-EVENT-0049"
}
}

Example of a DB parameter group event

The following is an example of a DB parameter group event in JSON format. The event shows that the
parameter time_zone was updated in parameter group my-db-param-group. The event ID is RDS-
EVENT-0037.

{
"version": "0",
"id": "844e2571-85d4-695f-b930-0153b71dcb42",
"detail-type": "RDS DB Parameter Group Event",
"source": "aws.rds",
"account": "123456789012",
"time": "2018-10-06T12:26:13Z",
"region": "us-east-1",
"resources": [
"arn:aws:rds:us-east-1:123456789012:pg:my-db-param-group"
],
"detail": {
"EventCategories": [
"configuration change"
],
"SourceType": "DB_PARAM",
"SourceArn": "arn:aws:rds:us-east-1:123456789012:pg:my-db-param-group",
"Date": "2018-10-06T12:26:13.882Z",
"Message": "Updated parameter time_zone to UTC with apply method immediate",


"SourceIdentifier": "rds:my-db-param-group",
"EventID": "RDS-EVENT-0037"
}
}

Example of a DB snapshot event

The following is an example of a DB snapshot event in JSON format. The event shows the deletion of the
snapshot named my-db-snapshot. The event ID is RDS-EVENT-0041.

{
"version": "0",
"id": "844e2571-85d4-695f-b930-0153b71dcb42",
"detail-type": "RDS DB Snapshot Event",
"source": "aws.rds",
"account": "123456789012",
"time": "2018-10-06T12:26:13Z",
"region": "us-east-1",
"resources": [
"arn:aws:rds:us-east-1:123456789012:snapshot:rds:my-db-snapshot"
],
"detail": {
"EventCategories": [
"deletion"
],
"SourceType": "SNAPSHOT",
"SourceArn": "arn:aws:rds:us-east-1:123456789012:snapshot:rds:my-db-snapshot",
"Date": "2018-10-06T12:26:13.882Z",
"Message": "Deleted manual snapshot",
"SourceIdentifier": "rds:my-db-snapshot",
"EventID": "RDS-EVENT-0041"
}
}


Granting permissions to publish notifications to an Amazon SNS topic
To grant Amazon RDS permissions to publish notifications to an Amazon Simple Notification Service
(Amazon SNS) topic, attach an AWS Identity and Access Management (IAM) policy to the destination
topic. For more information about permissions, see Example cases for Amazon Simple Notification
Service access control in the Amazon Simple Notification Service Developer Guide.

By default, an Amazon SNS topic has a policy allowing all Amazon RDS resources within the same
account to publish notifications to it. You can attach a custom policy to allow cross-account notifications,
or to restrict access to certain resources.

The following is an example of an IAM policy that you attach to the destination Amazon SNS topic. It
restricts the topic to DB instances with names that match the specified prefix. To use this policy, specify
the following values:

• Resource – The Amazon Resource Name (ARN) for your Amazon SNS topic
• SourceARN – Your RDS resource ARN
• SourceAccount – Your AWS account ID

To see a list of resource types and their ARNs, see Resources Defined by Amazon RDS in the Service
Authorization Reference.

{
"Version": "2008-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "events.rds.amazonaws.com"
},
"Action": [
"sns:Publish"
],
"Resource": "arn:aws:sns:us-east-1:123456789012:topic_name",
"Condition": {
"ArnLike": {
"aws:SourceArn": "arn:aws:rds:us-east-1:123456789012:db:prefix-*"
},
"StringEquals": {
"aws:SourceAccount": "123456789012"
}
}
}
]
}
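
After you adjust the values, attach the policy to the topic. One way, sketched here with the AWS CLI, is
to save the policy to a local file (the file name is a placeholder) and set it as the topic's Policy attribute:

aws sns set-topic-attributes \
    --topic-arn arn:aws:sns:us-east-1:123456789012:topic_name \
    --attribute-name Policy \
    --attribute-value file://rds-sns-topic-policy.json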


Subscribing to Amazon RDS event notification


The simplest way to create a subscription is with the RDS console. If you choose to create event
notification subscriptions using the CLI or API, you must create an Amazon Simple Notification Service
topic and subscribe to that topic with the Amazon SNS console or Amazon SNS API. You will also need to
retain the Amazon Resource Name (ARN) of the topic because it is used when submitting CLI commands
or API operations. For information on creating an SNS topic and subscribing to it, see Getting started
with Amazon SNS in the Amazon Simple Notification Service Developer Guide.

You can specify the type of source you want to be notified of and the Amazon RDS source that triggers
the event:

Source type

The type of source. For example, Source type might be Instances. You must choose a source type.
Resources to include

The Amazon RDS resources that are generating the events. For example, you might choose Select
specific instances and then myDBInstance1.

The following table explains the result when you specify or don't specify Resources to include.

Resources to include | Description | Example

Specified | RDS notifies you about all events for the specified resource only. | If your Source type is Instances and your resource is myDBInstance1, RDS notifies you about all events for myDBInstance1 only.

Not specified | RDS notifies you about the events for the specified source type for all your Amazon RDS resources. | If your Source type is Instances, RDS notifies you about all instance-related events in your account.

An Amazon SNS topic subscriber receives every message published to the topic by default. To receive
only a subset of the messages, the subscriber must assign a filter policy to the topic subscription. For
more information about SNS message filtering, see Amazon SNS message filtering in the Amazon Simple
Notification Service Developer Guide.
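
As an illustrative sketch, the following AWS CLI command assigns a filter policy so that the subscriber
receives only messages whose EventID attribute begins with RDS-EVENT-00. The subscription ARN is a
placeholder, and the attribute name assumes the EventID message attribute described in Amazon RDS
event notification tags and attributes (p. 863).

aws sns set-subscription-attributes \
    --subscription-arn arn:aws:sns:us-east-1:123456789012:topic_name:1a2b3c4d-example \
    --attribute-name FilterPolicy \
    --attribute-value '{"EventID":[{"prefix":"RDS-EVENT-00"}]}'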

Console

To subscribe to RDS event notification

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.
2. In the navigation pane, choose Event subscriptions.
3. In the Event subscriptions pane, choose Create event subscription.
4. Enter your subscription details as follows:

a. For Name, enter a name for the event notification subscription.


b. For Send notifications to, do one of the following:

• Choose New email topic. Enter a name for your email topic and a list of recipients. We
recommend that you configure the event subscriptions to use the same email address as
your primary account contact. Recommendations, service events, and personal health
messages are sent through different channels, so subscribing with the same email address
ensures that all of these messages are consolidated in one location.
• Choose Amazon Resource Name (ARN). Then choose the ARN of an existing Amazon SNS
topic.

If you want to use a topic that has been enabled for server-side encryption (SSE), grant
Amazon RDS the necessary permissions to access the AWS KMS key. For more information, see
Enable compatibility between event sources from AWS services and encrypted topics in the
Amazon Simple Notification Service Developer Guide.
c. For Source type, choose a source type. For example, choose Instances or Parameter groups.
d. Choose the event categories and resources that you want to receive event notifications for.

For example, you might configure event notifications for a DB instance named testinst.

e. Choose Create.

The Amazon RDS console indicates that the subscription is being created.

AWS CLI

To subscribe to RDS event notification, use the AWS CLI create-event-subscription command.
Include the following required parameters:

• --subscription-name
• --sns-topic-arn


Example

For Linux, macOS, or Unix:

aws rds create-event-subscription \
    --subscription-name myeventsubscription \
    --sns-topic-arn arn:aws:sns:us-east-1:123456789012:myawsuser-RDS \
    --enabled

For Windows:

aws rds create-event-subscription ^
    --subscription-name myeventsubscription ^
    --sns-topic-arn arn:aws:sns:us-east-1:123456789012:myawsuser-RDS ^
    --enabled
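
You can also scope the subscription when you create it. The following sketch, with placeholder names,
subscribes to only the backup and failover categories for a single DB instance:

aws rds create-event-subscription \
    --subscription-name mycriticalsubscription \
    --sns-topic-arn arn:aws:sns:us-east-1:123456789012:myawsuser-RDS \
    --source-type db-instance \
    --event-categories backup failover \
    --source-ids mydbinstance1 \
    --enabled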

API

To subscribe to Amazon RDS event notification, call the Amazon RDS API function
CreateEventSubscription. Include the following required parameters:

• SubscriptionName
• SnsTopicArn


Amazon RDS event notification tags and attributes


When Amazon RDS sends an event notification to Amazon Simple Notification Service (SNS) or Amazon
EventBridge, the notification contains message attributes and event tags. RDS sends the message
attributes separately along with the message, while the event tags are in the body of the message. Use
the message attributes and the Amazon RDS tags to add metadata to your resources. You can modify
these tags with your own notations about the DB instances. For more information about tagging Amazon
RDS resources, see Tagging Amazon RDS resources (p. 461).

By default, Amazon SNS and Amazon EventBridge receive every message sent to them. SNS and
EventBridge can filter the messages and send the notifications through the preferred communication
mode, such as an email, a text message, or a call to an HTTP endpoint.
Note
The notification sent in an email or a text message will not have event tags.

The following table shows the message attributes for RDS events sent to the topic subscriber.

Amazon RDS event attribute | Description

EventID | Identifier for the RDS event message, for example, RDS-EVENT-0006.

Resource | The ARN identifier for the resource emitting the event, for example, arn:aws:rds:ap-southeast-2:123456789012:db:database-1.

The RDS tags provide data about the resource that was affected by the service event. RDS adds the
current state of the tags to the message body when the notification is sent to SNS or EventBridge.

For more information about filtering message attributes for SNS, see Amazon SNS message filtering in
the Amazon Simple Notification Service Developer Guide.

For more information about filtering event tags for EventBridge, see Content filtering in Amazon
EventBridge event patterns in the Amazon EventBridge User Guide.

For more information about filtering payload-based tags for SNS, see the blog post Introducing
payload-based message filtering for Amazon SNS (https://aws.amazon.com/blogs/compute/introducing-payload-based-message-filtering-for-amazon-sns/).


Listing Amazon RDS event notification subscriptions


You can list your current Amazon RDS event notification subscriptions.

Console

To list your current Amazon RDS event notification subscriptions

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.
2. In the navigation pane, choose Event subscriptions. The Event subscriptions pane shows all your
event notification subscriptions.

AWS CLI

To list your current Amazon RDS event notification subscriptions, use the AWS CLI describe-event-
subscriptions command.

Example

The following example describes all event subscriptions.

aws rds describe-event-subscriptions

The following example describes the subscription named myfirsteventsubscription.

aws rds describe-event-subscriptions --subscription-name myfirsteventsubscription
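
If you want only the subscription names, one option is a --query filter. This sketch assumes that the
output lists subscriptions under EventSubscriptionsList with the CustSubscriptionId field:

aws rds describe-event-subscriptions \
    --query 'EventSubscriptionsList[].CustSubscriptionId' \
    --output text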

API

To list your current Amazon RDS event notification subscriptions, call the Amazon RDS API
DescribeEventSubscriptions action.


Modifying an Amazon RDS event notification subscription


After you have created a subscription, you can change the subscription name, source identifier,
categories, or topic ARN.

Console

To modify an Amazon RDS event notification subscription

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.
2. In the navigation pane, choose Event subscriptions.
3. In the Event subscriptions pane, choose the subscription that you want to modify and choose Edit.
4. Make your changes to the subscription in either the Target or Source section.
5. Choose Edit. The Amazon RDS console indicates that the subscription is being modified.

AWS CLI

To modify an Amazon RDS event notification subscription, use the AWS CLI modify-event-
subscription command. Include the following required parameter:

• --subscription-name

Example

The following code enables myeventsubscription.

For Linux, macOS, or Unix:

aws rds modify-event-subscription \
    --subscription-name myeventsubscription \
    --enabled

For Windows:

aws rds modify-event-subscription ^
    --subscription-name myeventsubscription ^
    --enabled
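
The same command can also change what a subscription covers. For example, the following sketch,
with placeholder values, limits an existing subscription to the availability and failure categories for DB
instances:

aws rds modify-event-subscription \
    --subscription-name myeventsubscription \
    --source-type db-instance \
    --event-categories availability failure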

API

To modify an Amazon RDS event notification subscription, call the Amazon RDS API operation
ModifyEventSubscription. Include the following required parameter:

• SubscriptionName


Adding a source identifier to an Amazon RDS event notification subscription
You can add a source identifier (the Amazon RDS source generating the event) to an existing
subscription.

Console

You can easily add or remove source identifiers using the Amazon RDS console by selecting or
deselecting them when modifying a subscription. For more information, see Modifying an Amazon RDS
event notification subscription (p. 865).

AWS CLI

To add a source identifier to an Amazon RDS event notification subscription, use the AWS CLI add-
source-identifier-to-subscription command. Include the following required parameters:

• --subscription-name
• --source-identifier

Example

The following example adds the source identifier mysqldb to the myrdseventsubscription
subscription.

For Linux, macOS, or Unix:

aws rds add-source-identifier-to-subscription \
    --subscription-name myrdseventsubscription \
    --source-identifier mysqldb

For Windows:

aws rds add-source-identifier-to-subscription ^
    --subscription-name myrdseventsubscription ^
    --source-identifier mysqldb

API

To add a source identifier to an Amazon RDS event notification subscription, call the Amazon RDS API
AddSourceIdentifierToSubscription. Include the following required parameters:

• SubscriptionName
• SourceIdentifier


Removing a source identifier from an Amazon RDS event notification subscription
You can remove a source identifier (the Amazon RDS source generating the event) from a subscription if
you no longer want to be notified of events for that source.

Console

You can easily add or remove source identifiers using the Amazon RDS console by selecting or
deselecting them when modifying a subscription. For more information, see Modifying an Amazon RDS
event notification subscription (p. 865).

AWS CLI

To remove a source identifier from an Amazon RDS event notification subscription, use the AWS CLI
remove-source-identifier-from-subscription command. Include the following required
parameters:

• --subscription-name
• --source-identifier

Example

The following example removes the source identifier mysqldb from the myrdseventsubscription
subscription.

For Linux, macOS, or Unix:

aws rds remove-source-identifier-from-subscription \
    --subscription-name myrdseventsubscription \
    --source-identifier mysqldb

For Windows:

aws rds remove-source-identifier-from-subscription ^
    --subscription-name myrdseventsubscription ^
    --source-identifier mysqldb

API

To remove a source identifier from an Amazon RDS event notification subscription, use the Amazon
RDS API RemoveSourceIdentifierFromSubscription command. Include the following required
parameters:

• SubscriptionName
• SourceIdentifier


Listing the Amazon RDS event notification categories


All events for a resource type are grouped into categories. To view the list of categories available, use the
following procedures.

Console

When you create or modify an event notification subscription, the event categories are displayed in
the Amazon RDS console. For more information, see Modifying an Amazon RDS event notification
subscription (p. 865).

AWS CLI

To list the Amazon RDS event notification categories, use the AWS CLI describe-event-categories
command. This command has no required parameters.

Example

aws rds describe-event-categories
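
To list only the categories for a particular source type, you can add the --source-type option, as in the
following sketch:

aws rds describe-event-categories --source-type db-instance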

API

To list the Amazon RDS event notification categories, use the Amazon RDS API
DescribeEventCategories command. This command has no required parameters.


Deleting an Amazon RDS event notification subscription


You can delete a subscription when you no longer need it. After you delete a subscription, subscribers
to the topic no longer receive the event notifications specified by that subscription.

Console

To delete an Amazon RDS event notification subscription

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.
2. In the navigation pane, choose DB Event Subscriptions.
3. In the My DB Event Subscriptions pane, choose the subscription that you want to delete.
4. Choose Delete.
5. The Amazon RDS console indicates that the subscription is being deleted.

AWS CLI

To delete an Amazon RDS event notification subscription, use the AWS CLI delete-event-
subscription command. Include the following required parameter:

• --subscription-name

Example

The following example deletes the subscription myrdssubscription.

aws rds delete-event-subscription --subscription-name myrdssubscription

API

To delete an Amazon RDS event notification subscription, use the RDS API DeleteEventSubscription
command. Include the following required parameter:

• SubscriptionName


Creating a rule that triggers on an Amazon RDS event


Using Amazon CloudWatch Events and Amazon EventBridge, you can automate AWS services and
respond to system events such as application availability issues or resource changes.

Topics
• Creating rules to send Amazon RDS events to CloudWatch Events (p. 870)
• Tutorial: Log DB instance state changes using Amazon EventBridge (p. 871)

Creating rules to send Amazon RDS events to CloudWatch Events
You can write simple rules to indicate which Amazon RDS events interest you and which automated
actions to take when an event matches a rule. You can set a variety of targets, such as an AWS Lambda
function or an Amazon SNS topic, which receive events in JSON format. For example, you can configure
Amazon RDS to send events to CloudWatch Events or Amazon EventBridge whenever a DB instance
is created or deleted. For more information, see the Amazon CloudWatch Events User Guide and the
Amazon EventBridge User Guide.
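
As a rough command line sketch of the same idea (the rule name, event pattern, and target ARN are
illustrative), you could create a rule that matches all RDS DB instance events and route them to an SNS
topic:

aws events put-rule \
    --name rds-instance-events \
    --event-pattern '{"source":["aws.rds"],"detail-type":["RDS DB Instance Event"]}'

aws events put-targets \
    --rule rds-instance-events \
    --targets 'Id=sns-target,Arn=arn:aws:sns:us-east-1:123456789012:my-rds-events'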

To create a rule that triggers on an RDS event:

1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.


2. Under Events in the navigation pane, choose Rules.
3. Choose Create rule.
4. For Event Source, do the following:

a. Choose Event Pattern.


b. For Service Name, choose Relational Database Service (RDS).
c. For Event Type, choose the type of Amazon RDS resource that triggers the event. For example,
if a DB instance triggers the event, choose RDS DB Instance Event.
5. For Targets, choose Add Target and choose the AWS service that is to act when an event of the
selected type is detected.
6. In the other fields in this section, enter information specific to this target type, if any is needed.
7. For many target types, CloudWatch Events needs permissions to send events to the target. In these
cases, CloudWatch Events can create the IAM role needed for your event to run:

• To create an IAM role automatically, choose Create a new role for this specific resource.
• To use an IAM role that you created before, choose Use existing role.
8. Optionally, repeat steps 5-7 to add another target for this rule.
9. Choose Configure details. For Rule definition, type a name and description for the rule.

The rule name must be unique within this Region.


10. Choose Create rule.

For more information, see Creating a CloudWatch Events Rule That Triggers on an Event in the Amazon
CloudWatch User Guide.


Tutorial: Log DB instance state changes using Amazon EventBridge
In this tutorial, you create an AWS Lambda function that logs the state changes for an Amazon RDS
instance. You then create a rule that runs the function whenever there is a state change of an existing
RDS DB instance. The tutorial assumes that you have a small running test instance that you can shut
down temporarily.
Important
Don't perform this tutorial on a running production DB instance.

Topics
• Step 1: Create an AWS Lambda function (p. 871)
• Step 2: Create a rule (p. 872)
• Step 3: Test the rule (p. 872)

Step 1: Create an AWS Lambda function


Create a Lambda function to log the state change events. You specify this function when you create your
rule.

To create a Lambda function

1. Open the AWS Lambda console at https://console.aws.amazon.com/lambda/.


2. If you're new to Lambda, you see a welcome page. Choose Get Started Now. Otherwise, choose
Create function.
3. Choose Author from scratch.
4. On the Create function page, do the following:

a. Enter a name and description for the Lambda function. For example, name the function
RDSInstanceStateChange.
b. In Runtime, select Node.js 16.x.
c. For Architecture, choose x86_64.
d. For Execution role, do either of the following:

• Choose Create a new role with basic Lambda permissions.


• For Existing role, choose Use an existing role. Choose the role that you want to use.
e. Choose Create function.
5. On the RDSInstanceStateChange page, do the following:

a. In Code source, select index.js.


b. In the index.js pane, delete the existing code.
c. Enter the following code:

console.log('Loading function');

exports.handler = async (event, context) => {
    console.log('Received event:', JSON.stringify(event));
};

d. Choose Deploy.


Step 2: Create a rule


Create a rule to run your Lambda function whenever the state of an Amazon RDS instance changes.

To create the EventBridge rule

1. Open the Amazon EventBridge console at https://console.aws.amazon.com/events/.


2. In the navigation pane, choose Rules.
3. Choose Create rule.
4. Enter a name and description for the rule. For example, enter RDSInstanceStateChangeRule.
5. Choose Rule with an event pattern, and then choose Next.
6. For Event source, choose AWS events or EventBridge partner events.
7. Scroll down to the Event pattern section.
8. For Event source, choose AWS services.
9. For AWS service, choose Relational Database Service (RDS).
10. For Event type, choose RDS DB Instance Event.
11. Leave the default event pattern, or narrow it to match only specific events (an example pattern
follows this procedure). Then choose Next.
12. For Target types, choose AWS service.
13. For Select a target, choose Lambda function.
14. For Function, choose the Lambda function that you created. Then choose Next.
15. In Configure tags, choose Next.
16. Review the steps in your rule. Then choose Create rule.
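
If you want the rule from step 11 to match only specific state changes rather than every DB instance
event, you can narrow the pattern. The following sketch, with event IDs chosen as examples, matches
only the stop and start events used later in this tutorial:

{
  "source": ["aws.rds"],
  "detail-type": ["RDS DB Instance Event"],
  "detail": {
    "EventID": ["RDS-EVENT-0087", "RDS-EVENT-0088"]
  }
}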

Step 3: Test the rule


To test your rule, shut down an RDS DB instance. After waiting a few minutes for the instance to shut
down, verify that your Lambda function was invoked.

To test your rule by stopping a DB instance

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. Stop an RDS DB instance.
3. Open the Amazon EventBridge console at https://console.aws.amazon.com/events/.
4. In the navigation pane, choose Rules, and then choose the name of the rule that you created.
5. In Rule details, choose Monitoring.

You are redirected to the Amazon CloudWatch console. If you are not redirected, choose View the
metrics in CloudWatch.
6. In All metrics, choose the name of the rule that you created.

The graph should indicate that the rule was invoked.


7. In the navigation pane, choose Log groups.
8. Choose the name of the log group for your Lambda function (/aws/lambda/function-name).
9. Choose the name of the log stream to view the data provided by the function for the instance that
you stopped. You should see a received event similar to the following:

{
"version": "0",
"id": "12a345b6-78c9-01d2-34e5-123f4ghi5j6k",
"detail-type": "RDS DB Instance Event",
"source": "aws.rds",


"account": "111111111111",
"time": "2021-03-19T19:34:09Z",
"region": "us-east-1",
"resources": [
"arn:aws:rds:us-east-1:111111111111:db:testdb"
],
"detail": {
"EventCategories": [
"notification"
],
"SourceType": "DB_INSTANCE",
"SourceArn": "arn:aws:rds:us-east-1:111111111111:db:testdb",
"Date": "2021-03-19T19:34:09.293Z",
"Message": "DB instance stopped",
"SourceIdentifier": "testdb",
"EventID": "RDS-EVENT-0087"
}
}

For more examples of RDS events in JSON format, see Overview of events for Amazon RDS (p. 850).
10. (Optional) When you're finished, you can open the Amazon RDS console and start the instance that
you stopped.


Amazon RDS event categories and event messages


Amazon RDS generates a significant number of events in categories that you can subscribe to by using
the Amazon RDS console, the AWS CLI, or the RDS API.

Topics
• DB cluster events (p. 874)
• DB instance events (p. 876)
• DB parameter group events (p. 889)
• DB security group events (p. 890)
• DB snapshot events (p. 890)
• DB cluster snapshot events (p. 891)
• RDS Proxy events (p. 891)
• Blue/green deployment events (p. 892)
• Custom engine version events (p. 893)

DB cluster events
The following table shows the event category and a list of events when a DB cluster is the source type.

For more information about Multi-AZ DB cluster deployments, see Multi-AZ DB cluster
deployments (p. 499).

Category | RDS event ID | Message | Notes

creation | RDS-EVENT-0170 | DB cluster created.
failover | RDS-EVENT-0069 | Cluster failover failed, check the health of your cluster instances and try again.
failover | RDS-EVENT-0070 | Promoting previous primary again: name.
failover | RDS-EVENT-0071 | Completed failover to DB instance: name.
failover | RDS-EVENT-0072 | Started same AZ failover to DB instance: name.
failover | RDS-EVENT-0073 | Started cross AZ failover to DB instance: name.
global failover | RDS-EVENT-0181 | Global switchover to DB cluster name in Region name started. | This event is for a switchover operation (previously called "managed planned failover"). The process can be delayed because other operations are running on the DB cluster.
global failover | RDS-EVENT-0182 | Old primary DB cluster name in Region name successfully shut down. | This event is for a switchover operation (previously called "managed planned failover"). The old primary instance in the global database isn't accepting writes. All volumes are synchronized.
global failover | RDS-EVENT-0183 | Waiting for data synchronization across global cluster members. Current lags behind primary DB cluster: reason. | This event is for a switchover operation (previously called "managed planned failover"). A replication lag is occurring during the synchronization phase of the global database failover.
global failover | RDS-EVENT-0184 | New primary DB cluster name in Region name was successfully promoted. | This event is for a switchover operation (previously called "managed planned failover"). The volume topology of the global database is reestablished with the new primary volume.
global failover | RDS-EVENT-0185 | Global switchover to DB cluster name in Region name finished. | This event is for a switchover operation (previously called "managed planned failover"). The global database switchover is finished on the primary DB cluster. Replicas might take long to come online after the failover completes.
global failover | RDS-EVENT-0186 | Global switchover to DB cluster name in Region name is cancelled. | This event is for a switchover operation (previously called "managed planned failover").
global failover | RDS-EVENT-0187 | Global switchover to DB cluster name in Region name failed. | This event is for a switchover operation (previously called "managed planned failover").
global failover | RDS-EVENT-0238 | Global failover to DB cluster name in Region name completed.
global failover | RDS-EVENT-0239 | Global failover to DB cluster name in Region name failed.
global failover | RDS-EVENT-0240 | Started resynchronizing members of DB cluster name in Region name after global failover.
global failover | RDS-EVENT-0241 | Finished resynchronizing members of DB cluster name in Region name after global failover.
maintenance | RDS-EVENT-0176 | Database cluster engine major version has been upgraded.
maintenance | RDS-EVENT-0286 | Database cluster engine version upgrade started.
maintenance | RDS-EVENT-0287 | Operating system upgrade requirement detected.
maintenance | RDS-EVENT-0288 | Cluster operating system upgrade starting.
maintenance | RDS-EVENT-0289 | Cluster operating system upgrade completed.
maintenance | RDS-EVENT-0290 | Database cluster has been patched: source version version_number => new_version_number.
notification | RDS-EVENT-0172 | Renamed cluster from name to name.

DB instance events
The following table shows the event category and a list of events when a DB instance is the source type.

Category RDS event ID Message Notes

availability RDS-EVENT-0006 DB instance restarted.

availability RDS-EVENT-0004 DB instance shutdown.

availability RDS-EVENT-0022 Error restarting mysql: An error has occurred while


message. restarting MySQL.

availability RDS-EVENT-0221 DB instance has reached the


storage-full threshold, and
the database has been shut
down. You can increase the
allocated storage to address
this issue.

availability RDS-EVENT-0222 Free storage capacity for


DB instance name is low at
percentage of the allocated
storage [Allocated storage:
amount, Free storage:
amount]. The database will
be shut down to prevent


Category RDS event ID Message Notes


corruption if free storage
is lower than amount. You
can increase the allocated
storage to address this issue.

backup RDS-EVENT-0001 Backing up DB instance.

backup RDS-EVENT-0002 Finished DB instance backup.

backup RDS-EVENT-0086 We are unable to associate For more information


the option group name see Working with option
with the database instance groups (p. 331).
name. Confirm that option
group name is supported on
your DB instance class and
configuration. If so, verify all
option group settings and
retry.

configuration RDS-EVENT-0024 Applying modification to


change convert to a Multi-AZ DB
instance.

configuration RDS-EVENT-0030 Applying modification to


change convert to a standard (Single-
AZ) DB instance.

configuration RDS-EVENT-0012 Applying modification to


change database instance class.

configuration RDS-EVENT-0018 Applying modification to


change allocated storage.

configuration RDS-EVENT-0011 Updated to use


change DBParameterGroup name.

configuration RDS-EVENT-0092 Finished updating DB


change parameter group.

configuration RDS-EVENT-0028 Disabled automated backups.


change

configuration RDS-EVENT-0032 Enabled automated backups.


change

configuration RDS-EVENT-0033 There are number users


change matching the master
username; only resetting
the one not tied to a specific
host.

configuration RDS-EVENT-0025 Finished applying


change modification to convert to a
Multi-AZ DB instance.


Category RDS event ID Message Notes

configuration RDS-EVENT-0029 Finished applying


change modification to convert to
a standard (Single-AZ) DB
instance.

configuration RDS-EVENT-0014 Finished applying


change modification to DB instance
class.

configuration RDS-EVENT-0017 Finished applying


change modification to allocated
storage.

configuration RDS-EVENT-0016 Reset master credentials.


change

configuration RDS-EVENT-0067 Unable to reset your


change password. Error information:
message.

configuration RDS-EVENT-0078 Monitoring Interval changed The Enhanced Monitoring


change to number. configuration has been
changed.

configuration RDS-EVENT-0217 Applying autoscaling-


change initiated modification to
allocated storage.

configuration RDS-EVENT-0218 Finished applying


change autoscaling-initiated
modification to allocated
storage.

creation RDS-EVENT-0005 DB instance created.

deletion RDS-EVENT-0003 DB instance deleted.

failover RDS-EVENT-0013 Multi-AZ instance failover A Multi-AZ failover that


started. resulted in the promotion of
a standby DB instance has
started.

failover RDS-EVENT-0015 Multi-AZ failover to standby A Multi-AZ failover that


complete - DNS propagation resulted in the promotion
may take a few minutes. of a standby DB instance is
complete. It may take several
minutes for the DNS to
transfer to the new primary
DB instance.

failover RDS-EVENT-0034 Abandoning user requested Amazon RDS isn't attempting


failover since a failover a requested failover because
recently occurred on the a failover recently occurred
database instance. on the DB instance.

failover RDS-EVENT-0049 Multi-AZ instance failover


completed.


Category RDS event ID Message Notes

failover RDS-EVENT-0050 Multi-AZ instance activation A Multi-AZ activation has


started. started after a successful DB
instance recovery

failover RDS-EVENT-0051 Multi-AZ instance activation A Multi-AZ activation is


completed. complete. Your database
should be accessible now.

failover RDS-EVENT-0065 Recovered from partial


failover.

failure RDS-EVENT-0031 DB instance put into name The DB instance has failed
state. RDS recommends that due to an incompatible
you initiate a point-in-time- configuration or an
restore. underlying storage issue.
Begin a point-in-time-restore
for the DB instance.

failure RDS-EVENT-315 Unable to move The database networking


incompatible-network configuration is invalid. The
database, name, to the database could not be moved
available status: message from incompatible-network
to available.

failure RDS-EVENT-0035 Database instance put into The DB instance has invalid
state. message. parameters. For example, if
the DB instance could not
start because a memory-
related parameter is set
too high for this instance
class, your action would
be to modify the memory
parameter and reboot the DB
instance.

failure RDS-EVENT-0036 Database instance in state. The DB instance is in an


message. incompatible network. Some
of the specified subnet IDs
are invalid or do not exist.

failure RDS-EVENT-0058 The Statspack installation Error while creating Oracle


failed. message. Statspack user account
PERFSTAT. Drop the
account before you add the
STATSPACK option.


Category RDS event ID Message Notes

failure RDS-EVENT-0079 Amazon RDS has been Enhanced Monitoring can't


unable to create credentials be enabled without the
for enhanced monitoring Enhanced Monitoring IAM
and this feature has been role. For information about
disabled. This is likely due creating the IAM role, see
to the rds-monitoring- To create an IAM role for
role not being present and Amazon RDS enhanced
configured correctly in monitoring (p. 799).
your account. Please refer
to the troubleshooting
section in the Amazon RDS
documentation for further
details.

failure RDS-EVENT-0080 Amazon RDS has been unable Enhanced Monitoring


to configure enhanced was disabled because an
monitoring on your instance: error occurred during the
name and this feature has configuration change. It is
been disabled. This is likely likely that the Enhanced
due to the rds-monitoring- Monitoring IAM role is
role not being present and configured incorrectly.
configured correctly in For information about
your account. Please refer creating the enhanced
to the troubleshooting monitoring IAM role, see
section in the Amazon RDS To create an IAM role for
documentation for further Amazon RDS enhanced
details. monitoring (p. 799).

failure RDS-EVENT-0081 Amazon RDS has been unable The IAM role that you use
to create credentials for to access your Amazon
name option. This is due S3 bucket for SQL Server
to the name IAM role not native backup and restore
being configured correctly is configured incorrectly.
in your account. Please For more information, see
refer to the troubleshooting Setting up for native backup
section in the Amazon RDS and restore (p. 1421).
documentation for further
details.

failure RDS-EVENT-0165 The RDS Custom DB instance It's your responsibility to fix


is outside the support configuration issues that put
perimeter. your RDS Custom DB instance
into the unsupported-
configuration state. If
the issue is with the AWS
infrastructure, you can use
the console or the AWS CLI
to fix it. If the issue is with
the operating system or the
database configuration, you
can log in to the host to fix it.

For more information,


see RDS Custom support
perimeter (p. 985).


Category RDS event ID Message Notes

failure RDS-EVENT-0188 The DB instance is in a state Amazon RDS was unable


that can't be upgraded. to upgrade a MySQL DB
message instance from version 5.7
to version 8.0 because of
incompatibilities related to
the data dictionary. The DB
instance was rolled back to
MySQL version 5.7. For more
information, see Rollback
after failure to upgrade from
MySQL 5.7 to 8.0 (p. 1668).

failure RDS-EVENT-0219 DB instance is in an invalid


state. No actions are
necessary. Autoscaling will
retry later.

failure RDS-EVENT-0220 DB instance is in the cooling-


off period for a previous scale
storage operation. We're
optimizing your DB instance.
This takes at least 6 hours.
No actions are necessary.
Autoscaling will retry after
the cooling-off period.

failure RDS-EVENT-0223 Storage autoscaling is unable


to scale the storage for the
reason: reason.

failure RDS-EVENT-0224 Storage autoscaling has


triggered a pending scale
storage task that will reach or
exceed the maximum storage
threshold. Increase the
maximum storage threshold.

failure RDS-EVENT-0237 DB instance has a storage


type that's currently
unavailable in the Availability
Zone. Autoscaling will retry
later.

failure RDS-EVENT-0254 Underlying storage quota for


this customer account has
exceeded the limit. Please
increase the allowed storage
quota to let the scaling go
through on the instance.

failure RDS-EVENT-0278 The DB instance creation The message includes details


failed. message about the failure.

failure RDS-EVENT-0279 The promotion of the RDS The message includes details
Custom read replica failed. about the failure.
message


Category RDS event ID Message Notes

failure RDS-EVENT-0280 RDS Custom couldn't The message includes details


upgrade the DB instance about the failure.
because the pre-check failed.
message

failure RDS-EVENT-0281 RDS Custom couldn't modify The message includes details
the DB instance because the about the failure.
pre-check failed. message

failure RDS-EVENT-0282 RDS Custom couldn't modify


the DB instance because the
Elastic IP permissions aren't
correct. Please confirm the
Elastic IP address is tagged
with AWSRDSCustom.

failure RDS-EVENT-0283 RDS Custom couldn't modify


the DB instance because
the Elastic IP limit has been
reached in your account.
Release unused Elastic IPs or
request a quota increase for
your Elastic IP address limit.

failure RDS-EVENT-0284 RDS Custom couldn't The message includes details


convert the instance to high about the failure.
availability because the pre-
check failed. message

failure RDS-EVENT-0285 RDS Custom couldn't create The message includes details
a final snapshot for the DB about the failure.
instance because message.

low storage RDS-EVENT-0007 Allocated storage has The allocated storage for
been exhausted. Allocate the DB instance has been
additional storage to resolve. consumed. To resolve this
issue, allocate additional
storage for the DB instance.
For more information,
see the RDS FAQ. You can
monitor the storage space for
a DB instance using the Free
Storage Space metric.

low storage RDS-EVENT-0089 The free storage capacity The DB instance has
for DB instance: name is consumed more than 90% of
low at percentage of its allocated storage. You can
the provisioned storage monitor the storage space for
[Provisioned Storage: size, a DB instance using the Free
Free Storage: size]. You Storage Space metric.
may want to increase the
provisioned storage to
address this issue.


Category RDS event ID Message Notes

low storage RDS-EVENT-0227 Your Aurora cluster's storage The Aurora storage
is dangerously low with only subsystem is running low on
amount terabytes remaining. space.
Please take measures to
reduce the storage load on
your cluster.

maintenance RDS-EVENT-0026 Applying off-line patches to Offline maintenance of the


DB instance. DB instance is taking place.
The DB instance is currently
unavailable.

maintenance RDS-EVENT-0027 Finished applying off-line Offline maintenance of the


patches to DB instance. DB instance is complete. The
DB instance is now available.

maintenance RDS-EVENT-0047 Database instance patched.

maintenance RDS-EVENT-0155 The DB instance has a


DB engine minor version
upgrade available.

maintenance RDS-EVENT-0264 The pre-check started for the


DB engine version upgrade.

maintenance RDS-EVENT-0265 The pre-check finished


for the DB engine version
upgrade.

maintenance RDS-EVENT-0266 The downtime started for the


DB instance.

maintenance RDS-EVENT-0267 The engine version upgrade


started.

maintenance RDS-EVENT-0268 The engine version upgrade


finished.

maintenance RDS-EVENT-0269 The post-upgrade tasks are in


progress.

maintenance RDS-EVENT-0270 The DB engine version


upgrade failed. The engine
version upgrade rollback
succeeded.

maintenance, RDS-EVENT-0191 A new version of the time If you update your RDS for
notification zone file is available for Oracle DB engine, Amazon
update. RDS generates this event if
you haven't chosen a time
zone file upgrade and the
database doesn’t use the
latest DST time zone file
available on the instance.
For more information,
see Oracle time zone file
autoupgrade (p. 2091).


Category RDS event ID Message Notes

maintenance, RDS-EVENT-0192 The update of your time zone The upgrade of your Oracle
notification file has started. time zone file has begun.
For more information,
see Oracle time zone file
autoupgrade (p. 2091).

maintenance, RDS-EVENT-0193 No update is available for the Your Oracle DB instance is


notification current time zone file version. using latest time zone file
version, and either of the
following statements is true:

• You recently added the


TIMEZONE_FILE_AUTOUPGRADE
option.
• Your Oracle DB engine is
being upgraded.

For more information,


see Oracle time zone file
autoupgrade (p. 2091).

maintenance, RDS-EVENT-0194 The update of your time zone The update of your Oracle
notification file has finished. time zone file has completed.
For more information,
see Oracle time zone file
autoupgrade (p. 2091).

maintenance, RDS-EVENT-0195 message The update of the Oracle


failure time zone file failed. For
more information, see
Oracle time zone file
autoupgrade (p. 2091).

notification RDS-EVENT-0044 message This is an operator-issued


notification. For more
information, see the event
message.

notification RDS-EVENT-0048 Delaying database engine Patching of the DB instance


upgrade since this instance has been delayed.
has read replicas that need to
be upgraded first.

notification RDS-EVENT-0054 message The MySQL storage engine


you are using is not InnoDB,
which is the recommended
MySQL storage engine for
Amazon RDS. For information
about MySQL storage
engines, see Supported
storage engines for RDS for
MySQL (p. 1624).


Category RDS event ID Message Notes

notification RDS-EVENT-0055 message The number of tables you


have for your DB instance
exceeds the recommended
best practices for Amazon
RDS. Reduce the number
of tables on your DB
instance. For information
about recommended best
practices, see Amazon
RDS basic operational
guidelines (p. 286).

notification RDS-EVENT-0056 message The number of databases you


have for your DB instance
exceeds the recommended
best practices for Amazon
RDS. Reduce the number
of databases on your DB
instance. For information
about recommended best
practices, see Amazon
RDS basic operational
guidelines (p. 286).

notification RDS-EVENT-0064 The TDE encryption key was For information about
rotated successfully. recommended best
practices, see Amazon
RDS basic operational
guidelines (p. 286).

notification RDS-EVENT-0084 Unable to convert the You attempted to convert


DB instance to Multi-AZ: a DB instance to Multi-AZ,
message. but it contains in-memory
file groups that are not
supported for Multi-AZ.
For more information, see
Multi-AZ deployments for
Amazon RDS for Microsoft
SQL Server (p. 1450).

notification RDS-EVENT-0087 DB instance stopped.

notification RDS-EVENT-0088 DB instance started.

notification RDS-EVENT-0154 DB instance is being started


due to it exceeding the
maximum allowed time being
stopped.


Category RDS event ID Message Notes

notification RDS-EVENT-0157 Unable to modify the DB RDS can't modify the DB


instance class. message. instance class because the
target instance class can't
support the number of
databases that exist on the
source DB instance. The error
message appears as: "The
instance has N databases,
but after conversion it would
only support N". For more
information, see Limitations
for Microsoft SQL Server DB
instances (p. 1357).

notification RDS-EVENT-0158 Database instance is in


a state that cannot be
upgraded: message.

notification RDS-EVENT-0167 message The RDS Custom support


perimeter configuration has
changed.

notification RDS-EVENT-0189 The gp2 burst balance The gp2 burst balance
credits for the RDS database credits for the RDS database
instance are low. To resolve instance are low. To resolve
this issue, reduce IOPS usage this issue, reduce IOPS usage
or modify your storage or modify your storage
settings to enable higher settings to enable higher
performance. performance. For more
information, see I/O credits
and burst performance in
the Amazon Elastic Compute
Cloud User Guide.

notification RDS-EVENT-0225 Storage size amount GB is This event is invoked when


approaching the maximum storage reaches 80% of the
storage threshold amount maximum storage threshold.
GB. Increase the maximum To avoid the event, increase
storage threshold. the maximum storage
threshold.


Category RDS event ID Message Notes

notification RDS-EVENT-0231 Your DB instance's storage An error has occurred in the


modification encountered read replication process. For
an internal error. The more information, see the
modification request is event message.
pending and will be retried
later. In addition, see the
troubleshooting section for
read replicas for your DB
engine.

• Troubleshooting a
MariaDB read replica
problem (p. 1327)
• Troubleshooting a SQL
Server read replica
problem (p. 1449)
• Troubleshooting a
MySQL read replica
problem (p. 1718)
• Troubleshooting RDS for
Oracle replicas (p. 1988)

notification RDS-EVENT-0253 The database is using the RDS Optimized Writes


doublewrite buffer. message. is incompatible with
For more information see the the instance storage
RDS Optimized Writes for configuration. For more
name documentation. information, see Improving
write performance with
Amazon RDS Optimized
Writes for MySQL (p. 1659)
and Improving write
performance with Amazon
RDS Optimized Writes for
MariaDB (p. 1284).

read replica RDS-EVENT-0045 Replication has stopped. Replication on your DB


instance has been stopped
due to insufficient storage.
Scale storage or reduce
the maximum size of your
redo logs to let replication
continue. To accommodate
redo logs of size %d MiB you
need at least %d MiB free
storage.


Category RDS event ID Message Notes

read replica RDS-EVENT-0046 Replication for the Read This message appears when
Replica resumed. you first create a read replica,
or as a monitoring message
confirming that replication is
functioning properly. If this
message follows an RDS-
EVENT-0045 notification,
then replication has resumed
following an error or after
replication was stopped.

read replica RDS-EVENT-0057 Replication streaming has


been terminated.

read replica RDS-EVENT-0062 Replication for the Read


Replica has been manually
stopped.

read replica RDS-EVENT-0063 Replication from Non RDS


instance has been reset.

read replica RDS-EVENT-0202 Read replica creation failed.

recovery RDS-EVENT-0020 Recovery of the DB instance


has started. Recovery time
will vary with the amount of
data to be recovered.

recovery RDS-EVENT-0021 Recovery of the DB instance


is complete.

recovery RDS-EVENT-0023 Emergent Snapshot Request: A manual backup has been


message. requested but Amazon RDS
is currently in the process
of creating a DB snapshot.
Submit the request again
after Amazon RDS has
completed the DB snapshot.

recovery RDS-EVENT-0052 Multi-AZ instance recovery Recovery time will vary with
started. the amount of data to be
recovered.

recovery RDS-EVENT-0053 Multi-AZ instance recovery


completed. Pending failover
or activation.


Category RDS event ID Message Notes

recovery RDS-EVENT-0066 Instance will be degraded The SQL Server DB instance


while mirroring is is re-establishing its mirror.
reestablished: message. Performance will be
degraded until the mirror is
reestablished. A database
was found with non-FULL
recovery model. The recovery
model was changed back
to FULL and mirroring
recovery was started.
(<dbname>: <recovery model
found>[,...])"

recovery RDS-EVENT-0166 message The RDS Custom DB instance


is inside the support
perimeter.

restoration RDS-EVENT-0019 Restored from DB instance The DB instance has been


name to name. restored from a point-in-time
backup.

security RDS-EVENT-0068 Decrypting hsm partition RDS is decrypting the


password to update instance. AWS CloudHSM partition
password to make updates
to the DB instance. For more
information see Oracle
Database Transparent Data
Encryption (TDE) with
AWS CloudHSM in the AWS
CloudHSM User Guide.

security RDS-EVENT-0230 A system update is available A new Operating System


patching for your DB instance. For update is available.
information about applying
updates, see 'Maintaining a A new, minor version,
DB instance' in the RDS User operating system update
Guide. is available for your DB
instance. For information
about applying updates,
see Working with operating
system updates (p. 426).

DB parameter group events


The following table shows the event category and a list of events when a DB parameter group is the
source type.

Category | RDS event ID | Message | Notes

configuration change | RDS-EVENT-0037 | Updated parameter name to value with apply method method.


DB security group events


The following table shows the event category and a list of events when a DB security group is the source
type.
Note
DB security groups are resources for EC2-Classic. EC2-Classic was retired on August 15, 2022.
If you haven't migrated from EC2-Classic to a VPC, we recommend that you migrate as soon as
possible. For more information, see Migrate from EC2-Classic to a VPC in the Amazon EC2 User
Guide and the blog EC2-Classic Networking is Retiring – Here’s How to Prepare.

Category | RDS event ID | Message | Notes

configuration change | RDS-EVENT-0038 | Applied change to security group.
failure | RDS-EVENT-0039 | Revoking authorization as user. | The security group owned by user doesn't exist. The authorization for the security group has been revoked because it is invalid.

DB snapshot events
The following table shows the event category and a list of events when a DB snapshot is the source type.

Category | RDS event ID | Message | Notes

creation | RDS-EVENT-0040 | Creating manual snapshot.
creation | RDS-EVENT-0042 | Manual snapshot created.
creation | RDS-EVENT-0090 | Creating automated snapshot.
creation | RDS-EVENT-0091 | Automated snapshot created.
deletion | RDS-EVENT-0041 | Deleted user snapshot.
notification | RDS-EVENT-0059 | Started copy of snapshot name from region name. | This is a cross-Region snapshot copy.
notification | RDS-EVENT-0060 | Finished copy of snapshot name from region name in number minutes. | This is a cross-Region snapshot copy.
notification | RDS-EVENT-0061 | Canceled snapshot copy request of name from region name. | This is a cross-Region snapshot copy.
notification | RDS-EVENT-0159 | The snapshot export task failed.
notification | RDS-EVENT-0160 | The snapshot export task was canceled.
notification | RDS-EVENT-0161 | The snapshot export task completed.
notification | RDS-EVENT-0196 | Started copy of snapshot name in region name. | This is a local snapshot copy.
notification | RDS-EVENT-0197 | Finished copy of snapshot name in region name. | This is a local snapshot copy.
notification | RDS-EVENT-0190 | Canceled snapshot copy request of name in region name. | This is a local snapshot copy.
restoration | RDS-EVENT-0043 | Restored from snapshot name. | A DB instance is being restored from a DB snapshot.

DB cluster snapshot events


The following table shows the event category and a list of events when a DB cluster snapshot is the
source type.

Category | RDS event ID | Message | Notes

backup | RDS-EVENT-0074 | Creating manual cluster snapshot.
backup | RDS-EVENT-0075 | Manual cluster snapshot created.
backup | RDS-EVENT-0168 | Creating automated cluster snapshot.
backup | RDS-EVENT-0169 | Automated cluster snapshot created.

RDS Proxy events


The following table shows the event category and a list of events when an RDS Proxy is the source type.

Category | RDS event ID | Message | Notes

configuration change | RDS-EVENT-0204 | RDS modified DB proxy name.
configuration change | RDS-EVENT-0207 | RDS modified the end point of the DB proxy name.
configuration change | RDS-EVENT-0213 | RDS detected the addition of the DB instance and automatically added it to the target group of the DB proxy name.
configuration change | RDS-EVENT-0213 | RDS detected creation of DB instance name and automatically added it to target group name of DB proxy name.
configuration change | RDS-EVENT-0214 | RDS detected deletion of DB instance name and automatically removed it from target group name of DB proxy name.
configuration change | RDS-EVENT-0215 | RDS detected deletion of DB cluster name and automatically removed it from target group name of DB proxy name.
creation | RDS-EVENT-0203 | RDS created DB proxy name.
creation | RDS-EVENT-0206 | RDS created endpoint name for DB proxy name.
deletion | RDS-EVENT-0205 | RDS deleted DB proxy name.
deletion | RDS-EVENT-0208 | RDS deleted endpoint name for DB proxy name.
failure | RDS-EVENT-0243 | RDS failed to provision capacity for proxy name because there aren't enough IP addresses available in your subnets: name. To fix the issue, make sure that your subnets have the minimum number of unused IP addresses as recommended in the RDS Proxy documentation. | To determine the recommended number for your instance class, see Planning for IP address capacity (p. 1208).
failure | RDS-EVENT-0275 | RDS throttled some connections to DB proxy (RDS Proxy).

Blue/green deployment events


The following table shows the event category and a list of events when a blue/green deployment is the
source type.

For more information about blue/green deployments, see Using Amazon RDS Blue/Green Deployments
for database updates (p. 566).


Category | Amazon RDS event ID | Message | Notes

creation | RDS-EVENT-0244 | Blue/green deployment tasks completed. You can make more modifications to the green environment databases or switch over the deployment.
deletion | RDS-EVENT-0246 | Blue/green deployment deleted.
failure | RDS-EVENT-0245 | Creation of blue/green deployment failed because the (source/target) DB (instance/cluster) wasn't found.
failure | RDS-EVENT-0249 | Switchover canceled on blue/green deployment.
failure | RDS-EVENT-0252 | Switchover from primary source to target canceled due to reason.
failure | RDS-EVENT-0261 | Switchover from source to target was canceled.
notification | RDS-EVENT-0247 | Switchover started on blue/green deployment.
notification | RDS-EVENT-0248 | Switchover completed on blue/green deployment.
notification | RDS-EVENT-0250 | Switchover from primary source to target started.
notification | RDS-EVENT-0251 | Switchover from primary source to target completed. Renamed databases.
notification | RDS-EVENT-0259 | Switchover from source to target started.
notification | RDS-EVENT-0260 | Switchover from source to target completed. Renamed databases.

Custom engine version events


The following table shows the event category and a list of events when a custom engine version is the
source type.


Category | Amazon RDS event ID | Message | Notes

failure | RDS-EVENT-0198 | Creation failed for custom engine version name. message | The message includes details about the failure, such as missing files.
failure | RDS-EVENT-0277 | Failure during deletion of custom engine version name. message | The message includes details about the failure.


Monitoring Amazon RDS log files


Every RDS database engine generates logs that you can access for auditing and troubleshooting. The
type of logs depends on your database engine.

You can access database logs using the AWS Management Console, the AWS Command Line Interface
(AWS CLI), or the Amazon RDS API. You can't view, watch, or download transaction logs.

Topics
• Viewing and listing database log files (p. 895)
• Downloading a database log file (p. 896)
• Watching a database log file (p. 897)
• Publishing database logs to Amazon CloudWatch Logs (p. 898)
• Reading log file contents using REST (p. 900)
• MariaDB database log files (p. 902)
• Microsoft SQL Server database log files (p. 911)
• MySQL database log files (p. 915)
• Oracle database log files (p. 924)
• RDS for PostgreSQL database log files (p. 931)

Viewing and listing database log files


You can view database log files for your Amazon RDS DB engine by using the AWS Management Console.
You can list what log files are available for download or monitoring by using the AWS CLI or Amazon RDS
API.
Note
If you can't view the list of log files for an existing RDS for Oracle DB instance, reboot the
instance to view the list.

Console

To view a database log file

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the navigation pane, choose Databases.
3. Choose the name of the DB instance that has the log file that you want to view.
4. Choose the Logs & events tab.
5. Scroll down to the Logs section.
6. (Optional) Enter a search term to filter your results.
7. Choose the log that you want to view, and then choose View.

AWS CLI
To list the available database log files for a DB instance, use the AWS CLI describe-db-log-files
command.

The following example returns a list of log files for a DB instance named my-db-instance.


Example

aws rds describe-db-log-files --db-instance-identifier my-db-instance

RDS API
To list the available database log files for a DB instance, use the Amazon RDS API
DescribeDBLogFiles action.
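
If you're working from an AWS SDK instead of the CLI, the same operation is available through the SDK clients. The following is a minimal sketch that assumes the AWS SDK for Python (Boto3) and a DB instance named my-db-instance (an example identifier); it prints each log file with its size and last-written time, following the pagination marker.

import boto3

rds = boto3.client("rds")

# DescribeDBLogFiles is paginated, so follow the Marker until it is absent.
marker = None
while True:
    kwargs = {"DBInstanceIdentifier": "my-db-instance"}  # example identifier
    if marker:
        kwargs["Marker"] = marker
    response = rds.describe_db_log_files(**kwargs)
    for log_file in response["DescribeDBLogFiles"]:
        print(log_file["LogFileName"], log_file["Size"], log_file["LastWritten"])
    marker = response.get("Marker")
    if not marker:
        break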

Downloading a database log file


You can use the AWS Management Console, AWS CLI, or API to download a database log file.

Console
To download a database log file

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the navigation pane, choose Databases.
3. Choose the name of the DB instance that has the log file that you want to view.
4. Choose the Logs & events tab.
5. Scroll down to the Logs section.
6. In the Logs section, choose the button next to the log that you want to download, and then choose
Download.
7. Open the context (right-click) menu for the link provided, and then choose Save Link As. Enter the
location where you want the log file to be saved, and then choose Save.

AWS CLI
To download a database log file, use the AWS CLI command download-db-log-file-portion. By
default, this command downloads only the latest portion of a log file. However, you can download an
entire file by specifying the parameter --starting-token 0.

The following example shows how to download the entire contents of a log file called log/ERROR.4 and
store it in a local file called errorlog.txt.

Example

For Linux, macOS, or Unix:


aws rds download-db-log-file-portion \
    --db-instance-identifier myexampledb \
    --starting-token 0 --output text \
    --log-file-name log/ERROR.4 > errorlog.txt

For Windows:

aws rds download-db-log-file-portion ^
    --db-instance-identifier myexampledb ^
    --starting-token 0 --output text ^
    --log-file-name log/ERROR.4 > errorlog.txt

RDS API
To download a database log file, use the Amazon RDS API DownloadDBLogFilePortion action.
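
Because DownloadDBLogFilePortion returns the file in portions, an application that needs the whole file follows the returned marker until AdditionalDataPending is false. The following is a minimal sketch assuming the AWS SDK for Python (Boto3); it reuses the myexampledb instance and log/ERROR.4 file names from the CLI example above.

import boto3

rds = boto3.client("rds")

# Download the complete log/ERROR.4 file by following the pagination marker.
marker = "0"  # "0" starts at the beginning of the file
with open("errorlog.txt", "w") as output:
    while True:
        portion = rds.download_db_log_file_portion(
            DBInstanceIdentifier="myexampledb",
            LogFileName="log/ERROR.4",
            Marker=marker,
        )
        output.write(portion.get("LogFileData", ""))
        if not portion.get("AdditionalDataPending"):
            break
        marker = portion["Marker"]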

Watching a database log file


Watching a database log file is equivalent to tailing the file on a UNIX or Linux system. You can watch a
log file by using the AWS Management Console. RDS refreshes the tail of the log every 5 seconds.

To watch a database log file

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the navigation pane, choose Databases.
3. Choose the name of the DB instance that has the log file that you want to view.
4. Choose the Logs & events tab.

5. In the Logs section, choose a log file, and then choose Watch.

RDS shows the tail of the log in the console, for example the tail of a MySQL error log.
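
The console provides the watch experience, but you can approximate it in a script by polling DownloadDBLogFilePortion and printing whatever was written after the previous marker. The following is a rough sketch assuming the AWS SDK for Python (Boto3); the instance identifier and log file name are placeholders, and the polling interval mirrors the console's 5-second refresh.

import time
import boto3

rds = boto3.client("rds")

INSTANCE = "myexampledb"                    # placeholder instance identifier
LOG_FILE = "error/mysql-error-running.log"  # placeholder log file name

# Start from the last 100 lines, then keep asking for anything written
# after the marker that each response returns.
portion = rds.download_db_log_file_portion(
    DBInstanceIdentifier=INSTANCE, LogFileName=LOG_FILE, NumberOfLines=100
)
print(portion.get("LogFileData", ""), end="")
marker = portion["Marker"]

while True:
    time.sleep(5)  # the console's Watch view refreshes about every 5 seconds
    portion = rds.download_db_log_file_portion(
        DBInstanceIdentifier=INSTANCE, LogFileName=LOG_FILE, Marker=marker
    )
    if portion.get("LogFileData"):
        print(portion["LogFileData"], end="")
    marker = portion["Marker"]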


Publishing database logs to Amazon CloudWatch Logs

In an on-premises database, the database logs reside on the file system. Amazon RDS doesn't provide
host access to the database logs on the file system of your DB instance. For this reason, Amazon RDS lets
you export database logs to Amazon CloudWatch Logs. With CloudWatch Logs, you can perform real-
time analysis of the log data. You can also store the data in highly durable storage and manage the data
with the CloudWatch Logs Agent.

Topics
• Overview of RDS integration with CloudWatch Logs (p. 898)
• Deciding which logs to publish to CloudWatch Logs (p. 899)
• Specifying the logs to publish to CloudWatch Logs (p. 899)
• Searching and filtering your logs in CloudWatch Logs (p. 899)

Overview of RDS integration with CloudWatch Logs


In CloudWatch Logs, a log stream is a sequence of log events that share the same source. Each separate
source of logs in CloudWatch Logs makes up a separate log stream. A log group is a group of log streams
that share the same retention, monitoring, and access control settings.

Amazon RDS continuously streams your DB instance log records to a log group. For example, you have
a log group /aws/rds/instance/instance_name/log_type for each type of log that you publish.
This log group is in the same AWS Region as the database instance that generates the log.

AWS retains log data published to CloudWatch Logs for an indefinite time period unless you specify a
retention period. For more information, see Change log data retention in CloudWatch Logs.
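
If you don't want indefinite retention, you can set a retention period on the exported log group yourself. The following is a minimal sketch assuming the AWS SDK for Python (Boto3); the log group name follows the /aws/rds/instance/instance_name/log_type pattern described above, and the instance name and the 30-day retention period are example values.

import boto3

logs = boto3.client("logs")

# Keep the exported error log for 30 days instead of indefinitely.
# The log group name and retention period are example values.
logs.put_retention_policy(
    logGroupName="/aws/rds/instance/my_instance/error",
    retentionInDays=30,
)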


Deciding which logs to publish to CloudWatch Logs


Each RDS database engine supports its own set of logs. To learn about the options for your database
engine, review the following topics:

• the section called “Publishing MariaDB logs to Amazon CloudWatch Logs” (p. 904)
• the section called “Publishing MySQL logs to Amazon CloudWatch Logs” (p. 918)
• the section called “Publishing Oracle logs to Amazon CloudWatch Logs” (p. 927)
• the section called “Publishing PostgreSQL logs to Amazon CloudWatch Logs” (p. 936)
• the section called “Publishing SQL Server logs to Amazon CloudWatch Logs” (p. 911)

Specifying the logs to publish to CloudWatch Logs


You specify which logs to publish in the console. Make sure that you have a service-linked role in AWS
Identity and Access Management (IAM). For more information about service-linked roles, see Using
service-linked roles for Amazon RDS (p. 2684).

To specify the logs to publish

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the navigation pane, choose Databases.
3. Do either of the following:

• Choose Create database.


• Choose a database from the list, and then choose Modify.
4. In Logs exports, choose which logs to publish.

For example, you might choose the audit log, error log, general log, and slow query log.

Searching and filtering your logs in CloudWatch Logs


You can search for log entries that meet specified criteria by using the CloudWatch Logs console. You can
access the logs either through the RDS console, which leads you to the CloudWatch Logs console, or
from the CloudWatch Logs console directly.


To search your RDS logs using the RDS console

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the navigation pane, choose Databases.
3. Choose a DB instance.
4. Choose Configuration.
5. Under Published logs, choose the database log that you want to view.

To search your RDS logs using the CloudWatch Logs console

1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.


2. In the navigation pane, choose Log groups.
3. In the filter box, enter /aws/rds.
4. For Log Groups, choose the name of the log group containing the log stream to search.
5. For Log Streams, choose the name of the log stream to search.
6. Under Log events, enter the filter syntax to use.

For more information, see Searching and filtering log data in the Amazon CloudWatch Logs User Guide.
For a blog tutorial explaining how to monitor RDS logs, see Build proactive database monitoring for
Amazon RDS with Amazon CloudWatch Logs, AWS Lambda, and Amazon SNS.
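
You can run the same kind of search programmatically with the CloudWatch Logs FilterLogEvents operation. The following is a minimal sketch assuming the AWS SDK for Python (Boto3); the log group name, filter pattern, and one-hour window are example values.

import time
import boto3

logs = boto3.client("logs")

# Search the last hour of an exported error log for lines containing "ERROR".
start_time = int((time.time() - 3600) * 1000)  # milliseconds since epoch

paginator = logs.get_paginator("filter_log_events")
for page in paginator.paginate(
    logGroupName="/aws/rds/instance/my_instance/error",  # example log group
    filterPattern="ERROR",                               # example pattern
    startTime=start_time,
):
    for event in page["events"]:
        print(event["timestamp"], event["message"])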

Reading log file contents using REST


Amazon RDS provides a REST endpoint that allows access to DB instance log files. This is useful if you
need to write an application to stream Amazon RDS log file contents.

The syntax is:

GET /v13/downloadCompleteLogFile/DBInstanceIdentifier/LogFileName HTTP/1.1
Content-type: application/json
host: rds.region.amazonaws.com

The following parameters are required:

• DBInstanceIdentifier—the name of the DB instance that contains the log file you want to
download.
• LogFileName—the name of the log file to be downloaded.

The response contains the contents of the requested log file, as a stream.

The following example downloads the log file named log/ERROR.6 for the DB instance named sample-sql
in the us-west-2 region.

GET /v13/downloadCompleteLogFile/sample-sql/log/ERROR.6 HTTP/1.1
host: rds.us-west-2.amazonaws.com
X-Amz-Security-Token: AQoDYXdzEIH//////////
wEa0AIXLhngC5zp9CyB1R6abwKrXHVR5efnAVN3XvR7IwqKYalFSn6UyJuEFTft9nObglx4QJ+GXV9cpACkETq=
X-Amz-Date: 20140903T233749Z
X-Amz-Algorithm: AWS4-HMAC-SHA256
X-Amz-Credential: AKIADQKE4SARGYLE/20140903/us-west-2/rds/aws4_request
X-Amz-SignedHeaders: host
X-Amz-Content-SHA256: e3b0c44298fc1c229afbf4c8996fb92427ae41e4649b934de495991b7852b855
X-Amz-Expires: 86400


X-Amz-Signature: 353a4f14b3f250142d9afc34f9f9948154d46ce7d4ec091d0cdabbcf8b40c558

If you specify a nonexistent DB instance, the response consists of the following error:

• DBInstanceNotFound—DBInstanceIdentifier does not refer to an existing DB instance. (HTTP


status code: 404)
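
The request must be signed with AWS Signature Version 4, as in the signed example above. The following is one possible sketch of doing that from Python, using botocore (installed with the AWS SDK for Python) to sign the request and the standard library to send it; the instance name, log file name, and Region reuse the example values above.

import urllib.request

import botocore.session
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

region = "us-west-2"
host = f"rds.{region}.amazonaws.com"
url = f"https://{host}/v13/downloadCompleteLogFile/sample-sql/log/ERROR.6"

# Sign the GET request with SigV4 for the rds service in this Region.
credentials = botocore.session.Session().get_credentials()
aws_request = AWSRequest(method="GET", url=url, headers={"host": host})
SigV4Auth(credentials, "rds", region).add_auth(aws_request)

# Send the signed request and stream the response body to a local file.
request = urllib.request.Request(url, headers=dict(aws_request.headers))
with urllib.request.urlopen(request) as response, open("errorlog.txt", "wb") as out:
    for chunk in iter(lambda: response.read(8192), b""):
        out.write(chunk)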


MariaDB database log files


You can monitor the MariaDB error log, slow query log, and the general log. The MariaDB error log is
generated by default; you can generate the slow query and general logs by setting parameters in your
DB parameter group. Amazon RDS rotates all of the MariaDB log files; the intervals for each type are
given following.

You can monitor the MariaDB logs directly through the Amazon RDS console, Amazon RDS API, Amazon
RDS CLI, or AWS SDKs. You can also access MariaDB logs by directing the logs to a database table in the
main database and querying that table. You can use the mysqlbinlog utility to download a binary log.

For more information about viewing, downloading, and watching file-based database logs, see
Monitoring Amazon RDS log files (p. 895).

Topics
• Accessing MariaDB error logs (p. 902)
• Accessing the MariaDB slow query and general logs (p. 902)
• Publishing MariaDB logs to Amazon CloudWatch Logs (p. 904)
• Log file size (p. 906)
• Managing table-based MariaDB logs (p. 906)
• Binary logging format (p. 907)
• Accessing MariaDB binary logs (p. 908)
• Binary log annotation (p. 909)

Accessing MariaDB error logs


The MariaDB error log is written to the <host-name>.err file. You can view this file by using the
Amazon RDS console. You can also retrieve the log using the Amazon RDS API, Amazon RDS CLI, or AWS
SDKs. The <host-name>.err file is flushed every 5 minutes, and its contents are appended to mysql-
error-running.log. The mysql-error-running.log file is then rotated every hour and the hourly
files generated during the last 24 hours are retained. Each log file has the hour it was generated (in UTC)
appended to its name. The log files also have a timestamp that helps you determine when the log entries
were written.

MariaDB writes to the error log only on startup, shutdown, and when it encounters errors. A DB instance
can go hours or days without new entries being written to the error log. If you see no recent entries, it's
because the server did not encounter an error that resulted in a log entry.

Accessing the MariaDB slow query and general logs


You can write the MariaDB slow query log and general log to a file or database table by setting
parameters in your DB parameter group. For information about creating and modifying a DB parameter
group, see Working with parameter groups (p. 347). You must set these parameters before you can view
the slow query log or general log in the Amazon RDS console or by using the Amazon RDS API, AWS CLI,
or AWS SDKs.

You can control MariaDB logging by using the parameters in this list:

• slow_query_log: To create the slow query log, set to 1. The default is 0.


• general_log: To create the general log, set to 1. The default is 0.
• long_query_time: To prevent fast-running queries from being logged in the slow query log, specify
a value for the shortest query run time to be logged, in seconds. The default is 10 seconds; the


minimum is 0. If log_output = FILE, you can specify a floating point value that goes to microsecond
resolution. If log_output = TABLE, you must specify an integer value with second resolution. Only
queries whose run time exceeds the long_query_time value are logged. For example, setting
long_query_time to 0.1 prevents any query that runs for less than 100 milliseconds from being
logged.
• log_queries_not_using_indexes: To log all queries that do not use an index to the slow query
log, set this parameter to 1. The default is 0. Queries that do not use an index are logged even if their
run time is less than the value of the long_query_time parameter.
• log_output option: You can specify one of the following options for the log_output parameter:
• TABLE (default)– Write general queries to the mysql.general_log table, and slow queries to the
mysql.slow_log table.
• FILE– Write both general and slow query logs to the file system. Log files are rotated hourly.
• NONE– Disable logging.

When logging is enabled, Amazon RDS rotates table logs or deletes log files at regular intervals. This
measure is a precaution to reduce the possibility of a large log file either blocking database use or
affecting performance. FILE and TABLE logging approach rotation and deletion as follows:

• When FILE logging is enabled, log files are examined every hour and log files older than 24 hours
are deleted. In some cases, the remaining combined log file size after the deletion might exceed
the threshold of 2 percent of a DB instance's allocated space. In these cases, the largest log files are
deleted until the log file size no longer exceeds the threshold.
• When TABLE logging is enabled, in some cases log tables are rotated every 24 hours. This rotation
occurs if the space used by the table logs is more than 20 percent of the allocated storage space. It
also occurs if the size of all logs combined is greater than 10 GB. If the amount of space used for a DB
instance is greater than 90 percent of the DB instance's allocated storage space, the thresholds for log
rotation are reduced. Log tables are then rotated if the space used by the table logs is more than 10
percent of the allocated storage space. They're also rotated if the size of all logs combined is greater
than 5 GB.

When log tables are rotated, the current log table is copied to a backup log table and the entries in
the current log table are removed. If the backup log table already exists, then it is deleted before the
current log table is copied to the backup. You can query the backup log table if needed. The backup
log table for the mysql.general_log table is named mysql.general_log_backup. The backup
log table for the mysql.slow_log table is named mysql.slow_log_backup.

You can rotate the mysql.general_log table by calling the mysql.rds_rotate_general_log


procedure. You can rotate the mysql.slow_log table by calling the mysql.rds_rotate_slow_log
procedure.

Table logs are rotated during a database version upgrade.

Amazon RDS records both TABLE and FILE log rotation in an Amazon RDS event and sends you a
notification.

To work with the logs from the Amazon RDS console, Amazon RDS API, Amazon RDS CLI, or AWS SDKs,
set the log_output parameter to FILE. Like the MariaDB error log, these log files are rotated hourly. The
log files that were generated during the previous 24 hours are retained.
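
You can set these parameters with the CLI or an SDK as well as with the console. The following is a minimal sketch assuming the AWS SDK for Python (Boto3) and an existing custom parameter group named my-mariadb-params (an example name) that is associated with your DB instance; it enables the slow query log, lowers long_query_time, and writes logs to the file system.

import boto3

rds = boto3.client("rds")

# Enable the slow query log in a custom DB parameter group and write logs
# to the file system. The parameter group name is an example value.
rds.modify_db_parameter_group(
    DBParameterGroupName="my-mariadb-params",
    Parameters=[
        {"ParameterName": "slow_query_log", "ParameterValue": "1",
         "ApplyMethod": "immediate"},
        {"ParameterName": "long_query_time", "ParameterValue": "5",
         "ApplyMethod": "immediate"},
        {"ParameterName": "log_output", "ParameterValue": "FILE",
         "ApplyMethod": "immediate"},
    ],
)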

For more information about the slow query and general logs, go to the following topics in the MariaDB
documentation:

• Slow query log


• General query log


Publishing MariaDB logs to Amazon CloudWatch Logs


You can configure your MariaDB DB instance to publish log data to a log group in Amazon CloudWatch
Logs. With CloudWatch Logs, you can perform real-time analysis of the log data, and use CloudWatch to
create alarms and view metrics. You can use CloudWatch Logs to store your log records in highly durable
storage.

Amazon RDS publishes each MariaDB database log as a separate database stream in the log group. For
example, suppose that you configure the export function to include the slow query log. Then slow query
data is stored in a slow query log stream in the /aws/rds/instance/my_instance/slowquery log
group.

The error log is enabled by default. The following table summarizes the requirements for the other
MariaDB logs.

Log | Requirement

Audit log | The DB instance must use a custom option group with the MARIADB_AUDIT_PLUGIN option.
General log | The DB instance must use a custom parameter group with the parameter setting general_log = 1 to enable the general log.
Slow query log | The DB instance must use a custom parameter group with the parameter setting slow_query_log = 1 to enable the slow query log.
Log output | The DB instance must use a custom parameter group with the parameter setting log_output = FILE to write logs to the file system and publish them to CloudWatch Logs.

Console

To publish MariaDB logs to CloudWatch Logs from the console

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the navigation pane, choose Databases, and then choose the DB instance that you want to
modify.
3. Choose Modify.
4. In the Log exports section, choose the logs that you want to start publishing to CloudWatch Logs.
5. Choose Continue, and then choose Modify DB Instance on the summary page.

AWS CLI

You can publish a MariaDB logs with the AWS CLI. You can call the modify-db-instance command
with the following parameters:

• --db-instance-identifier
• --cloudwatch-logs-export-configuration


Note
A change to the --cloudwatch-logs-export-configuration option is always applied
to the DB instance immediately. Therefore, the --apply-immediately and --no-apply-
immediately options have no effect.

You can also publish MariaDB logs by calling the following AWS CLI commands:

• create-db-instance
• restore-db-instance-from-db-snapshot
• restore-db-instance-from-s3
• restore-db-instance-to-point-in-time

Run one of these AWS CLI commands with the following options:

• --db-instance-identifier
• --enable-cloudwatch-logs-exports
• --db-instance-class
• --engine

Other options might be required depending on the AWS CLI command you run.

Example

The following example modifies an existing MariaDB DB instance to publish log files to CloudWatch Logs.
The --cloudwatch-logs-export-configuration value is a JSON object. The key for this object is
EnableLogTypes, and its value is an array of strings with any combination of audit, error, general,
and slowquery.

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["audit","error","general","slowquery"]}'

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["audit","error","general","slowquery"]}'

Example

The following command creates a MariaDB DB instance and publishes log files to CloudWatch Logs.
The --enable-cloudwatch-logs-exports value is a JSON array of strings. The strings can be any
combination of audit, error, general, and slowquery.

For Linux, macOS, or Unix:

aws rds create-db-instance \
    --db-instance-identifier mydbinstance \
    --enable-cloudwatch-logs-exports '["audit","error","general","slowquery"]' \
    --db-instance-class db.m4.large \
    --engine mariadb

For Windows:

aws rds create-db-instance ^
    --db-instance-identifier mydbinstance ^
    --enable-cloudwatch-logs-exports '["audit","error","general","slowquery"]' ^
    --db-instance-class db.m4.large ^
    --engine mariadb

RDS API

You can publish MariaDB logs with the RDS API. You can call the ModifyDBInstance operation with the
following parameters:

• DBInstanceIdentifier
• CloudwatchLogsExportConfiguration

Note
A change to the CloudwatchLogsExportConfiguration parameter is always applied to the
DB instance immediately. Therefore, the ApplyImmediately parameter has no effect.

You can also publish MariaDB logs by calling the following RDS API operations:

• CreateDBInstance
• RestoreDBInstanceFromDBSnapshot
• RestoreDBInstanceFromS3
• RestoreDBInstanceToPointInTime

Run one of these RDS API operations with the following parameters:

• DBInstanceIdentifier
• EnableCloudwatchLogsExports
• Engine
• DBInstanceClass

Other parameters might be required depending on the RDS API operation that you run.
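
The following is a minimal sketch of the ModifyDBInstance call assuming the AWS SDK for Python (Boto3); the instance identifier and the chosen log types are example values.

import boto3

rds = boto3.client("rds")

# Turn on publishing of the error and slow query logs for an existing
# MariaDB DB instance. The instance identifier is an example value.
rds.modify_db_instance(
    DBInstanceIdentifier="mydbinstance",
    CloudwatchLogsExportConfiguration={
        "EnableLogTypes": ["error", "slowquery"],
    },
)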

Log file size


The MariaDB slow query log, error log, and the general log file sizes are constrained to no more
than 2 percent of the allocated storage space for a DB instance. To maintain this threshold, logs are
automatically rotated every hour and log files older than 24 hours are removed. If the combined log file
size exceeds the threshold after removing old log files, then the largest log files are deleted until the log
file size no longer exceeds the threshold.

Managing table-based MariaDB logs


You can direct the general and slow query logs to tables on the DB instance. To do so, create a DB
parameter group and set the log_output server parameter to TABLE. General queries are then logged


to the mysql.general_log table, and slow queries are logged to the mysql.slow_log table. You
can query the tables to access the log information. Enabling this logging increases the amount of data
written to the database, which can degrade performance.

Both the general log and the slow query logs are disabled by default. In order to enable logging to
tables, you must also set the general_log and slow_query_log server parameters to 1.

Log tables keep growing until the respective logging activities are turned off by resetting the appropriate
parameter to 0. A large amount of data often accumulates over time, which can use up a considerable
percentage of your allocated storage space. Amazon RDS does not allow you to truncate the log tables,
but you can move their contents. Rotating a table saves its contents to a backup table and then creates
a new empty log table. You can manually rotate the log tables with the following command line
procedures, where the command prompt is indicated by PROMPT>:

PROMPT> CALL mysql.rds_rotate_slow_log;


PROMPT> CALL mysql.rds_rotate_general_log;

To completely remove the old data and reclaim the disk space, call the appropriate procedure twice in
succession.

Binary logging format


MariaDB on Amazon RDS supports the row-based, statement-based, and mixed binary logging formats.
The default binary logging format is mixed. For details on the different MariaDB binary log formats, see
Binary log formats in the MariaDB documentation.

If you plan to use replication, the binary logging format is important. This is because it determines
the record of data changes that is recorded in the source and sent to the replication targets. For
information about the advantages and disadvantages of different binary logging formats for replication,
see Advantages and disadvantages of statement-based and row-based replication in the MySQL
documentation.
Important
Setting the binary logging format to row-based can result in very large binary log files. Large
binary log files reduce the amount of storage available for a DB instance. They also can increase
the amount of time to perform a restore operation of a DB instance.
Statement-based replication can cause inconsistencies between the source DB instance and a
read replica. For more information, see Unsafe statements for statement-based replication in
the MariaDB documentation.

To set the MariaDB binary logging format

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. Choose the parameter group that is used by the DB instance that you want to modify.

You can't modify a default parameter group. If the DB instance is using a default parameter group,
create a new parameter group and associate it with the DB instance.

For more information on DB parameter groups, see Working with parameter groups (p. 347).
4. For Parameter group actions, choose Edit.
5. Set the binlog_format parameter to the binary logging format of your choice (ROW, STATEMENT,
or MIXED).
6. Choose Save changes to save the updates to the DB parameter group.
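
You can make the same change without the console by updating the parameter group from an SDK or the CLI. The following is a minimal sketch assuming the AWS SDK for Python (Boto3); the parameter group name and the ROW format are example values, and the group must already be associated with your DB instance.

import boto3

rds = boto3.client("rds")

# Set the binary logging format in a custom DB parameter group.
# The parameter group name and chosen format are example values.
rds.modify_db_parameter_group(
    DBParameterGroupName="my-mariadb-params",
    Parameters=[
        {"ParameterName": "binlog_format", "ParameterValue": "ROW",
         "ApplyMethod": "immediate"},
    ],
)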


Accessing MariaDB binary logs


You can use the mysqlbinlog utility to download binary logs in text format from MariaDB DB instances.
The binary log is downloaded to your local computer. For more information about using the mysqlbinlog
utility, go to Using mysqlbinlog in the MariaDB documentation.

To run the mysqlbinlog utility against an Amazon RDS instance, use the following options:

• Specify the --read-from-remote-server option.


• --host: Specify the DNS name from the endpoint of the instance.
• --port: Specify the port used by the instance.
• --user: Specify a MariaDB user that has been granted the replication slave permission.
• --password: Specify the password for the user, or omit a password value so the utility prompts you
for a password.
• --result-file: Specify the local file that receives the output.
• Specify the names of one or more binary log files. To get a list of the available logs, use the SQL
command SHOW BINARY LOGS.

For more information about mysqlbinlog options, go to mysqlbinlog options in the MariaDB
documentation.

The following is an example:

For Linux, macOS, or Unix:

mysqlbinlog \
--read-from-remote-server \
--host=mariadbinstance1.1234abcd.region.rds.amazonaws.com \
--port=3306 \
--user ReplUser \
--password <password> \
--result-file=/tmp/binlog.txt

For Windows:

mysqlbinlog ^
--read-from-remote-server ^
--host=mariadbinstance1.1234abcd.region.rds.amazonaws.com ^
--port=3306 ^
--user ReplUser ^
--password <password> ^
--result-file=/tmp/binlog.txt

Amazon RDS normally purges a binary log as soon as possible. However, the binary log must still be
available on the instance to be accessed by mysqlbinlog. To specify the number of hours for RDS to
retain binary logs, use the mysql.rds_set_configuration stored procedure. Specify a period with
enough time for you to download the logs. After you set the retention period, monitor storage usage for
the DB instance to ensure that the retained binary logs don't take up too much storage.

The following example sets the retention period to 1 day.

call mysql.rds_set_configuration('binlog retention hours', 24);

To display the current setting, use the mysql.rds_show_configuration stored procedure.


call mysql.rds_show_configuration;
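
Because retained binary logs count against your allocated storage, it can help to watch the FreeStorageSpace metric after raising the retention period. The following is a minimal sketch assuming the AWS SDK for Python (Boto3); the instance identifier and the one-hour window are example values.

import datetime

import boto3

cloudwatch = boto3.client("cloudwatch")

# Check the instance's minimum free storage over the last hour so that
# retained binary logs don't silently fill the volume.
now = datetime.datetime.utcnow()
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="FreeStorageSpace",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mariadbinstance1"}],
    StartTime=now - datetime.timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Minimum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Minimum"] / (1024 ** 3), "GiB free")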

Binary log annotation


In a MariaDB DB instance, you can use the Annotate_rows event to annotate a row event with a copy
of the SQL query that caused the row event. This approach provides similar functionality to enabling the
binlog_rows_query_log_events parameter on an RDS for MySQL DB instance.

You can enable binary log annotations globally by creating a custom parameter group and
setting the binlog_annotate_row_events parameter to 1. You can also enable annotations
at the session level, by calling SET SESSION binlog_annotate_row_events = 1. Use the
replicate_annotate_row_events parameter to replicate binary log annotations to the replica instance if
binary logging is enabled on it. No special privileges are required to use these settings.

The following is an example of a row-based transaction in MariaDB. The use of row-based logging is
triggered by setting the transaction isolation level to read-committed.

CREATE DATABASE IF NOT EXISTS test;


USE test;
CREATE TABLE square(x INT PRIMARY KEY, y INT NOT NULL) ENGINE = InnoDB;
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN;
INSERT INTO square(x, y) VALUES(5, 5 * 5);
COMMIT;

Without annotations, the binary log entries for the transaction look like the following:

BEGIN
/*!*/;
# at 1163
# at 1209
#150922 7:55:57 server id 1855786460 end_log_pos 1209 Table_map: `test`.`square`
mapped to number 76
#150922 7:55:57 server id 1855786460 end_log_pos 1247 Write_rows: table id 76
flags: STMT_END_F
### INSERT INTO `test`.`square`
### SET
### @1=5
### @2=25
# at 1247
#150922 7:56:01 server id 1855786460 end_log_pos 1274 Xid = 62
COMMIT/*!*/;

The following statement enables session-level annotations for this same transaction, and disables them
after committing the transaction:

CREATE DATABASE IF NOT EXISTS test;


USE test;
CREATE TABLE square(x INT PRIMARY KEY, y INT NOT NULL) ENGINE = InnoDB;
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
SET SESSION binlog_annotate_row_events = 1;
BEGIN;
INSERT INTO square(x, y) VALUES(5, 5 * 5);
COMMIT;
SET SESSION binlog_annotate_row_events = 0;

With annotations, the binary log entries for the transaction look like the following:

BEGIN


/*!*/;
# at 423
# at 483
# at 529
#150922 8:04:24 server id 1855786460 end_log_pos 483 Annotate_rows:
#Q> INSERT INTO square(x, y) VALUES(5, 5 * 5)
#150922 8:04:24 server id 1855786460 end_log_pos 529 Table_map: `test`.`square` mapped
to number 76
#150922 8:04:24 server id 1855786460 end_log_pos 567 Write_rows: table id 76 flags:
STMT_END_F
### INSERT INTO `test`.`square`
### SET
### @1=5
### @2=25
# at 567
#150922 8:04:26 server id 1855786460 end_log_pos 594 Xid = 88
COMMIT/*!*/;


Microsoft SQL Server database log files


You can access Microsoft SQL Server error logs, agent logs, trace files, and dump files by using the
Amazon RDS console, AWS CLI, or RDS API. For more information about viewing, downloading, and
watching file-based database logs, see Monitoring Amazon RDS log files (p. 895).

Topics
• Retention schedule (p. 911)
• Viewing the SQL Server error log by using the rds_read_error_log procedure (p. 911)
• Publishing SQL Server logs to Amazon CloudWatch Logs (p. 911)

Retention schedule
Log files are rotated each day and whenever your DB instance is restarted. The following is the retention
schedule for Microsoft SQL Server logs on Amazon RDS.

Log type | Retention schedule

Error logs | A maximum of 30 error logs are retained. Amazon RDS might delete error logs older than 7 days.
Agent logs | A maximum of 10 agent logs are retained. Amazon RDS might delete agent logs older than 7 days.
Trace files | Trace files are retained according to the trace file retention period of your DB instance. The default trace file retention period is 7 days. To modify the trace file retention period for your DB instance, see Setting the retention period for trace and dump files (p. 1621).
Dump files | Dump files are retained according to the dump file retention period of your DB instance. The default dump file retention period is 7 days. To modify the dump file retention period for your DB instance, see Setting the retention period for trace and dump files (p. 1621).

Viewing the SQL Server error log by using the rds_read_error_log procedure

You can use the Amazon RDS stored procedure rds_read_error_log to view error logs and agent logs.
For more information, see Viewing error and agent logs (p. 1620).

Publishing SQL Server logs to Amazon CloudWatch Logs


With Amazon RDS for SQL Server, you can publish error and agent log events directly to Amazon
CloudWatch Logs. Analyze the log data with CloudWatch Logs, then use CloudWatch to create alarms
and view metrics.

With CloudWatch Logs, you can do the following:

• Store logs in highly durable storage space with a retention period that you define.
• Search and filter log data.
• Share log data between accounts.


• Export logs to Amazon S3.


• Stream data to Amazon OpenSearch Service.
• Process log data in real time with Amazon Kinesis Data Streams. For more information, see Working
with Amazon CloudWatch Logs in the Amazon Managed Service for Apache Flink for SQL Applications
Developer Guide.

Amazon RDS publishes each SQL Server database log as a separate database stream in the log group.
For example, if you publish error logs, error data is stored in an error log stream in the /aws/rds/
instance/my_instance/error log group.

For Multi-AZ DB instances, Amazon RDS publishes the database log as two separate streams in the log
group. For example, if you publish the error logs, the error data is stored in the error log streams /aws/
rds/instance/my_instance.node1/error and /aws/rds/instance/my_instance.node2/
error respectively. The log streams don't change during a failover and the error log stream of each node
can contain error logs from primary or secondary instance.
Note
Publishing SQL Server logs to CloudWatch Logs isn't enabled by default. Publishing trace and
dump files isn't supported. Publishing SQL Server logs to CloudWatch Logs is supported in all
regions, except for Asia Pacific (Hong Kong).

Console

To publish SQL Server DB logs to CloudWatch Logs from the AWS Management Console

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the navigation pane, choose Databases, and then choose the DB instance that you want to
modify.
3. Choose Modify.
4. In the Log exports section, choose the logs that you want to start publishing to CloudWatch Logs.

You can choose Agent log, Error log, or both.


5. Choose Continue, and then choose Modify DB Instance on the summary page.

AWS CLI

To publish SQL Server logs, you can use the modify-db-instance command with the following
parameters:

• --db-instance-identifier
• --cloudwatch-logs-export-configuration

Note
A change to the --cloudwatch-logs-export-configuration option is always applied
to the DB instance immediately. Therefore, the --apply-immediately and --no-apply-
immediately options have no effect.

You can also publish SQL Server logs using the following commands:

• create-db-instance
• restore-db-instance-from-db-snapshot
• restore-db-instance-to-point-in-time


Example
The following example creates an SQL Server DB instance with CloudWatch Logs publishing enabled.
The --enable-cloudwatch-logs-exports value is a JSON array of strings that can include error,
agent, or both.

For Linux, macOS, or Unix:

aws rds create-db-instance \
    --db-instance-identifier mydbinstance \
    --enable-cloudwatch-logs-exports '["error","agent"]' \
    --db-instance-class db.m4.large \
    --engine sqlserver-se

For Windows:

aws rds create-db-instance ^
    --db-instance-identifier mydbinstance ^
    --enable-cloudwatch-logs-exports "[\"error\",\"agent\"]" ^
    --db-instance-class db.m4.large ^
    --engine sqlserver-se

Note
When using the Windows command prompt, you must escape double quotes (") in JSON code by
prefixing them with a backslash (\).

Example
The following example modifies an existing SQL Server DB instance to publish log files to CloudWatch
Logs. The --cloudwatch-logs-export-configuration value is a JSON object. The key for this
object is EnableLogTypes, and its value is an array of strings that can include error, agent, or both.

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["error","agent"]}'

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --cloudwatch-logs-export-configuration "{\"EnableLogTypes\":[\"error\",\"agent\"]}"

Note
When using the Windows command prompt, you must escape double quotes (") in JSON code by
prefixing them with a backslash (\).

Example
The following example modifies an existing SQL Server DB instance to disable publishing agent log files
to CloudWatch Logs. The --cloudwatch-logs-export-configuration value is a JSON object. The
key for this object is DisableLogTypes, and its value is an array of strings that can include error,
agent, or both.

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --cloudwatch-logs-export-configuration '{"DisableLogTypes":["agent"]}'

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --cloudwatch-logs-export-configuration "{\"DisableLogTypes\":[\"agent\"]}"

Note
When using the Windows command prompt, you must escape double quotes (") in JSON code by
prefixing them with a backslash (\).


MySQL database log files


You can monitor the MySQL logs directly through the Amazon RDS console, Amazon RDS API, AWS
CLI, or AWS SDKs. You can also access MySQL logs by directing the logs to a database table in the main
database and querying that table. You can use the mysqlbinlog utility to download a binary log.

For more information about viewing, downloading, and watching file-based database logs, see
Monitoring Amazon RDS log files (p. 895).

Topics
• Overview of RDS for MySQL database logs (p. 915)
• Publishing MySQL logs to Amazon CloudWatch Logs (p. 918)
• Managing table-based MySQL logs (p. 920)
• Configuring MySQL binary logging (p. 921)
• Accessing MySQL binary logs (p. 922)

Overview of RDS for MySQL database logs


You can monitor the following types of RDS for MySQL log files:

• Error log
• Slow query log
• General log
• Audit log

The RDS for MySQL error log is generated by default. You can generate the slow query and general logs
by setting parameters in your DB parameter group.

Topics
• RDS for MySQL error logs (p. 915)
• RDS for MySQL slow query and general logs (p. 916)
• MySQL audit log (p. 916)
• Log rotation and retention for RDS for MySQL (p. 916)
• Size limits on redo logs (p. 917)
• Size limits on BLOBs written to the redo log (p. 917)

RDS for MySQL error logs


RDS for MySQL writes errors in the mysql-error.log file. Each log file has the hour it was generated
(in UTC) appended to its name. The log files also have a timestamp that helps you determine when the
log entries were written.

RDS for MySQL writes to the error log only on startup, shutdown, and when it encounters errors. A DB
instance can go hours or days without new entries being written to the error log. If you see no recent
entries, it's because the server didn't encounter an error that would result in a log entry.

By design, the error logs are filtered so that only unexpected events such as errors are shown. However,
the error logs also contain some additional database information, for example query progress, which
isn't shown. Therefore, even without any actual errors the size of the error logs might increase because of
ongoing database activities. And while you might see a certain size in bytes or kilobytes for the error logs
in the AWS Management Console, they might have 0 bytes when you download them.


RDS for MySQL writes mysql-error.log to disk every 5 minutes. It appends the contents of the log to
mysql-error-running.log.

RDS for MySQL rotates the mysql-error-running.log file every hour. It retains the logs generated
during the last two weeks.
Note
The log retention period is different between Amazon RDS and Aurora.

RDS for MySQL slow query and general logs


You can write the RDS for MySQL slow query log and the general log to a file or a database table. To
do so, set parameters in your DB parameter group. For information about creating and modifying a DB
parameter group, see Working with parameter groups (p. 347). You must set these parameters before
you can view the slow query log or general log in the Amazon RDS console or by using the Amazon RDS
API, Amazon RDS CLI, or AWS SDKs.

You can control RDS for MySQL logging by using the parameters in this list:

• slow_query_log: To create the slow query log, set to 1. The default is 0.


• general_log: To create the general log, set to 1. The default is 0.
• long_query_time: To prevent fast-running queries from being logged in the slow query log,
specify a value for the shortest query runtime to be logged, in seconds. The default is 10 seconds; the
minimum is 0. If log_output = FILE, you can specify a floating point value that goes to microsecond
resolution. If log_output = TABLE, you must specify an integer value with second resolution. Only
queries whose runtime exceeds the long_query_time value are logged. For example, setting
long_query_time to 0.1 prevents any query that runs for less than 100 milliseconds from being
logged.
• log_queries_not_using_indexes: To log all queries that do not use an index to the slow query
log, set to 1. Queries that don't use an index are logged even if their runtime is less than the value of
the long_query_time parameter. The default is 0.
• log_output option: You can specify one of the following options for the log_output parameter.
• TABLE (default) – Write general queries to the mysql.general_log table, and slow queries to the
mysql.slow_log table.
• FILE – Write both general and slow query logs to the file system.
• NONE – Disable logging.

For more information about the slow query and general logs, go to the following topics in the MySQL
documentation:

• The slow query log


• The general query log

MySQL audit log


To access the audit log, the DB instance must use a custom option group with the
MARIADB_AUDIT_PLUGIN option. For more information, see MariaDB Audit Plugin support for
MySQL (p. 1733).

Log rotation and retention for RDS for MySQL


When logging is enabled, Amazon RDS rotates table logs or deletes log files at regular intervals. This
measure is a precaution to reduce the possibility of a large log file either blocking database use or
affecting performance. RDS for MySQL handles rotation and deletion as follows:


• The MySQL slow query log, error log, and the general log file sizes are constrained to no more than
2 percent of the allocated storage space for a DB instance. To maintain this threshold, logs are
automatically rotated every hour. MySQL removes log files more than two weeks old. If the combined
log file size exceeds the threshold after removing old log files, then the oldest log files are deleted
until the log file size no longer exceeds the threshold.
• When FILE logging is enabled, log files are examined every hour and log files more than two weeks
old are deleted. In some cases, the remaining combined log file size after the deletion might exceed
the threshold of 2 percent of a DB instance's allocated space. In these cases, the oldest log files are
deleted until the log file size no longer exceeds the threshold.
• When TABLE logging is enabled, in some cases log tables are rotated every 24 hours. This rotation
occurs if the space used by the table logs is more than 20 percent of the allocated storage space. It
also occurs if the size of all logs combined is greater than 10 GB. If the amount of space used for a DB
instance is greater than 90 percent of the DB instance's allocated storage space, then the thresholds
for log rotation are reduced. Log tables are then rotated if the space used by the table logs is more
than 10 percent of the allocated storage space. They're also rotated if the size of all logs combined
is greater than 5 GB. You can subscribe to the low_free_storage event to be notified when log
tables are rotated to free up space. For more information, see Working with Amazon RDS event
notification (p. 855).

When log tables are rotated, the current log table is first copied to a backup log table. Then the entries
in the current log table are removed. If the backup log table already exists, then it is deleted before the
current log table is copied to the backup. You can query the backup log table if needed. The backup
log table for the mysql.general_log table is named mysql.general_log_backup. The backup
log table for the mysql.slow_log table is named mysql.slow_log_backup.

You can rotate the mysql.general_log table by calling the mysql.rds_rotate_general_log


procedure. You can rotate the mysql.slow_log table by calling the mysql.rds_rotate_slow_log
procedure.

Table logs are rotated during a database version upgrade.

To work with the logs from the Amazon RDS console, Amazon RDS API, Amazon RDS CLI, or AWS SDKs,
set the log_output parameter to FILE. Like the MySQL error log, these log files are rotated hourly. The
log files that were generated during the previous two weeks are retained. Note that the retention period
is different between Amazon RDS and Aurora.

Size limits on redo logs


For RDS for MySQL version 8.0.28 and lower, the innodb_log_file_size parameter determines
the size of redo logs. The default value of this parameter is 256 MB. For information about limitations
related to this limit, see Size limits on BLOBs written to the redo log (p. 917). For more information
on how redo log size is calculated for these versions of MySQL, see innodb_log_file_size in the MySQL
documentation.

For RDS for MySQL version 8.0.30 and higher, the innodb_redo_log_capacity parameter
is used instead of the innodb_log_file_size parameter. The default value of the
innodb_redo_log_capacity parameter is 256 MB. For more information, see Changes in MySQL
8.0.30 in the MySQL documentation.

Size limits on BLOBs written to the redo log


For RDS for MySQL version 8.0.30 and higher, the innodb_redo_log_capacity parameter
is used instead of the innodb_log_file_size parameter. The size limit doesn't apply to the
innodb_redo_log_capacity parameter. For more information, see Size limits on redo logs (p. 917).


Publishing MySQL logs to Amazon CloudWatch Logs


You can configure your MySQL DB instance to publish log data to a log group in Amazon CloudWatch
Logs. With CloudWatch Logs, you can perform real-time analysis of the log data, and use CloudWatch to
create alarms and view metrics. You can use CloudWatch Logs to store your log records in highly durable
storage.

Amazon RDS publishes each MySQL database log as a separate database stream in the log group. For
example, if you configure the export function to include the slow query log, slow query data is stored in
a slow query log stream in the /aws/rds/instance/my_instance/slowquery log group.

The error log is enabled by default. The following table summarizes the requirements for the other
MySQL logs.

Log | Requirement

Audit log | The DB instance must use a custom option group with the MARIADB_AUDIT_PLUGIN option.
General log | The DB instance must use a custom parameter group with the parameter setting general_log = 1 to enable the general log.
Slow query log | The DB instance must use a custom parameter group with the parameter setting slow_query_log = 1 to enable the slow query log.
Log output | The DB instance must use a custom parameter group with the parameter setting log_output = FILE to write logs to the file system and publish them to CloudWatch Logs.

Console

To publish MySQL logs to CloudWatch Logs using the console

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the navigation pane, choose Databases, and then choose the DB instance that you want to
modify.
3. Choose Modify.
4. In the Log exports section, choose the logs that you want to start publishing to CloudWatch Logs.
5. Choose Continue, and then choose Modify DB Instance on the summary page.

AWS CLI

You can publish MySQL logs with the AWS CLI. You can call the modify-db-instance command with
the following parameters:

• --db-instance-identifier
• --cloudwatch-logs-export-configuration


Note
A change to the --cloudwatch-logs-export-configuration option is always applied
to the DB instance immediately. Therefore, the --apply-immediately and --no-apply-
immediately options have no effect.

You can also publish MySQL logs by calling the following AWS CLI commands:

• create-db-instance
• restore-db-instance-from-db-snapshot
• restore-db-instance-from-s3
• restore-db-instance-to-point-in-time

Run one of these AWS CLI commands with the following options:

• --db-instance-identifier
• --enable-cloudwatch-logs-exports
• --db-instance-class
• --engine

Other options might be required depending on the AWS CLI command you run.

Example
The following example modifies an existing MySQL DB instance to publish log files to CloudWatch Logs.
The --cloudwatch-logs-export-configuration value is a JSON object. The key for this object is
EnableLogTypes, and its value is an array of strings with any combination of audit, error, general,
and slowquery.

For Linux, macOS, or Unix:

aws rds modify-db-instance \


--db-instance-identifier mydbinstance \
--cloudwatch-logs-export-configuration '{"EnableLogTypes":
["audit","error","general","slowquery"]}'

For Windows:

aws rds modify-db-instance ^


--db-instance-identifier mydbinstance ^
--cloudwatch-logs-export-configuration '{"EnableLogTypes":
["audit","error","general","slowquery"]}'

Example
The following example creates a MySQL DB instance and publishes log files to CloudWatch Logs. The
--enable-cloudwatch-logs-exports value is a JSON array of strings. The strings can be any
combination of audit, error, general, and slowquery.

For Linux, macOS, or Unix:

aws rds create-db-instance \


--db-instance-identifier mydbinstance \
--enable-cloudwatch-logs-exports '["audit","error","general","slowquery"]' \
--db-instance-class db.m4.large \


--engine MySQL

For Windows:

aws rds create-db-instance ^


--db-instance-identifier mydbinstance ^
--enable-cloudwatch-logs-exports '["audit","error","general","slowquery"]' ^
--db-instance-class db.m4.large ^
--engine MySQL
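
To confirm which log types a DB instance currently exports, you can check its
EnabledCloudwatchLogsExports attribute. The following is a sketch that reuses the mydbinstance
identifier from the preceding examples.

aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query 'DBInstances[0].EnabledCloudwatchLogsExports'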

RDS API

You can publish MySQL logs with the RDS API. You can call the ModifyDBInstance action with the
following parameters:

• DBInstanceIdentifier
• CloudwatchLogsExportConfiguration

Note
A change to the CloudwatchLogsExportConfiguration parameter is always applied to the
DB instance immediately. Therefore, the ApplyImmediately parameter has no effect.

You can also publish MySQL logs by calling the following RDS API operations:

• CreateDBInstance
• RestoreDBInstanceFromDBSnapshot
• RestoreDBInstanceFromS3
• RestoreDBInstanceToPointInTime

Run one of these RDS API operations with the following parameters:

• DBInstanceIdentifier
• EnableCloudwatchLogsExports
• Engine
• DBInstanceClass

Other parameters might be required depending on the RDS API operation that you run.

Managing table-based MySQL logs


You can direct the general and slow query logs to tables on the DB instance by creating a DB parameter
group and setting the log_output server parameter to TABLE. General queries are then logged to the
mysql.general_log table, and slow queries are logged to the mysql.slow_log table. You can query
the tables to access the log information. Enabling this logging increases the amount of data written to
the database, which can degrade performance.

Both the general log and the slow query log are disabled by default. To enable logging to
tables, you must also set the general_log and slow_query_log server parameters to 1.

Log tables keep growing until the respective logging activities are turned off by resetting the appropriate
parameter to 0. A large amount of data often accumulates over time, which can use up a considerable
percentage of your allocated storage space. Amazon RDS doesn't allow you to truncate the log tables,


but you can move their contents. Rotating a table saves its contents to a backup table and then creates
a new empty log table. You can manually rotate the log tables with the following command line
procedures, where the command prompt is indicated by PROMPT>:

PROMPT> CALL mysql.rds_rotate_slow_log;


PROMPT> CALL mysql.rds_rotate_general_log;

To completely remove the old data and reclaim the disk space, call the appropriate procedure twice in
succession.
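
For example, with table-based logging enabled, you can review recent entries in the slow query log
table from the mysql client. The following is a sketch; the endpoint and user name are hypothetical.

mysql -h mydbinstance.123456789012abc.us-east-1.rds.amazonaws.com -u admin -p \
    -e "SELECT start_time, user_host, query_time, sql_text
        FROM mysql.slow_log ORDER BY start_time DESC LIMIT 10;"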

Configuring MySQL binary logging


The binary log is a set of log files that contain information about data modifications made to a MySQL
server instance. The binary log contains information such as the following:

• Events that describe database changes such as table creation or row modifications
• Information about the duration of each statement that updated data
• Events for statements that could have updated data but didn't

The binary log records statements that are sent during replication. It is also required for some
recovery operations. For more information, see The Binary Log and Binary Log Overview in the MySQL
documentation.

The automated backups feature determines whether binary logging is turned on or off for MySQL. You
have the following options:

Turn binary logging on

Set the backup retention period to a positive nonzero value.


Turn binary logging off

Set the backup retention period to zero.

For more information, see Enabling automated backups (p. 593).

MySQL on Amazon RDS supports the row-based, statement-based, and mixed binary logging formats. We
recommend mixed unless you need a specific binlog format. For details on the different MySQL binary
log formats, see Binary logging formats in the MySQL documentation.

If you plan to use replication, the binary logging format is important because it determines the record of
data changes that is recorded in the source and sent to the replication targets. For information about the
advantages and disadvantages of different binary logging formats for replication, see Advantages and
disadvantages of statement-based and row-based replication in the MySQL documentation.
Important
Setting the binary logging format to row-based can result in very large binary log files. Large
binary log files reduce the amount of storage available for a DB instance and can increase the
amount of time to perform a restore operation of a DB instance.
Statement-based replication can cause inconsistencies between the source DB instance and a
read replica. For more information, see Determination of safe and unsafe statements in binary
logging in the MySQL documentation.
Enabling binary logging increases the number of write disk I/O operations to the DB instance.
You can monitor IOPS usage with the WriteIOPS CloudWatch metric.

To set the MySQL binary logging format

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the navigation pane, choose Parameter groups.


3. Choose the parameter group used by the DB instance you want to modify.

You can't modify a default parameter group. If the DB instance is using a default parameter group,
create a new parameter group and associate it with the DB instance.

For more information on parameter groups, see Working with parameter groups (p. 347).
4. From Parameter group actions, choose Edit.
5. Set the binlog_format parameter to the binary logging format of your choice (ROW, STATEMENT,
or MIXED).

You can turn off binary logging by setting the backup retention period of a DB instance to zero, but
this disables daily automated backups. We recommend that you don't disable backups. For more
information about the Backup retention period setting, see Settings for DB instances (p. 402).
6. Choose Save changes to save the updates to the DB parameter group.

Because the binlog_format parameter is dynamic, you don't need to reboot the DB instance for the
changes to apply.
Important
Changing a DB parameter group affects all DB instances that use that parameter group. If you
want to specify different binary logging formats for different MySQL DB instances in an AWS
Region, the DB instances must use different DB parameter groups. These parameter groups
identify different logging formats. Assign the appropriate DB parameter group to each DB instance.
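
Because binlog_format is dynamic, you can also change it in your custom DB parameter group with the
AWS CLI. The following is a minimal sketch; the parameter group name my-mysql-params is
hypothetical.

aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mysql-params \
    --parameters "ParameterName=binlog_format,ParameterValue=MIXED,ApplyMethod=immediate"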

Accessing MySQL binary logs


You can use the mysqlbinlog utility to download or stream binary logs from RDS for MySQL DB
instances. The binary log is downloaded to your local computer, where you can perform actions such as
replaying the log using the mysql utility. For more information about using the mysqlbinlog utility, see
Using mysqlbinlog to back up binary log files in the MySQL documentation.

To run the mysqlbinlog utility against an Amazon RDS instance, use the following options:

• --read-from-remote-server – Required.
• --host – The DNS name from the endpoint of the instance.
• --port – The port used by the instance.
• --user – A MySQL user that has been granted the REPLICATION SLAVE permission.
• --password – The password for the MySQL user, or omit a password value so that the utility prompts
you for a password.
• --raw – Download the file in binary format.
• --result-file – The local file to receive the raw output.
• --stop-never – Stream the binary log files.
• --verbose – When you use the ROW binlog format, include this option to see the row events as
pseudo-SQL statements. For more information on the --verbose option, see mysqlbinlog row event
display in the MySQL documentation.
• Specify the names of one or more binary log files. To get a list of the available logs, use the SQL
command SHOW BINARY LOGS.

For more information about mysqlbinlog options, see mysqlbinlog — Utility for processing binary log
files in the MySQL documentation.

The following examples show how to use the mysqlbinlog utility.


For Linux, macOS, or Unix:

mysqlbinlog \
--read-from-remote-server \
--host=MySQLInstance1.cg034hpkmmjt.region.rds.amazonaws.com \
--port=3306 \
--user ReplUser \
--password \
--raw \
--verbose \
--result-file=/tmp/ \
binlog.00098

For Windows:

mysqlbinlog ^
--read-from-remote-server ^
--host=MySQLInstance1.cg034hpkmmjt.region.rds.amazonaws.com ^
--port=3306 ^
--user ReplUser ^
--password ^
--raw ^
--verbose ^
--result-file=/tmp/ ^
binlog.00098

Amazon RDS normally purges a binary log as soon as possible, but the binary log must still be available
on the instance to be accessed by mysqlbinlog. To specify the number of hours for RDS to retain binary
logs, use the mysql.rds_set_configuration (p. 1758) stored procedure and specify a period with enough
time for you to download the logs. After you set the retention period, monitor storage usage for the DB
instance to ensure that the retained binary logs don't take up too much storage.

The following example sets the retention period to 1 day.

call mysql.rds_set_configuration('binlog retention hours', 24);

To display the current setting, use the mysql.rds_show_configuration (p. 1760) stored procedure.

call mysql.rds_show_configuration;


Oracle database log files


You can access Oracle alert logs, audit files, and trace files by using the Amazon RDS console or API. For
more information about viewing, downloading, and watching file-based database logs, see Monitoring
Amazon RDS log files (p. 895).

The Oracle audit files provided are the standard Oracle auditing files. Amazon RDS supports the Oracle
fine-grained auditing (FGA) feature. However, log access doesn't provide access to FGA events that are
stored in the SYS.FGA_LOG$ table and that are accessible through the DBA_FGA_AUDIT_TRAIL view.

The DescribeDBLogFiles API operation that lists the Oracle log files that are available for a
DB instance ignores the MaxRecords parameter and returns up to 1,000 records. The call returns
LastWritten as a POSIX date in milliseconds.

Topics
• Retention schedule (p. 924)
• Working with Oracle trace files (p. 924)
• Publishing Oracle logs to Amazon CloudWatch Logs (p. 927)
• Previous methods for accessing alert logs and listener logs (p. 930)

Retention schedule
The Oracle database engine might rotate log files if they get very large. To retain audit or trace files,
download them. If you store the files locally, you reduce your Amazon RDS storage costs and make more
space available for your data.

The following table shows the retention schedule for Oracle alert logs, audit files, and trace files on
Amazon RDS.

Log type Retention schedule

Alert logs The text alert log is rotated daily with 30-day retention managed by Amazon
RDS. The XML alert log is retained for at least seven days. You can access this
log by using the ALERTLOG view.

Audit files The default retention period for audit files is seven days. Amazon RDS might
delete audit files older than seven days.

Trace files The default retention period for trace files is seven days. Amazon RDS might
delete trace files older than seven days.

Listener logs The default retention period for the listener logs is seven days. Amazon RDS
might delete listener logs older than seven days.

Note
Audit files and trace files share the same retention configuration.
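
To keep a copy of audit or trace files before they are removed, you can list and download them with
the AWS CLI. The following is a sketch; the instance identifier and log file name are hypothetical,
and setting --starting-token to 0 downloads the complete file.

aws rds describe-db-log-files --db-instance-identifier myoracledb

aws rds download-db-log-file-portion \
    --db-instance-identifier myoracledb \
    --log-file-name trace/alert_ORCL.log \
    --starting-token 0 \
    --output text > alert_ORCL.log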

Working with Oracle trace files


Following, you can find descriptions of Amazon RDS procedures to create, refresh, access, and delete
trace files.

Topics


• Listing files (p. 925)


• Generating trace files and tracing a session (p. 925)
• Retrieving trace files (p. 926)
• Purging trace files (p. 926)

Listing files
You can use either of two procedures to allow access to any file in the background_dump_dest
path. The first procedure refreshes a view containing a listing of all files currently in
background_dump_dest.

EXEC rdsadmin.manage_tracefiles.refresh_tracefile_listing;

After the view is refreshed, query the following view to access the results.

SELECT * FROM rdsadmin.tracefile_listing;

As an alternative to the previous process, you can use a table function to stream nonrelational
data in a table-like format and list the database directory contents directly.

SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir('BDUMP'));

The following query shows the text of a log file.

SELECT text FROM
TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP','alert_dbname.log.date'));

On a read replica, get the name of the BDUMP directory by querying V$DATABASE.DB_UNIQUE_NAME.
If the unique name is DATABASE_B, then the BDUMP directory is BDUMP_B. The following
example queries the BDUMP name on a replica and then uses this name to query the contents of
alert_DATABASE.log.2020-06-23.

SELECT 'BDUMP' || (SELECT regexp_replace(DB_UNIQUE_NAME,'.*(_[A-Z])', '\1') FROM V$DATABASE)
AS BDUMP_VARIABLE FROM DUAL;

BDUMP_VARIABLE
--------------
BDUMP_B

SELECT TEXT FROM
table(rdsadmin.rds_file_util.read_text_file('BDUMP_B','alert_DATABASE.log.2020-06-23'));

Generating trace files and tracing a session


Because there are no restrictions on ALTER SESSION, many standard methods to generate trace files in
Oracle remain available to an Amazon RDS DB instance. The following procedures are provided for trace
files that require greater access.

Oracle method                    Amazon RDS method

oradebug hanganalyze 3           EXEC rdsadmin.manage_tracefiles.hanganalyze;

oradebug dump systemstate 266    EXEC rdsadmin.manage_tracefiles.dump_systemstate;

You can use many standard methods to trace individual sessions connected to an Oracle DB instance in
Amazon RDS. To enable tracing for a session, you can run subprograms in PL/SQL packages supplied by
Oracle, such as DBMS_SESSION and DBMS_MONITOR. For more information, see Enabling tracing for a
session in the Oracle documentation.

Retrieving trace files


You can retrieve any trace file in background_dump_dest using a standard SQL query on an Amazon
RDS–managed external table. To use this method, you must execute the procedure to set the location for
this table to the specific trace file.

For example, you can use the rdsadmin.tracefile_listing view mentioned preceding to list all
of the trace files on the system. You can then set the tracefile_table view to point to the intended
trace file using the following procedure.

EXEC
rdsadmin.manage_tracefiles.set_tracefile_table_location('CUST01_ora_3260_SYSTEMSTATE.trc');

The following example creates an external table in the current schema with the location set to the file
provided. You can retrieve the contents into a local file using a SQL query.

SPOOL /tmp/tracefile.txt
SELECT * FROM tracefile_table;
SPOOL OFF;

Purging trace files


Trace files can accumulate and consume disk space. By default, Amazon RDS purges trace files and
log files that are older than seven days. You can view and set the trace file retention period
using the show_configuration procedure. Run the command SET SERVEROUTPUT ON so that you can
view the configuration results.

The following example shows the current trace file retention period, and then sets a new trace file
retention period.

# Show the current tracefile retention


SQL> EXEC rdsadmin.rdsadmin_util.show_configuration;
NAME:tracefile retention
VALUE:10080
DESCRIPTION:tracefile expiration specifies the duration in minutes before tracefiles in
bdump are automatically deleted.

# Set the tracefile retention to 24 hours:


SQL> EXEC rdsadmin.rdsadmin_util.set_configuration('tracefile retention',1440);
SQL> commit;

#show the new tracefile retention


SQL> EXEC rdsadmin.rdsadmin_util.show_configuration;
NAME:tracefile retention
VALUE:1440
DESCRIPTION:tracefile expiration specifies the duration in minutes before tracefiles in
bdump are automatically deleted.


In addition to the periodic purge process, you can manually remove files from the
background_dump_dest. The following example shows how to purge all files older than five minutes.

EXEC rdsadmin.manage_tracefiles.purge_tracefiles(5);

You can also purge all files that match a specific pattern (if you do, don't include the file extension, such
as .trc). The following example shows how to purge all files that start with SCHPOC1_ora_5935.

EXEC rdsadmin.manage_tracefiles.purge_tracefiles('SCHPOC1_ora_5935');

Publishing Oracle logs to Amazon CloudWatch Logs


You can configure your Amazon RDS for Oracle DB instance to publish log data to a log group in Amazon
CloudWatch Logs. With CloudWatch Logs, you can analyze the log data, and use CloudWatch to create
alarms and view metrics. You can use CloudWatch Logs to store your log records in highly durable
storage.

Amazon RDS publishes each Oracle database log as a separate database stream in the log group. For
example, if you configure the export function to include the audit log, audit data is stored in an audit
log stream in the /aws/rds/instance/my_instance/audit log group. RDS for Oracle supports the
following logs:

• Alert log
• Trace log
• Audit log
• Listener log
• Oracle Management Agent log

This Oracle Management Agent log consists of the log groups shown in the following table.

Log name CloudWatch log group

emctl.log oemagent-emctl

emdctlj.log oemagent-emdctlj

gcagent.log oemagent-gcagent

gcagent_errors.log oemagent-gcagent-errors

emagent.nohup oemagent-emagent-nohup

secure.log oemagent-secure

For more information, see Locating Management Agent Log and Trace Files in the Oracle documentation.

Console

To publish Oracle DB logs to CloudWatch Logs from the AWS Management Console

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the navigation pane, choose Databases, and then choose the DB instance that you want to
modify.
3. Choose Modify.


4. In the Log exports section, choose the logs that you want to start publishing to CloudWatch Logs.
5. Choose Continue, and then choose Modify DB Instance on the summary page.

AWS CLI

To publish Oracle logs, you can use the modify-db-instance command with the following
parameters:

• --db-instance-identifier
• --cloudwatch-logs-export-configuration

Note
A change to the --cloudwatch-logs-export-configuration option is always applied
to the DB instance immediately. Therefore, the --apply-immediately and --no-apply-
immediately options have no effect.

You can also publish Oracle logs using the following commands:

• create-db-instance
• restore-db-instance-from-db-snapshot
• restore-db-instance-from-s3
• restore-db-instance-to-point-in-time

Example

The following example creates an Oracle DB instance with CloudWatch Logs publishing enabled. The
--enable-cloudwatch-logs-exports value is a JSON array of strings. The strings can be any
combination of alert, audit, listener, trace, and oemagent.

For Linux, macOS, or Unix:

aws rds create-db-instance \


--db-instance-identifier mydbinstance \
--enable-cloudwatch-logs-exports '["trace","audit","alert","listener","oemagent"]' \
--db-instance-class db.m5.large \
--allocated-storage 20 \
--engine oracle-ee \
--engine-version 12.1.0.2.v18 \
--license-model bring-your-own-license \
--master-username myadmin \
--manage-master-user-password

For Windows:

aws rds create-db-instance ^


--db-instance-identifier mydbinstance ^
--enable-cloudwatch-logs-exports trace alert audit listener oemagent ^
--db-instance-class db.m5.large ^
--allocated-storage 20 ^
--engine oracle-ee ^
--engine-version 12.1.0.2.v18 ^
--license-model bring-your-own-license ^
--master-username myadmin ^
--manage-master-user-password


Example

The following example modifies an existing Oracle DB instance to publish log files to CloudWatch
Logs. The --cloudwatch-logs-export-configuration value is a JSON object. The key for this
object is EnableLogTypes, and its value is an array of strings with any combination of alert, audit,
listener, trace, and oemagent.

For Linux, macOS, or Unix:

aws rds modify-db-instance \


--db-instance-identifier mydbinstance \
--cloudwatch-logs-export-configuration '{"EnableLogTypes":
["trace","alert","audit","listener","oemagent"]}'

For Windows:

aws rds modify-db-instance ^


--db-instance-identifier mydbinstance ^
--cloudwatch-logs-export-configuration EnableLogTypes=\"trace\",\"alert\",\"audit\",
\"listener\",\"oemagent\"

Example

The following example modifies an existing Oracle DB instance to disable publishing audit and listener
log files to CloudWatch Logs. The --cloudwatch-logs-export-configuration value is a JSON
object. The key for this object is DisableLogTypes, and its value is an array of strings with any
combination of alert, audit, listener, trace, and oemagent.

For Linux, macOS, or Unix:

aws rds modify-db-instance \


--db-instance-identifier mydbinstance \
--cloudwatch-logs-export-configuration '{"DisableLogTypes":["audit","listener"]}'

For Windows:

aws rds modify-db-instance ^


--db-instance-identifier mydbinstance ^
--cloudwatch-logs-export-configuration DisableLogTypes=\"audit\",\"listener\"

RDS API

You can publish Oracle DB logs with the RDS API. You can call the ModifyDBInstance action with the
following parameters:

• DBInstanceIdentifier
• CloudwatchLogsExportConfiguration

Note
A change to the CloudwatchLogsExportConfiguration parameter is always applied to the
DB instance immediately. Therefore, the ApplyImmediately parameter has no effect.

You can also publish Oracle logs by calling the following RDS API operations:

• CreateDBInstance
• RestoreDBInstanceFromDBSnapshot


• RestoreDBInstanceFromS3
• RestoreDBInstanceToPointInTime

Run one of these RDS API operations with the following parameters:

• DBInstanceIdentifier
• EnableCloudwatchLogsExports
• Engine
• DBInstanceClass

Other parameters might be required depending on the RDS operation that you run.

Previous methods for accessing alert logs and listener logs


You can view the alert log using the Amazon RDS console. You can also use the following SQL statement
to access the alert log.

SELECT message_text FROM alertlog;

The listenerlog view contains entries for Oracle Database version 12.1.0.2 and earlier. To access the
listener log for these database versions, use the following query.

SELECT message_text FROM listenerlog;

For Oracle Database versions 12.2.0.1 and later, access the listener log using Amazon CloudWatch Logs.
Note
Oracle rotates the alert and listener logs when they exceed 10 MB, at which point they are
unavailable from Amazon RDS views.


RDS for PostgreSQL database log files


RDS for PostgreSQL logs database activities to the default PostgreSQL log file. For an on-premises
PostgreSQL DB instance, these messages are stored locally in log/postgresql.log. For an RDS for
PostgreSQL DB instance, the log file is available on the Amazon RDS instance. You must use the
Amazon RDS console to view or download its contents. The default logging level captures login failures,
fatal server errors, deadlocks, and query failures.

For more information about how you can view, download, and watch file-based database logs, see
Monitoring Amazon RDS log files (p. 895). To learn more about PostgreSQL logs, see Working with
Amazon RDS and Aurora PostgreSQL logs: Part 1 and Working with Amazon RDS and Aurora PostgreSQL
logs: Part 2.

In addition to the standard PostgreSQL logs discussed in this topic, RDS for PostgreSQL also supports
the PostgreSQL Audit extension (pgAudit). Most regulated industries and government agencies need
to maintain an audit log or audit trail of changes made to data to comply with legal requirements. For
information about installing and using pgAudit, see Using pgAudit to log database activity (p. 2362).

Topics
• Parameters that affect logging behavior (p. 931)
• Turning on query logging for your RDS for PostgreSQL DB instance (p. 933)
• Publishing PostgreSQL logs to Amazon CloudWatch Logs (p. 936)

Parameters that affect logging behavior


You can customize the logging behavior for your RDS for PostgreSQL DB instance by modifying various
parameters. In the following table you can find the parameters that affect how long the logs are stored,
when to rotate the log, and whether to output the log as a CSV (comma-separated value) format.
You can also find the text output sent to STDERR, among other settings. To change settings for the
parameters that are modifiable, use a custom DB parameter group for your RDS for PostgreSQL instance.
For more information, see Working with DB parameter groups (p. 349). As noted in the table, the
log_line_prefix can't be changed.

Parameter                  Default               Description

log_destination            stderr                Sets the output format for the log. The default is stderr, but you
                                                 can also specify comma-separated value (CSV) output by adding csvlog
                                                 to the setting. For more information, see Setting the log destination
                                                 (stderr, csvlog) (p. 933).

log_filename               postgresql.log.       Specifies the pattern for the log file name. In addition to the
                           %Y-%m-%d-%H           default, this parameter supports postgresql.log.%Y-%m-%d for the
                                                 file name pattern.

log_line_prefix            %t:%r:%u@%d:[%p]:     Defines the prefix for each log line that gets written to stderr, to
                                                 note the time (%t), remote host (%r), user (%u), database (%d), and
                                                 process ID (%p). You can't modify this parameter.

log_rotation_age           60                    Minutes after which the log file is automatically rotated. You can
                                                 change this value to between 1 and 1440 minutes. For more
                                                 information, see Setting log file rotation (p. 932).

log_rotation_size          –                     The size (kB) at which the log is automatically rotated. By default,
                                                 this parameter isn't used because logs are rotated based on the
                                                 log_rotation_age parameter. To learn more, see Setting log file
                                                 rotation (p. 932).

rds.log_retention_period   4320                  PostgreSQL logs that are older than the specified number of minutes
                                                 are deleted. The default value of 4320 minutes deletes log files
                                                 after 3 days. For more information, see Setting the log retention
                                                 period (p. 932).

To identify application issues, you can look for query failures, login failures, deadlocks, and fatal server
errors in the log. For example, suppose that you converted a legacy application from Oracle to Amazon
RDS PostgreSQL, but not all queries converted correctly. These incorrectly formatted queries generate
error messages that you can find in the logs to help identify problems. For more information about
logging queries, see Turning on query logging for your RDS for PostgreSQL DB instance (p. 933).

In the following topics, you can find information about how to set various parameters that control the
basic details for your PostgreSQL logs.

Topics
• Setting the log retention period (p. 932)
• Setting log file rotation (p. 932)
• Setting the log destination (stderr, csvlog) (p. 933)
• Understanding the log_line_prefix parameter (p. 933)

Setting the log retention period


The rds.log_retention_period parameter specifies how long your RDS for PostgreSQL DB instance
keeps its log files. The default setting is 3 days (4,320 minutes), but you can set this value to anywhere
from 1 day (1,440 minutes) to 7 days (10,080 minutes). Be sure that your RDS for PostgreSQL DB
instance has sufficient storage to hold the log files for that period of time.
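
For example, to keep logs for the maximum of 7 days, you might set the parameter in your custom DB
parameter group with the AWS CLI. The following is a sketch; the parameter group name
my-postgres-params is hypothetical.

aws rds modify-db-parameter-group \
    --db-parameter-group-name my-postgres-params \
    --parameters "ParameterName=rds.log_retention_period,ParameterValue=10080,ApplyMethod=immediate"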

We recommend that you have your logs routinely published to Amazon CloudWatch Logs so that you can
view and analyze system data long after the logs have been removed from your RDS for PostgreSQL DB
instance. For more information, see Publishing PostgreSQL logs to Amazon CloudWatch Logs (p. 936).

Setting log file rotation


Amazon RDS creates new log files every hour by default. The timing is controlled by the
log_rotation_age parameter. This parameter has a default value of 60 (minutes), but you can set it to
anywhere from 1 minute to 24 hours (1,440 minutes). When it's time for rotation, a new distinct log file
is created. The file is named according to the pattern specified by the log_filename parameter.

Log files can also be rotated according to their size, as specified in the log_rotation_size parameter.
This parameter specifies that the log should be rotated when it reaches the specified size (in kilobytes).
For an RDS for PostgreSQL DB instance, log_rotation_size is unset, that is, there is no value
specified. However, you can set the parameter from 0-2097151 kB (kilobytes).

The log file names are based on the file name pattern specified in the log_filename parameter. The
available settings for this parameter are as follows:


• postgresql.log.%Y-%m-%d – Default format for the log file name. Includes the year, month, and
date in the name of the log file.
• postgresql.log.%Y-%m-%d-%H – Includes the hour in the log file name format.

For more information, see log_rotation_age and log_rotation_size in the PostgreSQL documentation.

Setting the log destination (stderr, csvlog)


By default, Amazon RDS PostgreSQL generates logs in standard error (stderr) format. This format
is the default setting for the log_destination parameter. Each message is prefixed using the
pattern specified in the log_line_prefix parameter. For more information, see Understanding the
log_line_prefix parameter (p. 933).

RDS for PostgreSQL can also generate the logs in csvlog format. The csvlog is useful for analyzing
the log data as comma-separated values (CSV) data. For example, suppose that you use the log_fdw
extension to work with your logs as foreign tables. The foreign table created on stderr log files
contains a single column with log event data. By adding csvlog to the log_destination parameter,
you get the log file in the CSV format with demarcations for the multiple columns of the foreign table.
You can now sort and analyze your logs more easily. To learn how to use the log_fdw with csvlog, see
Using the log_fdw extension to access the DB log using SQL (p. 2401).
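
For example, you might add csvlog output by setting the parameter in your custom DB parameter
group. The following is a sketch; the parameter group name my-postgres-params is hypothetical.

aws rds modify-db-parameter-group \
    --db-parameter-group-name my-postgres-params \
    --parameters "ParameterName=log_destination,ParameterValue=csvlog,ApplyMethod=immediate"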

If you specify csvlog for this parameter, be aware that both stderr and csvlog files are
generated. Be sure to monitor the storage consumed by the logs, taking into account the
rds.log_retention_period and other settings that affect log storage and turnover. Using stderr
and csvlog more than doubles the storage consumed by the logs.

If you add csvlog to log_destination and you want to revert to the stderr alone, you need to reset
the parameter. To do so, open the Amazon RDS Console and then open the custom DB parameter group
for your instance. Choose the log_destination parameter, choose Edit parameter, and then choose
Reset.

For more information about configuring logging, see Working with Amazon RDS and Aurora PostgreSQL
logs: Part 1.

Understanding the log_line_prefix parameter


The stderr log format prefixes each log message with the details specified by the log_line_prefix
parameter, as follows.

%t:%r:%u@%d:[%p]:

You can't change this setting. Each log entry sent to stderr includes the following information.

• %t – Time of log entry


• %r – Remote host address
• %u@%d – User name @ database name
• [%p] – Process ID if available

Turning on query logging for your RDS for PostgreSQL DB


instance
You can collect more detailed information about your database activities, including queries, queries
waiting for locks, checkpoints, and many other details by setting some of the parameters listed in the
following table. This topic focuses on logging queries.


Parameter                    Default   Description

log_connections              –         Logs each successful connection.

log_disconnections           –         Logs the end of each session and its duration.

log_checkpoints              1         Logs each checkpoint.

log_lock_waits               –         Logs long lock waits. By default, this parameter isn't set.

log_min_duration_sample      –         Sets the minimum execution time, in milliseconds (ms), above which a
                                       sample of statements is logged. The sample size is set using the
                                       log_statement_sample_rate parameter.

log_min_duration_statement   –         Any SQL statement that runs for at least the specified amount of time
                                       gets logged. By default, this parameter isn't set. Turning on this
                                       parameter can help you find unoptimized queries.

log_statement                –         Sets the type of statements logged. By default, this parameter isn't
                                       set, but you can change it to all, ddl, or mod to specify the types
                                       of SQL statements that you want logged. If you specify anything other
                                       than none for this parameter, you should also take additional steps
                                       to prevent the exposure of passwords in the log files. For more
                                       information, see Mitigating risk of password exposure when using
                                       query logging (p. 936).

log_statement_sample_rate    –         The percentage of statements exceeding the time specified in
                                       log_min_duration_sample to be logged, expressed as a floating point
                                       value between 0.0 and 1.0.

log_statement_stats          –         Writes cumulative performance statistics to the server log.

Using logging to find slow performing queries


You can log SQL statements and queries to help find slow-performing queries. You turn on this capability
by modifying the settings in the log_statement and log_min_duration_statement parameters as
outlined in this section. Before turning on query logging for your RDS for PostgreSQL DB instance, you
should be aware of possible password exposure in the logs and how to mitigate the risks. For more
information, see Mitigating risk of password exposure when using query logging (p. 936).

Following, you can find reference information about the log_statement and
log_min_duration_statement parameters.

log_statement

This parameter specifies the type of SQL statements that should get sent to the log. The default value
is none. If you change this parameter to all, ddl, or mod, be sure to apply recommended actions
to mitigate the risk of exposing passwords in the logs. For more information, see Mitigating risk of
password exposure when using query logging (p. 936).


all

Logs all statements. This setting is recommended for debugging purposes.

ddl

Logs all data definition language (DDL) statements, such as CREATE, ALTER, DROP, and so on.

mod

Logs all DDL statements and data manipulation language (DML) statements, such as INSERT,
UPDATE, and DELETE, which modify the data.

none

No SQL statements get logged. We recommend this setting to avoid the risk of exposing passwords
in the logs.

log_min_duration_statement

Any SQL statement that runs for at least the specified amount of time gets logged. By default,
this parameter isn't set. Turning on this parameter can help you find unoptimized queries.

-1 to 2147483647

The number of milliseconds (ms) of runtime over which a statement gets logged.
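
Both parameters are dynamic, so you can typically set them in your custom DB parameter group with
the AWS CLI without rebooting the instance. The following sketch uses a hypothetical parameter
group named my-postgres-params and logs statements that run for one second (1000 ms) or longer.

aws rds modify-db-parameter-group \
    --db-parameter-group-name my-postgres-params \
    --parameters \
        "ParameterName=log_statement,ParameterValue=all,ApplyMethod=immediate" \
        "ParameterName=log_min_duration_statement,ParameterValue=1000,ApplyMethod=immediate"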

To set up query logging

These steps assume that your RDS for PostgreSQL DB instance uses a custom DB parameter group.

1. Set the log_statement parameter to all. The following example shows the information that is
written to the postgresql.log file with this parameter setting.

2022-10-05 22:05:52 UTC:52.95.4.1(11335):postgres@labdb:[3639]:LOG: statement: SELECT


feedback, s.sentiment,s.confidence
FROM support,aws_comprehend.detect_sentiment(feedback, 'en') s
ORDER BY s.confidence DESC;
2022-10-05 22:05:52 UTC:52.95.4.1(11335):postgres@labdb:[3639]:LOG: QUERY STATISTICS
2022-10-05 22:05:52 UTC:52.95.4.1(11335):postgres@labdb:[3639]:DETAIL: ! system usage
stats:
! 0.017355 s user, 0.000000 s system, 0.168593 s elapsed
! [0.025146 s user, 0.000000 s system total]
! 36644 kB max resident size
! 0/8 [0/8] filesystem blocks in/out
! 0/733 [0/1364] page faults/reclaims, 0 [0] swaps
! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent
! 19/0 [27/0] voluntary/involuntary context switches
2022-10-05 22:05:52 UTC:52.95.4.1(11335):postgres@labdb:[3639]:STATEMENT: SELECT
feedback, s.sentiment,s.confidence
FROM support,aws_comprehend.detect_sentiment(feedback, 'en') s
ORDER BY s.confidence DESC;
2022-10-05 22:05:56 UTC:52.95.4.1(11335):postgres@labdb:[3639]:ERROR: syntax error at
or near "ORDER" at character 1
2022-10-05 22:05:56 UTC:52.95.4.1(11335):postgres@labdb:[3639]:STATEMENT: ORDER BY
s.confidence DESC;
----------------------- END OF LOG ----------------------

2. Set the log_min_duration_statement parameter. The following example shows the information
that is written to the postgresql.log file when the parameter is set to 1.


Queries that exceed the duration specified in the log_min_duration_statement parameter are
logged. The following shows an example. You can view the log file for your RDS for PostgreSQL DB
instance in the Amazon RDS Console.

2022-10-05 19:05:19 UTC:52.95.4.1(6461):postgres@labdb:[6144]:LOG: statement: DROP


table comments;
2022-10-05 19:05:19 UTC:52.95.4.1(6461):postgres@labdb:[6144]:LOG: duration: 167.754 ms
2022-10-05 19:08:07 UTC::@:[355]:LOG: checkpoint starting: time
2022-10-05 19:08:08 UTC::@:[355]:LOG: checkpoint complete: wrote 11 buffers (0.0%); 0
WAL file(s) added, 0 removed, 0 recycled; write=1.013 s, sync=0.006 s, total=1.033 s;
sync files=8, longest=0.004 s, average=0.001 s; distance=131028 kB, estimate=131028 kB
----------------------- END OF LOG ----------------------

Mitigating risk of password exposure when using query logging

We recommend that you keep log_statement set to none to avoid exposing passwords. If you set
log_statement to all, ddl, or mod, we recommend that you take one or more of the following steps.

• For the client, encrypt sensitive information. For more information, see Encryption Options in the
PostgreSQL documentation. Use the ENCRYPTED (and UNENCRYPTED) options of the CREATE and
ALTER statements. For more information, see CREATE USER in the PostgreSQL documentation.
• For your RDS for PostgreSQL DB instance, set up and use the PostgreSQL Auditing (pgAudit) extension.
This extension redacts sensitive information in CREATE and ALTER statements sent to the log. For
more information, see Using pgAudit to log database activity (p. 2362).
• Restrict access to the CloudWatch logs.
• Use stronger authentication mechanisms such as IAM.

Publishing PostgreSQL logs to Amazon CloudWatch Logs


To store your PostgreSQL log records in highly durable storage, you can use Amazon CloudWatch Logs.
With CloudWatch Logs, you can also perform real-time analysis of log data and use CloudWatch to view
metrics and create alarms. For example, if you set log_statement to ddl, you can set up an alarm to
alert you whenever a DDL statement is executed. You can choose to have your PostgreSQL logs uploaded
to CloudWatch Logs during the process of creating your RDS for PostgreSQL DB instance. If you chose
not to upload logs at that time, you can later modify your instance to start uploading logs from that
point forward. In other words, existing logs aren't uploaded. Only new logs are uploaded as they're
created on your modified RDS for PostgreSQL DB instance.

All currently available RDS for PostgreSQL versions support publishing log files to CloudWatch Logs. For
more information, see Amazon RDS for PostgreSQL updates in the Amazon RDS for PostgreSQL Release
Notes.

To work with CloudWatch Logs, configure your RDS for PostgreSQL DB instance to publish log data to a
log group.

You can publish the following log types to CloudWatch Logs for RDS for PostgreSQL:

• Postgresql log
• Upgrade log

After you complete the configuration, Amazon RDS publishes the log events to log streams within a
CloudWatch log group. For example, the PostgreSQL log data is stored within the log group /aws/rds/
instance/my_instance/postgresql. To view your logs, open the CloudWatch console at https://
console.aws.amazon.com/cloudwatch/.
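
After publishing is turned on, you can also follow the stream from the command line with AWS CLI
version 2. The following is a sketch; the instance name mydbinstance is hypothetical.

aws logs tail /aws/rds/instance/mydbinstance/postgresql --follow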


Console

To publish PostgreSQL logs to CloudWatch Logs using the console

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to modify, and then choose Modify.
4. In the Log exports section, choose the logs that you want to start publishing to CloudWatch Logs.

The Log exports section is available only for PostgreSQL versions that support publishing to
CloudWatch Logs.
5. Choose Continue, and then choose Modify DB Instance on the summary page.

AWS CLI

You can publish PostgreSQL logs with the AWS CLI. You can call the modify-db-instance command
with the following parameters.

• --db-instance-identifier
• --cloudwatch-logs-export-configuration

Note
A change to the --cloudwatch-logs-export-configuration option is always applied
to the DB instance immediately. Therefore, the --apply-immediately and --no-apply-
immediately options have no effect.

You can also publish PostgreSQL logs by calling the following CLI commands:

• create-db-instance
• restore-db-instance-from-db-snapshot
• restore-db-instance-to-point-in-time

Run one of these CLI commands with the following options:

• --db-instance-identifier
• --enable-cloudwatch-logs-exports
• --db-instance-class
• --engine

Other options might be required depending on the CLI command you run.

Example Modify an instance to publish logs to CloudWatch Logs

The following example modifies an existing PostgreSQL DB instance to publish log files to CloudWatch
Logs. The --cloudwatch-logs-export-configuration value is a JSON object. The key for this
object is EnableLogTypes, and its value is an array of strings with any combination of postgresql and
upgrade.

For Linux, macOS, or Unix:

aws rds modify-db-instance \


--db-instance-identifier mydbinstance \
--cloudwatch-logs-export-configuration '{"EnableLogTypes":["postgresql", "upgrade"]}'


For Windows:

aws rds modify-db-instance ^


--db-instance-identifier mydbinstance ^
--cloudwatch-logs-export-configuration '{"EnableLogTypes":["postgresql","upgrade"]}'

Example Create an instance to publish logs to CloudWatch Logs

The following example creates a PostgreSQL DB instance and publishes log files to CloudWatch Logs.
The --enable-cloudwatch-logs-exports value is a JSON array of strings. The strings can be any
combination of postgresql and upgrade.

For Linux, macOS, or Unix:

aws rds create-db-instance \


--db-instance-identifier mydbinstance \
--enable-cloudwatch-logs-exports '["postgresql","upgrade"]' \
--db-instance-class db.m4.large \
--engine postgres

For Windows:

aws rds create-db-instance ^


--db-instance-identifier mydbinstance ^
--enable-cloudwatch-logs-exports '["postgresql","upgrade"]' ^
--db-instance-class db.m4.large ^
--engine postgres

RDS API

You can publish PostgreSQL logs with the RDS API. You can call the ModifyDBInstance action with the
following parameters:

• DBInstanceIdentifier
• CloudwatchLogsExportConfiguration

Note
A change to the CloudwatchLogsExportConfiguration parameter is always applied to the
DB instance immediately. Therefore, the ApplyImmediately parameter has no effect.

You can also publish PostgreSQL logs by calling the following RDS API operations:

• CreateDBInstance
• RestoreDBInstanceFromDBSnapshot
• RestoreDBInstanceToPointInTime

Run one of these RDS API operations with the following parameters:

• DBInstanceIdentifier
• EnableCloudwatchLogsExports


• Engine
• DBInstanceClass

Other parameters might be required depending on the operation that you run.


Monitoring Amazon RDS API calls in AWS


CloudTrail
AWS CloudTrail is an AWS service that helps you audit your AWS account. AWS CloudTrail is turned on
for your AWS account when you create it. For more information about CloudTrail, see the AWS CloudTrail
User Guide.

Topics
• CloudTrail integration with Amazon RDS (p. 940)
• Amazon RDS log file entries (p. 940)

CloudTrail integration with Amazon RDS


All Amazon RDS actions are logged by CloudTrail. CloudTrail provides a record of actions taken by a user,
role, or an AWS service in Amazon RDS.

CloudTrail events
CloudTrail captures API calls for Amazon RDS as events. An event represents a single request from any
source and includes information about the requested action, the date and time of the action, request
parameters, and so on. Events include calls from the Amazon RDS console and from code calls to the
Amazon RDS API operations.

Amazon RDS activity is recorded in a CloudTrail event in Event history. You can use the CloudTrail
console to view the last 90 days of recorded API activity and events in an AWS Region. For more
information, see Viewing events with CloudTrail event history.
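
For a quick command-line view of recent Amazon RDS activity, you can filter Event history by event
source. The following is one possible sketch using the AWS CLI.

aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventSource,AttributeValue=rds.amazonaws.com \
    --max-results 20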

CloudTrail trails
For an ongoing record of events in your AWS account, including events for Amazon RDS, create a trail.
A trail is a configuration that enables delivery of events to a specified Amazon S3 bucket. CloudTrail
typically delivers log files within 15 minutes of account activity.
Note
If you don't configure a trail, you can still view the most recent events in the CloudTrail console
in Event history.

You can create two types of trails for an AWS account: a trail that applies to all Regions, or a trail that
applies to one Region. By default, when you create a trail in the console, the trail applies to all Regions.

Additionally, you can configure other AWS services to further analyze and act upon the event data
collected in CloudTrail logs. For more information, see:

• Overview for creating a trail


• CloudTrail supported services and integrations
• Configuring Amazon SNS notifications for CloudTrail
• Receiving CloudTrail log files from multiple Regions and Receiving CloudTrail log files from multiple
accounts

Amazon RDS log file entries


CloudTrail log files contain one or more log entries. CloudTrail log files are not an ordered stack trace of
the public API calls, so they do not appear in any specific order.


The following example shows a CloudTrail log entry that demonstrates the CreateDBInstance action.

{
"eventVersion": "1.04",
"userIdentity": {
"type": "IAMUser",
"principalId": "AKIAIOSFODNN7EXAMPLE",
"arn": "arn:aws:iam::123456789012:user/johndoe",
"accountId": "123456789012",
"accessKeyId": "AKIAI44QH8DHBEXAMPLE",
"userName": "johndoe"
},
"eventTime": "2018-07-30T22:14:06Z",
"eventSource": "rds.amazonaws.com",
"eventName": "CreateDBInstance",
"awsRegion": "us-east-1",
"sourceIPAddress": "192.0.2.0",
"userAgent": "aws-cli/1.15.42 Python/3.6.1 Darwin/17.7.0 botocore/1.10.42",
"requestParameters": {
"enableCloudwatchLogsExports": [
"audit",
"error",
"general",
"slowquery"
],
"dBInstanceIdentifier": "test-instance",
"engine": "mysql",
"masterUsername": "myawsuser",
"allocatedStorage": 20,
"dBInstanceClass": "db.m1.small",
"masterUserPassword": "****"
},
"responseElements": {
"dBInstanceArn": "arn:aws:rds:us-east-1:123456789012:db:test-instance",
"storageEncrypted": false,
"preferredBackupWindow": "10:27-10:57",
"preferredMaintenanceWindow": "sat:05:47-sat:06:17",
"backupRetentionPeriod": 1,
"allocatedStorage": 20,
"storageType": "standard",
"engineVersion": "8.0.28",
"dbInstancePort": 0,
"optionGroupMemberships": [
{
"status": "in-sync",
"optionGroupName": "default:mysql-8-0"
}
],
"dBParameterGroups": [
{
"dBParameterGroupName": "default.mysql8.0",
"parameterApplyStatus": "in-sync"
}
],
"monitoringInterval": 0,
"dBInstanceClass": "db.m1.small",
"readReplicaDBInstanceIdentifiers": [],
"dBSubnetGroup": {
"dBSubnetGroupName": "default",
"dBSubnetGroupDescription": "default",
"subnets": [
{
"subnetAvailabilityZone": {"name": "us-east-1b"},
"subnetIdentifier": "subnet-cbfff283",


"subnetStatus": "Active"
},
{
"subnetAvailabilityZone": {"name": "us-east-1e"},
"subnetIdentifier": "subnet-d7c825e8",
"subnetStatus": "Active"
},
{
"subnetAvailabilityZone": {"name": "us-east-1f"},
"subnetIdentifier": "subnet-6746046b",
"subnetStatus": "Active"
},
{
"subnetAvailabilityZone": {"name": "us-east-1c"},
"subnetIdentifier": "subnet-bac383e0",
"subnetStatus": "Active"
},
{
"subnetAvailabilityZone": {"name": "us-east-1d"},
"subnetIdentifier": "subnet-42599426",
"subnetStatus": "Active"
},
{
"subnetAvailabilityZone": {"name": "us-east-1a"},
"subnetIdentifier": "subnet-da327bf6",
"subnetStatus": "Active"
}
],
"vpcId": "vpc-136a4c6a",
"subnetGroupStatus": "Complete"
},
"masterUsername": "myawsuser",
"multiAZ": false,
"autoMinorVersionUpgrade": true,
"engine": "mysql",
"cACertificateIdentifier": "rds-ca-2015",
"dbiResourceId": "db-ETDZIIXHEWY5N7GXVC4SH7H5IA",
"dBSecurityGroups": [],
"pendingModifiedValues": {
"masterUserPassword": "****",
"pendingCloudwatchLogsExports": {
"logTypesToEnable": [
"audit",
"error",
"general",
"slowquery"
]
}
},
"dBInstanceStatus": "creating",
"publiclyAccessible": true,
"domainMemberships": [],
"copyTagsToSnapshot": false,
"dBInstanceIdentifier": "test-instance",
"licenseModel": "general-public-license",
"iAMDatabaseAuthenticationEnabled": false,
"performanceInsightsEnabled": false,
"vpcSecurityGroups": [
{
"status": "active",
"vpcSecurityGroupId": "sg-f839b688"
}
]
},
"requestID": "daf2e3f5-96a3-4df7-a026-863f96db793e",
"eventID": "797163d3-5726-441d-80a7-6eeb7464acd4",


"eventType": "AwsApiCall",
"recipientAccountId": "123456789012"
}

As shown in the userIdentity element in the preceding example, every event or log entry contains
information about who generated the request. The identity information helps you determine the
following:

• Whether the request was made with root or IAM user credentials.
• Whether the request was made with temporary security credentials for a role or federated user.
• Whether the request was made by another AWS service.

For more information about the userIdentity, see the CloudTrail userIdentity element. For more
information about CreateDBInstance and other Amazon RDS actions, see the Amazon RDS API
Reference.


Monitoring Amazon RDS with Database Activity


Streams
By using Database Activity Streams, you can monitor near real-time streams of database activity.

Topics
• Overview of Database Activity Streams (p. 944)
• Configuring unified auditing for Oracle Database (p. 948)
• Configuring auditing policy for Microsoft SQL Server (p. 949)
• Starting a database activity stream (p. 950)
• Modifying a database activity stream (p. 951)
• Getting the status of a database activity stream (p. 953)
• Stopping a database activity stream (p. 954)
• Monitoring database activity streams (p. 955)
• Managing access to database activity streams (p. 975)

Overview of Database Activity Streams


As an Amazon RDS database administrator, you need to safeguard your database and meet compliance
and regulatory requirements. One strategy is to integrate database activity streams with your monitoring
tools. In this way, you monitor and set alarms for auditing activity in your database.

Security threats are both external and internal. To protect against internal threats, you can control
administrator access to data streams by configuring the Database Activity Streams feature. Amazon RDS
DBAs don't have access to the collection, transmission, storage, and processing of the streams.

Topics
• How database activity streams work (p. 944)
• Auditing in Oracle Database and Microsoft SQL Server Database (p. 945)
• Asynchronous mode for database activity streams (p. 947)
• Requirements and limitations for database activity streams (p. 947)
• Region and version availability (p. 947)
• Supported DB instance classes for database activity streams (p. 947)

How database activity streams work


Amazon RDS pushes activities to an Amazon Kinesis data stream in near real time. The Kinesis stream
is created automatically. From Kinesis, you can configure AWS services such as Amazon Kinesis Data
Firehose and AWS Lambda to consume the stream and store the data.
Important
Use of the database activity streams feature in Amazon RDS is free, but Amazon Kinesis charges
for a data stream. For more information, see Amazon Kinesis Data Streams pricing.

You can configure applications for compliance management to consume database activity streams. These
applications can use the stream to generate alerts and audit activity on your database.

Amazon RDS supports database activity streams in Multi-AZ deployments. In this case, database activity
streams audit both the primary and standby instances.
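
As an illustration of the workflow, the following AWS CLI sketch starts an asynchronous activity
stream on a DB instance and then inspects the Kinesis stream that Amazon RDS creates. The resource
ARN, KMS key, and stream name are hypothetical; for the full procedure, see Starting a database
activity stream (p. 950).

aws rds start-activity-stream \
    --resource-arn arn:aws:rds:us-east-1:123456789012:db:my-oracle-db \
    --mode async \
    --kms-key-id my-kms-key-id \
    --apply-immediately

aws kinesis describe-stream-summary --stream-name aws-rds-das-db-EXAMPLESTREAM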


Auditing in Oracle Database and Microsoft SQL Server Database


Auditing is the monitoring and recording of configured database actions. Amazon RDS doesn't capture
database activity by default. You create and manage audit policies in your database yourself.

Topics
• Unified auditing in Oracle Database (p. 945)
• Auditing in Microsoft SQL Server (p. 945)
• Non-native audit fields for Oracle Database and SQL Server (p. 946)
• DB parameter group override (p. 946)

Unified auditing in Oracle Database


In an Oracle database, a unified audit policy is a named group of audit settings that you can use to audit
an aspect of user behavior. A policy can be as simple as auditing the activities of a single user. You can
also create complex audit policies that use conditions.

An Oracle database writes audit records, including SYS audit records, to the unified audit trail. For
example, if an error occurs during an INSERT statement, standard auditing indicates the error number
and the SQL that was run. The audit trail resides in a read-only table in the AUDSYS schema. To access
these records, query the UNIFIED_AUDIT_TRAIL data dictionary view.

Typically, you configure database activity streams as follows:

1. Create an Oracle Database audit policy by using the CREATE AUDIT POLICY command.

The Oracle Database generates audit records.


2. Activate the audit policy by using the AUDIT POLICY command.
3. Configure database activity streams.

Only activities that match the Oracle Database audit policies are captured and sent to the Amazon
Kinesis data stream. When database activity streams are enabled, an Oracle database administrator
can't alter the audit policy or remove audit logs.

To learn more about unified audit policies, see About Auditing Activities with Unified Audit Policies and
AUDIT in the Oracle Database Security Guide.
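
For example, a minimal policy that audits data changes on a single table might look like the
following, run from SQL*Plus as the master user. This is a sketch; the endpoint, credentials,
policy name, and hr.employees table are all hypothetical.

# EZConnect string and ADMIN_PASSWORD variable are placeholders
sqlplus "admin/${ADMIN_PASSWORD}@//myoracledb.123456789012abc.us-east-1.rds.amazonaws.com:1521/ORCL" <<'SQL'
-- Create a unified audit policy for DML on a hypothetical table
CREATE AUDIT POLICY hr_dml_policy
    ACTIONS INSERT ON hr.employees, UPDATE ON hr.employees, DELETE ON hr.employees;
-- Activate the policy
AUDIT POLICY hr_dml_policy;
EXIT
SQL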

Auditing in Microsoft SQL Server


Database Activity Streams uses the SQLAudit feature to audit the SQL Server database.

An RDS for SQL Server instance contains the following:

• Server audit – The SQL server audit collects a single instance of server or database-level actions, and a
group of actions to monitor. The server-level audits RDS_DAS_AUDIT and RDS_DAS_AUDIT_CHANGES
are managed by RDS.
• Server audit specification – The server audit specification records the server-level events. You can
modify the RDS_DAS_SERVER_AUDIT_SPEC specification. This specification is linked to the server
audit RDS_DAS_AUDIT. The RDS_DAS_CHANGES_AUDIT_SPEC specification is managed by RDS.
• Database audit specification – The database audit specification records the database-level events. You
can create a database audit specification RDS_DAS_DB_<name> and link it to RDS_DAS_AUDIT server
audit.

You can configure database activity streams by using the console or CLI. Typically, you configure
database activity streams as follows:


1. (Optional) Create a database audit specification with the CREATE DATABASE AUDIT
SPECIFICATION command and link it to RDS_DAS_AUDIT server audit.
2. (Optional) Modify the server audit specification with the ALTER SERVER AUDIT SPECIFICATION
command and define the policies.
3. Activate the database and server audit policies. For example:

ALTER DATABASE AUDIT SPECIFICATION [<Your database specification>] WITH (STATE=ON)

ALTER SERVER AUDIT SPECIFICATION [RDS_DAS_SERVER_AUDIT_SPEC] WITH (STATE=ON)


4. Configure database activity streams.

Only activities that match the server and database audit policies are captured and sent to the Amazon
Kinesis data stream. When database activity streams are enabled and the policies are locked, a
database administrator can't alter the audit policy or remove audit logs.
Important
If the database audit specification for a specific database is enabled and the policy is in a
locked state, then the database can't be dropped.

For more information about SQL Server auditing, see SQL Server Audit Components in the Microsoft SQL
Server documentation.

Non-native audit fields for Oracle Database and SQL Server


When you start a database activity stream, every database event generates a corresponding activity
stream event. For example, a database user might run SELECT and INSERT statements. The database
audits these events and sends them to an Amazon Kinesis data stream.

The events are represented in the stream as JSON objects. A JSON object contains a
DatabaseActivityMonitoringRecord, which contains a databaseActivityEventList array.
Predefined fields in the array include class, clientApplication, and command.

By default, an activity stream doesn't include engine-native audit fields. You can configure Amazon RDS
for Oracle and SQL Server so that it includes these extra fields in the engineNativeAuditFields
JSON object.

In Oracle Database, most events in the unified audit trail map to fields in the RDS data activity
stream. For example, the UNIFIED_AUDIT_TRAIL.SQL_TEXT field in unified auditing maps to the
commandText field in a database activity stream. However, Oracle Database audit fields such as
OS_USERNAME don't map to predefined fields in a database activity stream.

In SQL Server, most of the event's fields that are recorded by the SQLAudit map to the fields in RDS
database activity stream. For example, the code field from sys.fn_get_audit_file in the audit maps
to the commandText field in a database activity stream. However, SQL Server database audit fields, such
as permission_bitmask, don’t map to predefined fields in a database activity stream.

For more information about databaseActivityEventList, see databaseActivityEventList JSON
array (p. 968).

DB parameter group override


Typically, you turn on unified auditing in RDS for Oracle by attaching a parameter group. However,
database activity streams require additional configuration. To simplify your experience, Amazon RDS
does the following:

• If you activate an activity stream, RDS for Oracle ignores the auditing parameters in the parameter
group.


• If you deactivate an activity stream, RDS for Oracle stops ignoring the auditing parameters.

The database activity stream for SQL Server is independent of any parameters you set in the SQL Audit
option.

Asynchronous mode for database activity streams


Activity streams in Amazon RDS are always asynchronous. When a database session generates an activity
stream event, the session returns to normal activities immediately. In the background, Amazon RDS
makes the activity stream event into a durable record.

If an error occurs in the background task, Amazon RDS generates an event. This event indicates the
beginning and end of any time windows where activity stream event records might have been lost.
Asynchronous mode favors database performance over the accuracy of the activity stream.

Requirements and limitations for database activity streams


In RDS, database activity streams have the following requirements and limitations:

• Amazon Kinesis is required for database activity streams.


• AWS Key Management Service (AWS KMS) is required for database activity streams because they are
always encrypted.
• Applying additional encryption to your Amazon Kinesis data stream is incompatible with database
activity streams, which are already encrypted with your AWS KMS key.
• You create and manage audit policies or specifications yourself. Unlike Amazon Aurora, Amazon RDS
doesn't capture database activities by default.
• In a Multi-AZ deployment, start the database activity stream only on the primary DB instance. The
activity stream audits both the primary and standby DB instances automatically. No additional steps
are required during a failover.
• Renaming a DB instance doesn't create a new Kinesis stream.
• CDBs aren't supported for RDS for Oracle.
• Read replicas aren't supported.

Region and version availability


Feature availability and support varies across specific versions of each database engine, and across AWS
Regions. For more information on version and Region availability with database activity streams, see
Database activity streams (p. 121).

Supported DB instance classes for database activity streams


For RDS for Oracle you can use database activity streams with the following DB instance classes:

• db.m4.*large
• db.m5.*large
• db.m5d.*large
• db.m6i.*large
• db.r4.*large
• db.r5.*large


• db.r5.*large.tpc*.mem*x
• db.r5b.*large
• db.r5b.*large.tpc*.mem*x
• db.r5d.*large
• db.r6i.*large
• db.x2idn.*large
• db.x2iedn.*large
• db.x2iezn.*large
• db.z1d.*large

For RDS for SQL Server you can use database activity streams with the following DB instance classes:

• db.m4.*large
• db.m5.*large
• db.m5d.*large
• db.m6i.*large
• db.r4.*large
• db.r5.*large
• db.r5b.*large
• db.r5d.*large
• db.r6i.*large
• db.x1e.*large
• db.z1d.*large

For more information about instance class types, see DB instance classes (p. 11).
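
To confirm which instance class your DB instance uses before you start an activity stream, you can query
it with the AWS CLI. The instance identifier is a placeholder:

aws rds describe-db-instances \
    --db-instance-identifier my-instance \
    --query 'DBInstances[0].DBInstanceClass' \
    --output text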

Configuring unified auditing for Oracle Database


When you configure unified auditing for use with database activity streams, the following situations are
possible:

• Unified auditing isn't configured for your Oracle database.

In this case, create new policies with the CREATE AUDIT POLICY command, then activate them with
the AUDIT POLICY command. The following example creates and activates a policy to monitor users
with specific privileges and roles.

CREATE AUDIT POLICY table_pol
    PRIVILEGES CREATE ANY TABLE, DROP ANY TABLE
    ROLES emp_admin, sales_admin;

AUDIT POLICY table_pol;

For complete instructions, see Configuring Audit Policies in the Oracle Database documentation.
• Unified auditing is configured for your Oracle database.

When you activate a database activity stream, RDS for Oracle automatically clears existing audit data.
It also revokes audit trail privileges. RDS for Oracle can no longer do the following:
• Purge unified audit trail records.


• Add, delete, or modify the unified audit policy.


• Update the last archived timestamp.
Important
We strongly recommend that you back up your audit data before activating a database
activity stream.

For a description of the UNIFIED_AUDIT_TRAIL view, see UNIFIED_AUDIT_TRAIL. If you have an
account with Oracle Support, see How To Purge The UNIFIED AUDIT TRAIL.

Configuring auditing policy for Microsoft SQL Server


A SQL Server database instance has the server audit RDS_DAS_AUDIT, which is managed by
Amazon RDS. You can define the policies to record server events in the server audit specification
RDS_DAS_SERVER_AUDIT_SPEC. You can create a database audit specification, such as
RDS_DAS_DB_<name>, and define the policies to record database events. For the list of server and
database level audit action groups, see SQL Server Audit Action Groups and Actions in the Microsoft SQL
Server documentation.

The default server policy monitors only failed logins and changes to any database or server audit
specifications for database activity streams.

Limitations for the audit and audit specifications include the following:

• You can't modify the server or database audit specifications when the database activity stream is in a
locked state.
• You can't modify the server audit RDS_DAS_AUDIT specification.
• You can't modify the SQL Server audit RDS_DAS_CHANGES or its related server audit specification
RDS_DAS_CHANGES_AUDIT_SPEC.
• When creating a database audit specification, you must use the format RDS_DAS_DB_<name>, for
example, RDS_DAS_DB_databaseActions.

Important
For smaller instance classes, we recommend that you audit only the data that you require rather than
all data. This helps to reduce the performance impact of database activity streams on these instance
classes.

The following sample code modifies the server audit specification RDS_DAS_SERVER_AUDIT_SPEC and
audits any logout and successful login actions:

ALTER SERVER AUDIT SPECIFICATION [RDS_DAS_SERVER_AUDIT_SPEC]
    WITH (STATE=OFF);

ALTER SERVER AUDIT SPECIFICATION [RDS_DAS_SERVER_AUDIT_SPEC]
    ADD (LOGOUT_GROUP),
    ADD (SUCCESSFUL_LOGIN_GROUP)
    WITH (STATE = ON);

The following sample code creates a database audit specification RDS_DAS_DB_database_spec and
attaches it to the server audit RDS_DAS_AUDIT:

USE testDB;
CREATE DATABASE AUDIT SPECIFICATION [RDS_DAS_DB_database_spec]
FOR SERVER AUDIT [RDS_DAS_AUDIT]
ADD ( INSERT, UPDATE, DELETE
ON testTable BY testUser )


WITH (STATE = ON);

After the audit specifications are configured, make sure that the specifications
RDS_DAS_SERVER_AUDIT_SPEC and RDS_DAS_DB_<name> are set to a state of ON. Now they can send
the audit data to your database activity stream.
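
One way to verify this is to query the SQL Server audit catalog views. These views are standard SQL
Server metadata and aren't specific to RDS:

SELECT name, is_state_enabled FROM sys.server_audit_specifications;
SELECT name, is_state_enabled FROM sys.database_audit_specifications;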

Starting a database activity stream


When you start an activity stream for the DB instance, each database activity event that you configured
in the audit policy generates an activity stream event. SQL commands such as CONNECT and SELECT
generate access events. SQL commands such as CREATE and INSERT generate change events.
Important
Turning on an activity stream for an Oracle DB instance clears existing audit data. It also
revokes audit trail privileges. When the stream is enabled, RDS for Oracle can no longer do the
following:

• Purge unified audit trail records.


• Add, delete, or modify the unified audit policy.
• Update the last archived time stamp.

Console
To start a database activity stream

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the navigation pane, choose Databases.
3. Choose the Amazon RDS database instance on which you want to start an activity stream. In a Multi-
AZ deployment, start the stream on only the primary instance. The activity stream audits both the
primary and the standby instances.
4. For Actions, choose Start activity stream.

The Start database activity stream: name window appears, where name is your RDS instance.
5. Enter the following settings:

• For AWS KMS key, choose a key from the list of AWS KMS keys.

Amazon RDS uses the KMS key to encrypt the key that in turn encrypts database activity. Choose
a KMS key other than the default key. For more information about encryption keys and AWS KMS,
see What is AWS Key Management Service? in the AWS Key Management Service Developer Guide.
• For Database activity events, choose Enable engine-native audit fields to include the engine
specific audit fields.
• Choose Immediately.

When you choose Immediately, the RDS instance restarts right away. If you choose During the
next maintenance window, the RDS instance doesn't restart right away. In this case, the database
activity stream doesn't start until the next maintenance window.
6. Choose Start database activity stream.

The status for the database shows that the activity stream is starting.
Note
If you get the error You can't start a database activity stream in
this configuration, check Supported DB instance classes for database activity
streams (p. 947) to see whether your RDS instance is using a supported instance class.


AWS CLI
To start database activity streams for a DB instance, configure the database using the start-activity-
stream AWS CLI command.

• --resource-arn arn – Specifies the Amazon Resource Name (ARN) of the DB instance.
• --kms-key-id key – Specifies the KMS key identifier for encrypting messages in the database
activity stream. The AWS KMS key identifier is the key ARN, key ID, alias ARN, or alias name for the
AWS KMS key.
• --engine-native-audit-fields-included – Includes engine-specific auditing fields in the data
stream. To exclude these fields, specify --no-engine-native-audit-fields-included (default).

The following example starts a database activity stream for a DB instance in asynchronous mode.

For Linux, macOS, or Unix:

aws rds start-activity-stream \
    --mode async \
    --kms-key-id my-kms-key-arn \
    --resource-arn my-instance-arn \
    --engine-native-audit-fields-included \
    --apply-immediately

For Windows:

aws rds start-activity-stream ^
    --mode async ^
    --kms-key-id my-kms-key-arn ^
    --resource-arn my-instance-arn ^
    --engine-native-audit-fields-included ^
    --apply-immediately

RDS API
To start database activity streams for a DB instance, configure the instance using the StartActivityStream
operation.

Call the action with the parameters below:

• Region
• KmsKeyId
• ResourceArn
• Mode
• EngineNativeAuditFieldsIncluded

Modifying a database activity stream


You might want to customize your Amazon RDS audit policy when your activity stream is started. If you
don't want to lose time and data by stopping your activity stream, you can change the audit policy state
to either of the following settings:

Locked (default)

The audit policies in your database are read-only.


Unlocked

The audit policies in your database are read/write.

The basic steps are as follows:

1. Modify the audit policy state to unlocked.


2. Customize your audit policy.
3. Modify the audit policy state to locked.
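
For example, with the AWS CLI the unlock, customize, and lock cycle looks like the following. The ARN is
a placeholder, and the middle step is whatever audit DDL you need to run in the database:

aws rds modify-activity-stream \
    --resource-arn my-instance-ARN \
    --audit-policy-state unlocked

# Connect to the database and change your audit policies or specifications here.

aws rds modify-activity-stream \
    --resource-arn my-instance-ARN \
    --audit-policy-state locked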

Console

To modify the audit policy state of your activity stream

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the navigation pane, choose Databases.
3. For Actions, choose Modify database activity stream.

The Modify database activity stream: name window appears, where name is your RDS instance.
4. Choose either of the following options:

Locked

When you lock your audit policy, it becomes read-only. You can't edit your audit policy unless
you unlock the policy or stop the activity stream.
Unlocked

When you unlock your audit policy, it becomes read/write. You can edit your audit policy while
the activity stream is started.
5. Choose Modify DB activity stream.

The status for the Amazon RDS database shows Configuring activity stream.
6. (Optional) Choose the DB instance link. Then choose the Configuration tab.

The Audit policy status field shows one of the following values:

• Locked
• Unlocked
• Locking policy
• Unlocking policy

AWS CLI
To modify the activity stream state for the database instance, use the modify-activity-stream AWS CLI
command.

• --resource-arn my-instance-ARN – (Required) The Amazon Resource Name (ARN) of your RDS database
instance.
• --audit-policy-state – (Optional) The new state of the audit policy for the database activity stream on
your instance: locked or unlocked.

The following example unlocks the audit policy for the activity stream started on my-instance-ARN.

For Linux, macOS, or Unix:

aws rds modify-activity-stream \
    --resource-arn my-instance-ARN \
    --audit-policy-state unlocked

For Windows:

aws rds modify-activity-stream ^
    --resource-arn my-instance-ARN ^
    --audit-policy-state unlocked

The following example describes the instance my-instance. The partial sample output shows that the
audit policy is unlocked.

aws rds describe-db-instances --db-instance-identifier my-instance

{
"DBInstances": [
{
...
"Engine": "oracle-ee",
...
"ActivityStreamStatus": "started",
"ActivityStreamKmsKeyId": "ab12345e-1111-2bc3-12a3-ab1cd12345e",
"ActivityStreamKinesisStreamName": "aws-rds-das-db-AB1CDEFG23GHIJK4LMNOPQRST",
"ActivityStreamMode": "async",
"ActivityStreamEngineNativeAuditFieldsIncluded": true,
"ActivityStreamPolicyStatus": "unlocked",
...
}
]
}

RDS API
To modify the policy state of your database activity stream, use the ModifyActivityStream operation.

Call the action with the parameters below:

• AuditPolicyState
• ResourceArn

Getting the status of a database activity stream


You can get the status of an activity stream for your Amazon RDS database instance using the console or
AWS CLI.


Console
To get the status of a database activity stream

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the navigation pane, choose Databases, and then choose the DB instance link.
3. Choose the Configuration tab, and check Database activity stream for status.

AWS CLI
You can get the activity stream configuration for a database instance as the response to a describe-db-
instances CLI request.

The following example describes my-instance.

aws rds --region my-region describe-db-instances --db-instance-identifier my-instance

The following example shows a JSON response. The following fields are shown:

• ActivityStreamKinesisStreamName
• ActivityStreamKmsKeyId
• ActivityStreamStatus
• ActivityStreamMode
• ActivityStreamPolicyStatus

{
"DBInstances": [
{
...
"Engine": "oracle-ee",
...
"ActivityStreamStatus": "starting",
"ActivityStreamKmsKeyId": "ab12345e-1111-2bc3-12a3-ab1cd12345e",
"ActivityStreamKinesisStreamName": "aws-rds-das-db-AB1CDEFG23GHIJK4LMNOPQRST",
"ActivityStreamMode": "async",
"ActivityStreamEngineNativeAuditFieldsIncluded": true,
"ActivityStreamPolicyStatus": locked",
...
}
]
}
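
If you only need a single field, such as the audit policy status, you can filter the response with a
JMESPath --query expression. The expression shown here is illustrative:

aws rds describe-db-instances \
    --db-instance-identifier my-instance \
    --query 'DBInstances[0].ActivityStreamPolicyStatus' \
    --output text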

RDS API
You can get the activity stream configuration for a database as the response to a DescribeDBInstances
operation.

Stopping a database activity stream


You can stop an activity stream using the console or AWS CLI.

If you delete your Amazon RDS database instance, the activity stream is stopped and the underlying
Amazon Kinesis stream is deleted automatically.


Console
To turn off an activity stream

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the navigation pane, choose Databases.
3. Choose a database that you want to stop the database activity stream for.
4. For Actions, choose Stop activity stream. The Database Activity Stream window appears.

a. Choose Immediately.

When you choose Immediately, the RDS instance restarts right away. If you choose During
the next maintenance window, the RDS instance doesn't restart right away. In this case, the
database activity stream doesn't stop until the next maintenance window.
b. Choose Continue.

AWS CLI
To stop database activity streams for your database, configure the DB instance using the AWS CLI
command stop-activity-stream. Identify the AWS Region for the DB instance using the --region
parameter. The --apply-immediately parameter is optional.

For Linux, macOS, or Unix:

aws rds --region MY_REGION \
    stop-activity-stream \
    --resource-arn MY_DB_ARN \
    --apply-immediately

For Windows:

aws rds --region MY_REGION ^
    stop-activity-stream ^
    --resource-arn MY_DB_ARN ^
    --apply-immediately

RDS API
To stop database activity streams for your database, configure the DB instance using the
StopActivityStream operation. Identify the AWS Region for the DB instance using the Region parameter.
The ApplyImmediately parameter is optional.

Monitoring database activity streams


Database activity streams monitor and report activities. The stream of activity is collected and
transmitted to Amazon Kinesis. From Kinesis, you can monitor the activity stream, or other services
and applications can consume the activity stream for further analysis. You can find the underlying
Kinesis stream name by using the AWS CLI command describe-db-instances or the RDS API
DescribeDBInstances operation.

Amazon RDS manages the Kinesis stream for you as follows:

• Amazon RDS creates the Kinesis stream automatically with a 24-hour retention period.
• Amazon RDS scales the Kinesis stream if necessary.
• If you stop the database activity stream or delete the DB instance, Amazon RDS deletes the Kinesis
stream.


The following categories of activity are monitored and put in the activity stream audit log:

• SQL commands – All SQL commands are audited, including prepared statements, built-in functions,
and functions in PL/SQL. Calls to stored procedures are audited, as are any SQL statements issued
inside stored procedures or functions.
• Other database information – Activity monitored includes the full SQL statement, the row count
of affected rows from DML commands, accessed objects, and the unique database name. Database
activity streams also monitor the bind variables and stored procedure parameters.
Important
The full SQL text of each statement is visible in the activity stream audit log, including any
sensitive data. However, database user passwords are redacted if Oracle can determine them
from the context, such as in the following SQL statement.

ALTER ROLE role-name WITH password

• Connection information – Activity monitored includes session and network information, the server
process ID, and exit codes.

If an activity stream has a failure while monitoring your DB instance, you are notified through RDS
events.

Topics
• Accessing an activity stream from Kinesis (p. 956)
• Audit log contents and examples (p. 957)
• databaseActivityEventList JSON array (p. 968)
• Processing a database activity stream using the AWS SDK (p. 975)

Accessing an activity stream from Kinesis


When you enable an activity stream for a database, a Kinesis stream is created for you. From Kinesis,
you can monitor your database activity in real time. To further analyze database activity, you can
connect your Kinesis stream to consumer applications. You can also connect the stream to compliance
management applications such as IBM's Security Guardium or Imperva's SecureSphere Database Audit
and Protection.

You can access your Kinesis stream either from the RDS console or the Kinesis console.

To access an activity stream from Kinesis using the RDS console

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.


2. In the navigation pane, choose Databases.
3. Choose the Amazon RDS database instance on which you started an activity stream.
4. Choose Configuration.
5. Under Database activity stream, choose the link under Kinesis stream.
6. In the Kinesis console, choose Monitoring to begin observing the database activity.

To access an activity stream from Kinesis using the Kinesis console

1. Open the Kinesis console at https://console.aws.amazon.com/kinesis.


2. Choose your activity stream from the list of Kinesis streams.

An activity stream's name includes the prefix aws-rds-das-db- followed by the resource ID of the
database. The following is an example.


aws-rds-das-db-NHVOV4PCLWHGF52NP

To use the Amazon RDS console to find the resource ID for the database, choose your DB instance
from the list of databases, and then choose the Configuration tab.

To use the AWS CLI to find the full Kinesis stream name for an activity stream, use a describe-
db-instances CLI request and note the value of ActivityStreamKinesisStreamName in the
response.
3. Choose Monitoring to begin observing the database activity.

For more information about using Amazon Kinesis, see What Is Amazon Kinesis Data Streams?.
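
If you want to look at the raw stream from the AWS CLI rather than the console, you can read a batch of
records directly from Kinesis. The stream name is the example shown earlier, and the shard ID shown is
the typical first shard and is illustrative. The Data field in each returned record is base64-encoded and
contains the encrypted JSON described in the next section.

For Linux, macOS, or Unix:

SHARD_ITERATOR=$(aws kinesis get-shard-iterator \
    --stream-name aws-rds-das-db-NHVOV4PCLWHGF52NP \
    --shard-id shardId-000000000000 \
    --shard-iterator-type TRIM_HORIZON \
    --query 'ShardIterator' --output text)

aws kinesis get-records --shard-iterator "$SHARD_ITERATOR" --limit 10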

Audit log contents and examples


Monitored events are represented in the database activity stream as JSON strings. The structure consists
of a JSON object containing a DatabaseActivityMonitoringRecord, which in turn contains a
databaseActivityEventList array of activity events.

Topics
• Examples of an audit log for an activity stream (p. 957)
• DatabaseActivityMonitoringRecords JSON object (p. 966)
• databaseActivityEvents JSON Object (p. 966)

Examples of an audit log for an activity stream


Following are sample decrypted JSON audit logs of activity event records.

Example Activity event record of a CONNECT SQL statement

The following activity event record shows a login with the use of a CONNECT SQL statement (command)
by a JDBC Thin Client (clientApplication) for your Oracle DB.

{
"class": "Standard",
"clientApplication": "JDBC Thin Client",
"command": "LOGON",
"commandText": null,
"dbid": "0123456789",
"databaseName": "ORCL",
"dbProtocol": "oracle",
"dbUserName": "TEST",
"endTime": null,
"errorMessage": null,
"exitCode": 0,
"logTime": "2021-01-15 00:15:36.233787",
"netProtocol": "tcp",
"objectName": null,
"objectType": null,
"paramList": [],
"pid": 17904,
"remoteHost": "123.456.789.012",
"remotePort": "25440",
"rowCount": null,
"serverHost": "987.654.321.098",
"serverType": "oracle",
"serverVersion": "19.0.0.0.ru-2020-01.rur-2020-01.r1.EE.3",


"serviceName": "oracle-ee",
"sessionId": 987654321,
"startTime": null,
"statementId": 1,
"substatementId": null,
"transactionId": "0000000000000000",
"engineNativeAuditFields": {
"UNIFIED_AUDIT_POLICIES": "TEST_POL_EVERYTHING",
"FGA_POLICY_NAME": null,
"DV_OBJECT_STATUS": null,
"SYSTEM_PRIVILEGE_USED": "CREATE SESSION",
"OLS_LABEL_COMPONENT_TYPE": null,
"XS_SESSIONID": null,
"ADDITIONAL_INFO": null,
"INSTANCE_ID": 1,
"DBID": 123456789
"DV_COMMENT": null,
"RMAN_SESSION_STAMP": null,
"NEW_NAME": null,
"DV_ACTION_NAME": null,
"OLS_PROGRAM_UNIT_NAME": null,
"OLS_STRING_LABEL": null,
"RMAN_SESSION_RECID": null,
"OBJECT_PRIVILEGES": null,
"OLS_OLD_VALUE": null,
"XS_TARGET_PRINCIPAL_NAME": null,
"XS_NS_ATTRIBUTE": null,
"XS_NS_NAME": null,
"DBLINK_INFO": null,
"AUTHENTICATION_TYPE": "(TYPE\u003d(DATABASE));(CLIENT ADDRESS\u003d((ADDRESS
\u003d(PROTOCOL\u003dtcp)(HOST\u003d205.251.233.183)(PORT\u003d25440))));",
"OBJECT_EDITION": null,
"OLS_PRIVILEGES_GRANTED": null,
"EXCLUDED_USER": null,
"DV_ACTION_OBJECT_NAME": null,
"OLS_LABEL_COMPONENT_NAME": null,
"EXCLUDED_SCHEMA": null,
"DP_TEXT_PARAMETERS1": null,
"XS_USER_NAME": null,
"XS_ENABLED_ROLE": null,
"XS_NS_ATTRIBUTE_NEW_VAL": null,
"DIRECT_PATH_NUM_COLUMNS_LOADED": null,
"AUDIT_OPTION": null,
"DV_EXTENDED_ACTION_CODE": null,
"XS_PACKAGE_NAME": null,
"OLS_NEW_VALUE": null,
"DV_RETURN_CODE": null,
"XS_CALLBACK_EVENT_TYPE": null,
"USERHOST": "a1b2c3d4e5f6.amazon.com",
"GLOBAL_USERID": null,
"CLIENT_IDENTIFIER": null,
"RMAN_OPERATION": null,
"TERMINAL": "unknown",
"OS_USERNAME": "sumepate",
"OLS_MAX_READ_LABEL": null,
"XS_PROXY_USER_NAME": null,
"XS_DATASEC_POLICY_NAME": null,
"DV_FACTOR_CONTEXT": null,
"OLS_MAX_WRITE_LABEL": null,
"OLS_PARENT_GROUP_NAME": null,
"EXCLUDED_OBJECT": null,
"DV_RULE_SET_NAME": null,
"EXTERNAL_USERID": null,
"EXECUTION_ID": null,
"ROLE": null,
"PROXY_SESSIONID": 0,


"DP_BOOLEAN_PARAMETERS1": null,
"OLS_POLICY_NAME": null,
"OLS_GRANTEE": null,
"OLS_MIN_WRITE_LABEL": null,
"APPLICATION_CONTEXTS": null,
"XS_SCHEMA_NAME": null,
"DV_GRANTEE": null,
"XS_COOKIE": null,
"DBPROXY_USERNAME": null,
"DV_ACTION_CODE": null,
"OLS_PRIVILEGES_USED": null,
"RMAN_DEVICE_TYPE": null,
"XS_NS_ATTRIBUTE_OLD_VAL": null,
"TARGET_USER": null,
"XS_ENTITY_TYPE": null,
"ENTRY_ID": 1,
"XS_PROCEDURE_NAME": null,
"XS_INACTIVITY_TIMEOUT": null,
"RMAN_OBJECT_TYPE": null,
"SYSTEM_PRIVILEGE": null,
"NEW_SCHEMA": null,
"SCN": 5124715
}
}

The following activity event record shows a login failure for your SQL Server DB.

{
"type": "DatabaseActivityMonitoringRecord",
"clusterId": "",
"instanceId": "db-4JCWQLUZVFYP7DIWP6JVQ77O3Q",
"databaseActivityEventList": [
{
"class": "LOGIN",
"clientApplication": "Microsoft SQL Server Management Studio",
"command": "LOGIN FAILED",
"commandText": "Login failed for user 'test'. Reason: Password did not match
that for the login provided. [CLIENT: local-machine]",
"databaseName": "",
"dbProtocol": "SQLSERVER",
"dbUserName": "test",
"endTime": null,
"errorMessage": null,
"exitCode": 0,
"logTime": "2022-10-06 21:34:42.7113072+00",
"netProtocol": null,
"objectName": "",
"objectType": "LOGIN",
"paramList": null,
"pid": null,
"remoteHost": "local machine",
"remotePort": null,
"rowCount": 0,
"serverHost": "172.31.30.159",
"serverType": "SQLSERVER",
"serverVersion": "15.00.4073.23.v1.R1",
"serviceName": "sqlserver-ee",
"sessionId": 0,
"startTime": null,
"statementId": "0x1eb0d1808d34a94b9d3dcf5432750f02",
"substatementId": 1,
"transactionId": "0",
"type": "record",
"engineNativeAuditFields": {
"target_database_principal_id": 0,


"target_server_principal_id": 0,
"target_database_principal_name": "",
"server_principal_id": 0,
"user_defined_information": "",
"response_rows": 0,
"database_principal_name": "",
"target_server_principal_name": "",
"schema_name": "",
"is_column_permission": false,
"object_id": 0,
"server_instance_name": "EC2AMAZ-NFUJJNO",
"target_server_principal_sid": null,
"additional_information": "<action_info "xmlns=\"http://
schemas.microsoft.com/sqlserver/2008/sqlaudit_data\"><pooled_connection>0</
pooled_connection><error>0x00004818</error><state>8</state><address>local machine</
address><PasswordFirstNibbleHash>B</PasswordFirstNibbleHash></action_info>"-->,
"duration_milliseconds": 0,
"permission_bitmask": "0x00000000000000000000000000000000",
"data_sensitivity_information": "",
"session_server_principal_name": "",
"connection_id": "98B4F537-0F82-49E3-AB08-B9D33B5893EF",
"audit_schema_version": 1,
"database_principal_id": 0,
"server_principal_sid": null,
"user_defined_event_id": 0,
"host_name": "EC2AMAZ-NFUJJNO"
}
}
]
}

Note
If engine-native audit fields aren't enabled, then the last field in the JSON document is
"engineNativeAuditFields": { }.

Example Activity event record of a CREATE TABLE statement

The following example shows a CREATE TABLE event for your Oracle database.

{
"class": "Standard",
"clientApplication": "sqlplus@ip-12-34-5-678 (TNS V1-V3)",
"command": "CREATE TABLE",
"commandText": "CREATE TABLE persons(\n person_id NUMBER GENERATED BY DEFAULT AS
IDENTITY,\n first_name VARCHAR2(50) NOT NULL,\n last_name VARCHAR2(50) NOT NULL,\n
PRIMARY KEY(person_id)\n)",
"dbid": "0123456789",
"databaseName": "ORCL",
"dbProtocol": "oracle",
"dbUserName": "TEST",
"endTime": null,
"errorMessage": null,
"exitCode": 0,
"logTime": "2021-01-15 00:22:49.535239",
"netProtocol": "beq",
"objectName": "PERSONS",
"objectType": "TEST",
"paramList": [],
"pid": 17687,
"remoteHost": "123.456.789.0",
"remotePort": null,
"rowCount": null,
"serverHost": "987.654.321.01",
"serverType": "oracle",


"serverVersion": "19.0.0.0.ru-2020-01.rur-2020-01.r1.EE.3",
"serviceName": "oracle-ee",
"sessionId": 1234567890,
"startTime": null,
"statementId": 43,
"substatementId": null,
"transactionId": "090011007F0D0000",
"engineNativeAuditFields": {
"UNIFIED_AUDIT_POLICIES": "TEST_POL_EVERYTHING",
"FGA_POLICY_NAME": null,
"DV_OBJECT_STATUS": null,
"SYSTEM_PRIVILEGE_USED": "CREATE SEQUENCE, CREATE TABLE",
"OLS_LABEL_COMPONENT_TYPE": null,
"XS_SESSIONID": null,
"ADDITIONAL_INFO": null,
"INSTANCE_ID": 1,
"DV_COMMENT": null,
"RMAN_SESSION_STAMP": null,
"NEW_NAME": null,
"DV_ACTION_NAME": null,
"OLS_PROGRAM_UNIT_NAME": null,
"OLS_STRING_LABEL": null,
"RMAN_SESSION_RECID": null,
"OBJECT_PRIVILEGES": null,
"OLS_OLD_VALUE": null,
"XS_TARGET_PRINCIPAL_NAME": null,
"XS_NS_ATTRIBUTE": null,
"XS_NS_NAME": null,
"DBLINK_INFO": null,
"AUTHENTICATION_TYPE": "(TYPE\u003d(DATABASE));(CLIENT ADDRESS\u003d((PROTOCOL
\u003dbeq)(HOST\u003d123.456.789.0)));",
"OBJECT_EDITION": null,
"OLS_PRIVILEGES_GRANTED": null,
"EXCLUDED_USER": null,
"DV_ACTION_OBJECT_NAME": null,
"OLS_LABEL_COMPONENT_NAME": null,
"EXCLUDED_SCHEMA": null,
"DP_TEXT_PARAMETERS1": null,
"XS_USER_NAME": null,
"XS_ENABLED_ROLE": null,
"XS_NS_ATTRIBUTE_NEW_VAL": null,
"DIRECT_PATH_NUM_COLUMNS_LOADED": null,
"AUDIT_OPTION": null,
"DV_EXTENDED_ACTION_CODE": null,
"XS_PACKAGE_NAME": null,
"OLS_NEW_VALUE": null,
"DV_RETURN_CODE": null,
"XS_CALLBACK_EVENT_TYPE": null,
"USERHOST": "ip-10-13-0-122",
"GLOBAL_USERID": null,
"CLIENT_IDENTIFIER": null,
"RMAN_OPERATION": null,
"TERMINAL": "pts/1",
"OS_USERNAME": "rdsdb",
"OLS_MAX_READ_LABEL": null,
"XS_PROXY_USER_NAME": null,
"XS_DATASEC_POLICY_NAME": null,
"DV_FACTOR_CONTEXT": null,
"OLS_MAX_WRITE_LABEL": null,
"OLS_PARENT_GROUP_NAME": null,
"EXCLUDED_OBJECT": null,
"DV_RULE_SET_NAME": null,
"EXTERNAL_USERID": null,
"EXECUTION_ID": null,
"ROLE": null,
"PROXY_SESSIONID": 0,


"DP_BOOLEAN_PARAMETERS1": null,
"OLS_POLICY_NAME": null,
"OLS_GRANTEE": null,
"OLS_MIN_WRITE_LABEL": null,
"APPLICATION_CONTEXTS": null,
"XS_SCHEMA_NAME": null,
"DV_GRANTEE": null,
"XS_COOKIE": null,
"DBPROXY_USERNAME": null,
"DV_ACTION_CODE": null,
"OLS_PRIVILEGES_USED": null,
"RMAN_DEVICE_TYPE": null,
"XS_NS_ATTRIBUTE_OLD_VAL": null,
"TARGET_USER": null,
"XS_ENTITY_TYPE": null,
"ENTRY_ID": 12,
"XS_PROCEDURE_NAME": null,
"XS_INACTIVITY_TIMEOUT": null,
"RMAN_OBJECT_TYPE": null,
"SYSTEM_PRIVILEGE": null,
"NEW_SCHEMA": null,
"SCN": 5133083
}
}

The following example shows a CREATE TABLE event for your SQL Server database.

{
"type": "DatabaseActivityMonitoringRecord",
"clusterId": "",
"instanceId": "db-4JCWQLUZVFYP7DIWP6JVQ77O3Q",
"databaseActivityEventList": [
{
"class": "SCHEMA",
"clientApplication": "Microsoft SQL Server Management Studio - Query",
"command": "ALTER",
"commandText": "Create table [testDB].[dbo].[TestTable2](\r\ntextA
varchar(6000),\r\n textB varchar(6000)\r\n)",
"databaseName": "testDB",
"dbProtocol": "SQLSERVER",
"dbUserName": "test",
"endTime": null,
"errorMessage": null,
"exitCode": 1,
"logTime": "2022-10-06 21:44:38.4120677+00",
"netProtocol": null,
"objectName": "dbo",
"objectType": "SCHEMA",
"paramList": null,
"pid": null,
"remoteHost": "local machine",
"remotePort": null,
"rowCount": 0,
"serverHost": "172.31.30.159",
"serverType": "SQLSERVER",
"serverVersion": "15.00.4073.23.v1.R1",
"serviceName": "sqlserver-ee",
"sessionId": 84,
"startTime": null,
"statementId": "0x5178d33d56e95e419558b9607158a5bd",
"substatementId": 1,
"transactionId": "4561864",
"type": "record",
"engineNativeAuditFields": {
"target_database_principal_id": 0,


"target_server_principal_id": 0,
"target_database_principal_name": "",
"server_principal_id": 2,
"user_defined_information": "",
"response_rows": 0,
"database_principal_name": "dbo",
"target_server_principal_name": "",
"schema_name": "",
"is_column_permission": false,
"object_id": 1,
"server_instance_name": "EC2AMAZ-NFUJJNO",
"target_server_principal_sid": null,
"additional_information": "",
"duration_milliseconds": 0,
"permission_bitmask": "0x00000000000000000000000000000000",
"data_sensitivity_information": "",
"session_server_principal_name": "test",
"connection_id": "EE1FE3FD-EF2C-41FD-AF45-9051E0CD983A",
"audit_schema_version": 1,
"database_principal_id": 1,
"server_principal_sid":
"0x010500000000000515000000bdc2795e2d0717901ba6998cf4010000",
"user_defined_event_id": 0,
"host_name": "EC2AMAZ-NFUJJNO"
}
}
]
}

Example Activity event record of a SELECT statement

The following example shows a SELECT event for your Oracle DB.

{
"class": "Standard",
"clientApplication": "sqlplus@ip-12-34-5-678 (TNS V1-V3)",
"command": "SELECT",
"commandText": "select count(*) from persons",
"databaseName": "1234567890",
"dbProtocol": "oracle",
"dbUserName": "TEST",
"endTime": null,
"errorMessage": null,
"exitCode": 0,
"logTime": "2021-01-15 00:25:18.850375",
"netProtocol": "beq",
"objectName": "PERSONS",
"objectType": "TEST",
"paramList": [],
"pid": 17687,
"remoteHost": "123.456.789.0",
"remotePort": null,
"rowCount": null,
"serverHost": "987.654.321.09",
"serverType": "oracle",
"serverVersion": "19.0.0.0.ru-2020-01.rur-2020-01.r1.EE.3",
"serviceName": "oracle-ee",
"sessionId": 1080639707,
"startTime": null,
"statementId": 44,
"substatementId": null,
"transactionId": null,
"engineNativeAuditFields": {
"UNIFIED_AUDIT_POLICIES": "TEST_POL_EVERYTHING",


"FGA_POLICY_NAME": null,
"DV_OBJECT_STATUS": null,
"SYSTEM_PRIVILEGE_USED": null,
"OLS_LABEL_COMPONENT_TYPE": null,
"XS_SESSIONID": null,
"ADDITIONAL_INFO": null,
"INSTANCE_ID": 1,
"DV_COMMENT": null,
"RMAN_SESSION_STAMP": null,
"NEW_NAME": null,
"DV_ACTION_NAME": null,
"OLS_PROGRAM_UNIT_NAME": null,
"OLS_STRING_LABEL": null,
"RMAN_SESSION_RECID": null,
"OBJECT_PRIVILEGES": null,
"OLS_OLD_VALUE": null,
"XS_TARGET_PRINCIPAL_NAME": null,
"XS_NS_ATTRIBUTE": null,
"XS_NS_NAME": null,
"DBLINK_INFO": null,
"AUTHENTICATION_TYPE": "(TYPE\u003d(DATABASE));(CLIENT ADDRESS\u003d((PROTOCOL
\u003dbeq)(HOST\u003d123.456.789.0)));",
"OBJECT_EDITION": null,
"OLS_PRIVILEGES_GRANTED": null,
"EXCLUDED_USER": null,
"DV_ACTION_OBJECT_NAME": null,
"OLS_LABEL_COMPONENT_NAME": null,
"EXCLUDED_SCHEMA": null,
"DP_TEXT_PARAMETERS1": null,
"XS_USER_NAME": null,
"XS_ENABLED_ROLE": null,
"XS_NS_ATTRIBUTE_NEW_VAL": null,
"DIRECT_PATH_NUM_COLUMNS_LOADED": null,
"AUDIT_OPTION": null,
"DV_EXTENDED_ACTION_CODE": null,
"XS_PACKAGE_NAME": null,
"OLS_NEW_VALUE": null,
"DV_RETURN_CODE": null,
"XS_CALLBACK_EVENT_TYPE": null,
"USERHOST": "ip-12-34-5-678",
"GLOBAL_USERID": null,
"CLIENT_IDENTIFIER": null,
"RMAN_OPERATION": null,
"TERMINAL": "pts/1",
"OS_USERNAME": "rdsdb",
"OLS_MAX_READ_LABEL": null,
"XS_PROXY_USER_NAME": null,
"XS_DATASEC_POLICY_NAME": null,
"DV_FACTOR_CONTEXT": null,
"OLS_MAX_WRITE_LABEL": null,
"OLS_PARENT_GROUP_NAME": null,
"EXCLUDED_OBJECT": null,
"DV_RULE_SET_NAME": null,
"EXTERNAL_USERID": null,
"EXECUTION_ID": null,
"ROLE": null,
"PROXY_SESSIONID": 0,
"DP_BOOLEAN_PARAMETERS1": null,
"OLS_POLICY_NAME": null,
"OLS_GRANTEE": null,
"OLS_MIN_WRITE_LABEL": null,
"APPLICATION_CONTEXTS": null,
"XS_SCHEMA_NAME": null,
"DV_GRANTEE": null,
"XS_COOKIE": null,
"DBPROXY_USERNAME": null,


"DV_ACTION_CODE": null,
"OLS_PRIVILEGES_USED": null,
"RMAN_DEVICE_TYPE": null,
"XS_NS_ATTRIBUTE_OLD_VAL": null,
"TARGET_USER": null,
"XS_ENTITY_TYPE": null,
"ENTRY_ID": 13,
"XS_PROCEDURE_NAME": null,
"XS_INACTIVITY_TIMEOUT": null,
"RMAN_OBJECT_TYPE": null,
"SYSTEM_PRIVILEGE": null,
"NEW_SCHEMA": null,
"SCN": 5136972
}
}

The following example shows a SELECT event for your SQL Server DB.

{
"type": "DatabaseActivityMonitoringRecord",
"clusterId": "",
"instanceId": "db-4JCWQLUZVFYP7DIWP6JVQ77O3Q",
"databaseActivityEventList": [
{
"class": "TABLE",
"clientApplication": "Microsoft SQL Server Management Studio - Query",
"command": "SELECT",
"commandText": "select * from [testDB].[dbo].[TestTable]",
"databaseName": "testDB",
"dbProtocol": "SQLSERVER",
"dbUserName": "test",
"endTime": null,
"errorMessage": null,
"exitCode": 1,
"logTime": "2022-10-06 21:24:59.9422268+00",
"netProtocol": null,
"objectName": "TestTable",
"objectType": "TABLE",
"paramList": null,
"pid": null,
"remoteHost": "local machine",
"remotePort": null,
"rowCount": 0,
"serverHost": "172.31.30.159",
"serverType": "SQLSERVER",
"serverVersion": "15.00.4073.23.v1.R1",
"serviceName": "sqlserver-ee",
"sessionId": 62,
"startTime": null,
"statementId": "0x03baed90412f564fad640ebe51f89b99",
"substatementId": 1,
"transactionId": "4532935",
"type": "record",
"engineNativeAuditFields": {
"target_database_principal_id": 0,
"target_server_principal_id": 0,
"target_database_principal_name": "",
"server_principal_id": 2,
"user_defined_information": "",
"response_rows": 0,
"database_principal_name": "dbo",
"target_server_principal_name": "",
"schema_name": "dbo",
"is_column_permission": true,
"object_id": 581577110,


"server_instance_name": "EC2AMAZ-NFUJJNO",
"target_server_principal_sid": null,
"additional_information": "",
"duration_milliseconds": 0,
"permission_bitmask": "0x00000000000000000000000000000001",
"data_sensitivity_information": "",
"session_server_principal_name": "test",
"connection_id": "AD3A5084-FB83-45C1-8334-E923459A8109",
"audit_schema_version": 1,
"database_principal_id": 1,
"server_principal_sid":
"0x010500000000000515000000bdc2795e2d0717901ba6998cf4010000",
"user_defined_event_id": 0,
"host_name": "EC2AMAZ-NFUJJNO"
}
}
]
}

DatabaseActivityMonitoringRecords JSON object


The database activity event records are in a JSON object that contains the following information.

• type (string) – The type of JSON record. The value is DatabaseActivityMonitoringRecords.
• version (string) – The version of the database activity monitoring records. Oracle DB uses version 1.3
and SQL Server uses version 1.4. These engine versions introduce the engineNativeAuditFields
JSON object.
• databaseActivityEvents (p. 966) (string) – A JSON object that contains the activity events.
• key (string) – An encryption key that you use to decrypt the databaseActivityEventList (p. 968).

databaseActivityEvents JSON Object


The databaseActivityEvents JSON object contains the following information.

Top-level fields in JSON record

Each event in the audit log is wrapped inside a record in JSON format. This record contains the following
fields.

type

This field always has the value DatabaseActivityMonitoringRecords.


version

This field represents the version of the database activity stream data protocol or contract. It defines
which fields are available.


databaseActivityEvents

An encrypted string representing one or more activity events. It's represented as a base64 byte
array. When you decrypt the string, the result is a record in JSON format with fields as shown in the
examples in this section.
key

The encrypted data key used to encrypt the databaseActivityEvents string. This is the same
AWS KMS key that you provided when you started the database activity stream.

The following examples show the format of this record for version 1.3 (Oracle) and version 1.4 (SQL Server).

{
"type":"DatabaseActivityMonitoringRecords",
"version":"1.3",
"databaseActivityEvents":"encrypted audit records",
"key":"encrypted key"
}

"type":"DatabaseActivityMonitoringRecords",
"version":"1.4",
"databaseActivityEvents":"encrypted audit records",
"key":"encrypted key"

Take the following steps to decrypt the contents of the databaseActivityEvents field:

1. Decrypt the value in the key JSON field using the KMS key that you provided when starting the
database activity stream. Doing so returns the data encryption key in clear text.
2. Base64-decode the value in the databaseActivityEvents JSON field to obtain the ciphertext, in
binary format, of the audit payload.
3. Decrypt the binary ciphertext with the data encryption key that you decoded in the first step.
4. Decompress the decrypted payload.
• The encrypted payload is in the databaseActivityEvents field.
• The databaseActivityEventList field contains an array of audit records. The type fields in the
array can be record or heartbeat.

The audit log activity event record is a JSON object that contains the following information.

• type (string) – The type of JSON record. The value is DatabaseActivityMonitoringRecord.
• instanceId (string) – The DB instance resource identifier. It corresponds to the DB instance attribute
DbiResourceId.
• databaseActivityEventList (p. 968) (string) – An array of activity audit records or heartbeat messages.


databaseActivityEventList JSON array


The audit log payload is an encrypted databaseActivityEventList JSON array. The following lists
describe, in alphabetical order, the fields for each activity event in the decrypted
DatabaseActivityEventList array of an audit log.

When unified auditing is enabled in Oracle Database, the audit records are populated in this new
audit trail. The UNIFIED_AUDIT_TRAIL view displays audit records in tabular form by retrieving
the audit records from the audit trail. When you start a database activity stream, a column in
UNIFIED_AUDIT_TRAIL maps to a field in the databaseActivityEventList array.
Important
The event structure is subject to change. Amazon RDS might add new fields to activity events
in the future. In applications that parse the JSON data, make sure that your code can ignore or
take appropriate actions for unknown field names.

databaseActivityEventList fields for Amazon RDS for Oracle

• class (string; source: AUDIT_TYPE column in UNIFIED_AUDIT_TRAIL) – The class of activity event. This
corresponds to the AUDIT_TYPE column in the UNIFIED_AUDIT_TRAIL view. Valid values for Amazon
RDS for Oracle are Standard, FineGrainedAudit, XS, Database Vault, Label Security, RMAN_AUDIT,
Datapump, and Direct path API. For more information, see UNIFIED_AUDIT_TRAIL in the Oracle
documentation.
• clientApplication (string; source: CLIENT_PROGRAM_NAME in UNIFIED_AUDIT_TRAIL) – The application
the client used to connect, as reported by the client. The client doesn't have to provide this
information, so the value can be null. A sample value is JDBC Thin Client.
• command (string; source: ACTION_NAME column in UNIFIED_AUDIT_TRAIL) – Name of the action executed
by the user. To understand the complete action, read both the command name and the AUDIT_TYPE
value. A sample value is ALTER DATABASE.
• commandText (string; source: SQL_TEXT column in UNIFIED_AUDIT_TRAIL) – The SQL statement
associated with the event. A sample value is ALTER DATABASE BEGIN BACKUP.
• databaseName (string; source: NAME column in V$DATABASE) – The name of the database.
• dbid (number; source: DBID column in UNIFIED_AUDIT_TRAIL) – Numerical identifier for the database. A
sample value is 1559204751.
• dbProtocol (string; source: N/A) – The database protocol. In this beta, the value is oracle.
• dbUserName (string; source: DBUSERNAME column in UNIFIED_AUDIT_TRAIL) – Name of the database user
whose actions were audited. A sample value is RDSADMIN.
• endTime (string; source: N/A) – This field isn't used for RDS for Oracle and is always null.
• engineNativeAuditFields (object; source: UNIFIED_AUDIT_TRAIL) – By default, this object is empty.
When you start the activity stream with the --engine-native-audit-fields-included option,
this object includes the following columns and their values:

ADDITIONAL_INFO
APPLICATION_CONTEXTS
AUDIT_OPTION
AUTHENTICATION_TYPE
CLIENT_IDENTIFIER
CURRENT_USER
DBLINK_INFO
DBPROXY_USERNAME
DIRECT_PATH_NUM_COLUMNS_LOADED
DP_BOOLEAN_PARAMETERS1
DP_TEXT_PARAMETERS1
DV_ACTION_CODE
DV_ACTION_NAME
DV_ACTION_OBJECT_NAME
DV_COMMENT
DV_EXTENDED_ACTION_CODE
DV_FACTOR_CONTEXT
DV_GRANTEE
DV_OBJECT_STATUS
DV_RETURN_CODE
DV_RULE_SET_NAME
ENTRY_ID
EXCLUDED_OBJECT
EXCLUDED_SCHEMA
EXCLUDED_USER
EXECUTION_ID
EXTERNAL_USERID
FGA_POLICY_NAME
GLOBAL_USERID
INSTANCE_ID
KSACL_SERVICE_NAME
KSACL_SOURCE_LOCATION
KSACL_USER_NAME
NEW_NAME
NEW_SCHEMA
OBJECT_EDITION
OBJECT_PRIVILEGES
OLS_GRANTEE
OLS_LABEL_COMPONENT_NAME
OLS_LABEL_COMPONENT_TYPE
OLS_MAX_READ_LABEL
OLS_MAX_WRITE_LABEL
OLS_MIN_WRITE_LABEL
OLS_NEW_VALUE
OLS_OLD_VALUE
OLS_PARENT_GROUP_NAME
OLS_POLICY_NAME
OLS_PRIVILEGES_GRANTED
OLS_PRIVILEGES_USED
OLS_PROGRAM_UNIT_NAME
OLS_STRING_LABEL

OS_USERNAME
PROTOCOL_ACTION_NAME
PROTOCOL_MESSAGE
PROTOCOL_RETURN_CODE
PROTOCOL_SESSION_ID
PROTOCOL_USERHOST
PROXY_SESSIONID
RLS_INFO
RMAN_DEVICE_TYPE
RMAN_OBJECT_TYPE
RMAN_OPERATION
RMAN_SESSION_RECID
RMAN_SESSION_STAMP
ROLE
SCN
SYSTEM_PRIVILEGE
SYSTEM_PRIVILEGE_USED
TARGET_USER
TERMINAL
UNIFIED_AUDIT_POLICIES
USERHOST
XS_CALLBACK_EVENT_TYPE
XS_COOKIE
XS_DATASEC_POLICY_NAME
XS_ENABLED_ROLE
XS_ENTITY_TYPE
XS_INACTIVITY_TIMEOUT
XS_NS_ATTRIBUTE
XS_NS_ATTRIBUTE_NEW_VAL
XS_NS_ATTRIBUTE_OLD_VAL
XS_NS_NAME
XS_PACKAGE_NAME
XS_PROCEDURE_NAME
XS_PROXY_USER_NAME
XS_SCHEMA_NAME
XS_SESSIONID
XS_TARGET_PRINCIPAL_NAME
XS_USER_NAME

For more information, see UNIFIED_AUDIT_TRAIL in the Oracle Database documentation.

• errorMessage (string; source: N/A) – This field isn't used for RDS for Oracle and is always null.
• exitCode (number; source: RETURN_CODE column in UNIFIED_AUDIT_TRAIL) – Oracle Database error code
generated by the action. If the action succeeded, the value is 0.
• logTime (string; source: EVENT_TIMESTAMP_UTC column in UNIFIED_AUDIT_TRAIL) – Timestamp of the
creation of the audit trail entry. A sample value is 2020-11-27 06:56:14.981404.
• netProtocol (string; source: AUTHENTICATION_TYPE column in UNIFIED_AUDIT_TRAIL) – The network
communication protocol. A sample value is TCP.
• objectName (string; source: OBJECT_NAME column in UNIFIED_AUDIT_TRAIL) – The name of the object
affected by the action. A sample value is employees.
• objectType (string; source: OBJECT_SCHEMA column in UNIFIED_AUDIT_TRAIL) – The schema name of the
object affected by the action. A sample value is hr.
• paramList (list; source: SQL_BINDS column in UNIFIED_AUDIT_TRAIL) – The list of bind variables, if any,
associated with SQL_TEXT. A sample value is parameter_1,parameter_2.
• pid (number; source: OS_PROCESS column in UNIFIED_AUDIT_TRAIL) – Operating system process identifier
of the Oracle database process. A sample value is 22396.
• remoteHost (string; source: AUTHENTICATION_TYPE column in UNIFIED_AUDIT_TRAIL) – Either the client IP
address or the name of the host from which the session was spawned. A sample value is
123.456.789.123.
• remotePort (string; source: AUTHENTICATION_TYPE column in UNIFIED_AUDIT_TRAIL) – The client port
number. A typical value in Oracle Database environments is 1521.
• rowCount (number; source: N/A) – This field isn't used for RDS for Oracle and is always null.
• serverHost (string; source: database host) – The IP address of the database server host. A sample value
is 123.456.789.123.
• serverType (string; source: N/A) – The database server type. The value is always ORACLE.
• serverVersion (string; source: database host) – The Amazon RDS for Oracle version, Release Update
(RU), and Release Update Revision (RUR). A sample value is 19.0.0.0.ru-2020-01.rur-2020-01.r1.
• serviceName (string; source: database host) – The name of the service. A sample value is oracle-ee.
• sessionId (number; source: SESSIONID column in UNIFIED_AUDIT_TRAIL) – The session identifier of the
audit. An example is 1894327130.
• startTime (string; source: N/A) – This field isn't used for RDS for Oracle and is always null.
• statementId (number; source: STATEMENT_ID column in UNIFIED_AUDIT_TRAIL) – Numeric ID for each
statement run. A statement can cause many actions. A sample value is 142197.
• substatementId (N/A; source: N/A) – This field isn't used for RDS for Oracle and is always null.
• transactionId (string; source: TRANSACTION_ID column in UNIFIED_AUDIT_TRAIL) – The identifier of the
transaction in which the object is modified. A sample value is 02000800D5030000.

databaseActivityEventList fields for Amazon RDS for SQL Server

• class (string; source: sys.fn_get_audit_file.class_type mapped to
sys.dm_audit_class_type_map.class_type_desc) – The class of activity event. For more information, see
SQL Server Audit (Database Engine) in the Microsoft documentation.
• clientApplication (string; source: sys.fn_get_audit_file.application_name) – The application that the
client connects as, as reported by the client (SQL Server version 14 and higher). This field is null in SQL
Server version 13.
• command (string; source: sys.fn_get_audit_file.action_id mapped to sys.dm_audit_actions.name) – The
general category of the SQL statement. The value for this field depends on the value of the class.
• commandText (string; source: sys.fn_get_audit_file.statement) – This field indicates the SQL statement.
• databaseName (string; source: sys.fn_get_audit_file.database_name) – Name of the database.
• dbProtocol (string; source: N/A) – The database protocol. This value is SQLSERVER.
• dbUserName (string; source: sys.fn_get_audit_file.server_principal_name) – The database user for the
client authentication.
• endTime (string; source: N/A) – This field isn't used by Amazon RDS for SQL Server and the value is null.
• engineNativeAuditFields (object; source: every field in sys.fn_get_audit_file that isn't listed elsewhere
in this list) – By default, this object is empty. When you start the activity stream with the
--engine-native-audit-fields-included option, this object includes other native engine audit
fields, which are not returned by this JSON map.
• errorMessage (string; source: N/A) – This field isn't used by Amazon RDS for SQL Server and the value is
null.
• exitCode (integer; source: sys.fn_get_audit_file.succeeded) – Indicates whether the action that started
the event succeeded. This field can't be null. For all the events except login events, this field reports
whether the permission check succeeded or failed, but not whether the operation succeeded or failed.
Values include 0 – Fail and 1 – Success.
• logTime (string; source: sys.fn_get_audit_file.event_time) – The event timestamp that is recorded by
the SQL Server.
• netProtocol (string; source: N/A) – This field isn't used by Amazon RDS for SQL Server and the value is
null.
• objectName (string; source: sys.fn_get_audit_file.object_name) – The name of the database object if
the SQL statement is operating on an object.
• objectType (string; source: sys.fn_get_audit_file.class_type mapped to
sys.dm_audit_class_type_map.class_type_desc) – The database object type if the SQL statement is
operating on an object type.
• paramList (string; source: N/A) – This field isn't used by Amazon RDS for SQL Server and the value is
null.
• pid (integer; source: N/A) – This field isn't used by Amazon RDS for SQL Server and the value is null.
• remoteHost (string; source: sys.fn_get_audit_file.client_ip) – The IP address or hostname of the client
that issued the SQL statement (SQL Server version 14 and higher). This field is null in SQL Server
version 13.
• remotePort (integer; source: N/A) – This field isn't used by Amazon RDS for SQL Server and the value is
null.
• rowCount (integer; source: sys.fn_get_audit_file.affected_rows) – The number of table rows affected by
the SQL statement (SQL Server version 14 and higher). This field is null in SQL Server version 13.
• serverHost (string; source: database host) – The IP address of the host database server.
• serverType (string; source: N/A) – The database server type. The value is SQLSERVER.
• serverVersion (string; source: database host) – The database server version, for example,
15.00.4073.23.v1.R1 for SQL Server 2017.
• serviceName (string; source: database host) – The name of the service. An example value is
sqlserver-ee.
• sessionId (integer; source: sys.fn_get_audit_file.session_id) – Unique identifier of the session.
• startTime (string; source: N/A) – This field isn't used by Amazon RDS for SQL Server and the value is
null.
• statementId (string; source: sys.fn_get_audit_file.sequence_group_id) – A unique identifier for the
client's SQL statement. The identifier is different for each event that is generated. A sample value is
0x38eaf4156267184094bb82071aaab644.
• substatementId (integer; source: sys.fn_get_audit_file.sequence_number) – An identifier to determine
the sequence number for a statement. This identifier helps when large records are split into multiple
records.
• transactionId (integer; source: sys.fn_get_audit_file.transaction_id) – An identifier of a transaction. If
there aren't any active transactions, the value is zero.
• type (string; source: generated by the database activity stream) – The type of event. The values are
record or heartbeat.

Processing a database activity stream using the AWS SDK


You can programmatically process an activity stream by using the AWS SDK.
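
For example, you can use the AWS CLI to identify the shards of the Kinesis data stream that backs the activity stream and pull raw records from it, as in the following minimal sketch. The stream name, Region, and shard ID shown here are placeholders; use the Kinesis stream name reported for your activity stream. The record payloads are encrypted and compressed, so in practice you decrypt and decompress them with the AWS SDK and your KMS key rather than reading them directly.

# List the shards of the Kinesis data stream that backs the activity stream
aws kinesis list-shards \
    --stream-name aws-rds-das-db-EXAMPLE \
    --region us-east-1

# Get a shard iterator, then read a batch of raw (encrypted) records
aws kinesis get-shard-iterator \
    --stream-name aws-rds-das-db-EXAMPLE \
    --shard-id shardId-000000000000 \
    --shard-iterator-type TRIM_HORIZON \
    --region us-east-1

aws kinesis get-records --shard-iterator iterator-from-previous-command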

Managing access to database activity streams


Any user with appropriate AWS Identity and Access Management (IAM) role privileges for database
activity streams can create, start, stop, and modify the activity stream settings for a DB instance. These
actions are included in the audit log of the stream. For best compliance practices, we recommend that
you don't provide these privileges to DBAs.

You set access to database activity streams using IAM policies. For more information about Amazon RDS
authentication, see Identity and access management for Amazon RDS (p. 2606). For more information
about creating IAM policies, see Creating and using an IAM policy for IAM database access (p. 2646).

Example Policy to allow configuring database activity streams

To give users fine-grained access to modify activity streams, use the service-specific operation context
keys rds:StartActivityStream and rds:StopActivityStream in an IAM policy. The following
IAM policy example allows a user or role to configure activity streams.

{
"Version":"2012-10-17",
"Statement":[
{
"Sid":"ConfigureActivityStreams",
"Effect":"Allow",
"Action": [
"rds:StartActivityStream",
"rds:StopActivityStream"
],
"Resource":"*"
}
]
}
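
A user or role that holds the preceding policy can then start and stop an activity stream with AWS CLI commands like the following sketch. The resource ARN, KMS key ID, and Region are placeholders for your own values; the --engine-native-audit-fields-included option is the one referenced in the engineNativeAuditFields field description.

# Start an asynchronous activity stream on a DB instance
aws rds start-activity-stream \
    --resource-arn arn:aws:rds:us-east-1:123456789012:db:my-sql-server-db \
    --mode async \
    --kms-key-id my-kms-key-id \
    --engine-native-audit-fields-included \
    --apply-immediately

# Stop the activity stream on the same DB instance
aws rds stop-activity-stream \
    --resource-arn arn:aws:rds:us-east-1:123456789012:db:my-sql-server-db \
    --apply-immediately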

Example Policy to allow starting database activity streams

The following IAM policy example allows a user or role to start activity streams.

{
"Version":"2012-10-17",
"Statement":[
{
"Sid":"AllowStartActivityStreams",
"Effect":"Allow",
"Action":"rds:StartActivityStream",
"Resource":"*"
}
]
}

Example Policy to allow stopping database activity streams

The following IAM policy example allows a user or role to stop activity streams.

{
"Version":"2012-10-17",
"Statement":[
{
"Sid":"AllowStopActivityStreams",
"Effect":"Allow",
"Action":"rds:StopActivityStream",
"Resource":"*"
}
]
}

Example Policy to deny starting database activity streams

The following IAM policy example prevents a user or role from starting activity streams.

{
"Version":"2012-10-17",
"Statement":[
{
"Sid":"DenyStartActivityStreams",
"Effect":"Deny",
"Action":"rds:StartActivityStream",
"Resource":"*"
}
]
}


Example Policy to deny stopping database activity streams

The following IAM policy example prevents a user or role from stopping activity streams.

{
"Version":"2012-10-17",
"Statement":[
{
"Sid":"DenyStopActivityStreams",
"Effect":"Deny",
"Action":"rds:StopActivityStream",
"Resource":"*"
}
]
}


Working with Amazon RDS Custom


Amazon RDS Custom automates database administration tasks and operations. RDS Custom makes it
possible for you as a database administrator to access and customize your database environment and
operating system. With RDS Custom, you can customize to meet the requirements of legacy, custom, and
packaged applications.

For the latest webinars and blogs about RDS Custom, see Amazon RDS Custom resources.

Topics
• Addressing the challenge of database customization (p. 978)
• Management model and benefits for Amazon RDS Custom (p. 979)
• Amazon RDS Custom architecture (p. 981)
• Security in Amazon RDS Custom (p. 988)
• Working with RDS Custom for Oracle (p. 993)
• Working with RDS Custom for SQL Server (p. 1087)

Addressing the challenge of database


customization
Amazon RDS Custom brings the benefits of Amazon RDS to a market that can't easily move to a fully
managed service because of customizations that are required with third-party applications. Amazon RDS
Custom saves administrative time, is durable, and scales with your business.

If you need the entire database and operating system to be fully managed by AWS, we recommend
Amazon RDS. If you need administrative rights to the database and underlying operating system to make
dependent applications available, Amazon RDS Custom is the better choice. If you want full management
responsibility and simply need a managed compute service, the best option is self-managing your
commercial databases on Amazon EC2.

To deliver a managed service experience, Amazon RDS doesn't let you access the underlying host.
Amazon RDS also restricts access to some procedures and objects that require high-level privileges.
However, for some applications, you might need to perform operations as a privileged operating system
(OS) user.

For example, you might need to do the following:

• Install custom database and OS patches and packages.


• Configure specific database settings.
• Configure file systems to share files directly with your applications.

Previously, if you needed to customize your application, you had to deploy your database on-premises
or on Amazon EC2. In this case, you bear most or all of the responsibility for database management, as
summarized in the following table.

Feature                        On-premises       Amazon EC2        Amazon RDS
                               responsibility    responsibility    responsibility

Application optimization       Customer          Customer          Customer
Scaling                        Customer          Customer          AWS
High availability              Customer          Customer          AWS
Database backups               Customer          Customer          AWS
Database software patching     Customer          Customer          AWS
Database software install      Customer          Customer          AWS
OS patching                    Customer          Customer          AWS
OS installation                Customer          Customer          AWS
Server maintenance             Customer          AWS               AWS
Hardware lifecycle             Customer          AWS               AWS
Power, network, and cooling    Customer          AWS               AWS

When you manage database software yourself, you gain more control, but you're also more prone to
user errors. For example, when you make changes manually, you might accidentally cause application
downtime. You might spend hours checking every change to identify and fix an issue. Ideally, you want a
managed database service that automates common DBA tasks, but also supports privileged access to the
database and underlying operating system.

Management model and benefits for Amazon RDS


Custom
Amazon RDS Custom is a managed database service for legacy, custom, and packaged applications that
require access to the underlying operating system and database environment. RDS Custom automates
setup, operation, and scaling of databases in the AWS Cloud while granting you access to the database
and underlying operating system. With this access, you can configure settings, install patches, and enable
native features to meet the dependent application's requirements. With RDS Custom, you can run your
database workload using the AWS Management Console or the AWS CLI.

RDS Custom supports only the Oracle Database and Microsoft SQL Server DB engines.

Topics
• Shared responsibility model in RDS Custom (p. 979)
• Support perimeter and unsupported configurations in RDS Custom (p. 981)
• Key benefits of RDS Custom (p. 981)

Shared responsibility model in RDS Custom


With RDS Custom, you use the managed features of Amazon RDS, but you manage the host
and customize the OS as you do in Amazon EC2. You take on additional database management
responsibilities beyond what you do in Amazon RDS. The result is that you have more control over
database and DB instance management than you do in Amazon RDS, while still benefiting from RDS
automation.

Shared responsibility means the following:

1. You own part of the process when using an RDS Custom feature.

For example, in RDS Custom for Oracle, you control which Oracle database patches to use and when
to apply them to your DB instances.
2. You are responsible for making sure that any customizations to RDS Custom features work correctly.

To help protect against invalid customization, RDS Custom has automation software that runs outside
of your DB instance. If your underlying Amazon EC2 instance becomes impaired, RDS Custom attempts
to resolve these problems automatically by either rebooting or replacing the EC2 instance. The
only user-visible change is a new IP address. For more information, see Amazon RDS Custom host
replacement (p. 983).

The following table details the shared responsibility model for different features of RDS Custom.

Feature                        Amazon EC2        Amazon RDS        RDS Custom for Oracle   RDS Custom for SQL Server
                               responsibility    responsibility    responsibility          responsibility

Application optimization       Customer          Customer          Customer                Customer
Scaling                        Customer          AWS               Shared                  Shared
High availability              Customer          AWS               Customer                Customer
Database backups               Customer          AWS               Shared                  Shared
Database software patching     Customer          AWS               Shared                  AWS
Database software install      Customer          AWS               Shared                  AWS
OS patching                    Customer          AWS               Customer                AWS
OS installation                Customer          AWS               Shared                  AWS
Server maintenance             AWS               AWS               AWS                     AWS
Hardware lifecycle             AWS               AWS               AWS                     AWS
Power, network, and cooling    AWS               AWS               AWS                     AWS

You can create an RDS Custom DB instance using Microsoft SQL Server. In this case:

• You don't manage your own media.


• You don't need to purchase SQL Server licenses separately. AWS holds the license for the SQL Server
database software.

You can create an RDS Custom DB instance using Oracle Database. In this case, you do the following:


• Manage your own media.

When using RDS Custom, you upload your own database installation files and patches. You create
a custom engine version (CEV) from these files. Then you can create an RDS Custom DB instance by
using this CEV.
• Manage your own licenses.

You bring your own Oracle Database licenses and manage licenses by yourself.

Support perimeter and unsupported configurations


in RDS Custom
RDS Custom provides a monitoring capability called the support perimeter. This feature ensures that
your host and database environment are configured correctly. If you make a change that causes
your DB instance to go outside the support perimeter, RDS Custom changes the instance status
to unsupported-configuration until you manually fix the configuration problems. For more
information, see RDS Custom support perimeter (p. 985).

Key benefits of RDS Custom


With RDS Custom, you can do the following:

• Automate many of the same administrative tasks as Amazon RDS, including the following:
• Lifecycle management of databases
• Automated backups and point-in-time recovery (PITR)
• Monitoring the health of RDS Custom DB instances and observing changes to the infrastructure,
operating system, and databases
• Notification or taking action to fix issues depending on disruption to the DB instance
• Install third-party applications.

You can install software to run custom applications and agents. Because you have privileged access to
the host, you can modify file systems to support legacy applications.
• Install custom patches.

You can apply custom database patches or modify OS packages on your RDS Custom DB instances.
• Stage an on-premises database before moving it to a fully managed service.

If you manage your own on-premises database, you can stage the database to RDS Custom as-is.
After you familiarize yourself with the cloud environment, you can migrate your database to a fully
managed Amazon RDS DB instance.
• Create your own automation.

You can create, schedule, and run custom automation scripts for reporting, management, or diagnostic
tools.

Amazon RDS Custom architecture


Amazon RDS Custom architecture is based on Amazon RDS, with important differences. The following
diagram shows the key components of the RDS Custom architecture.


Topics
• VPC (p. 982)
• RDS Custom automation and monitoring (p. 983)
• Amazon S3 (p. 986)
• AWS CloudTrail (p. 986)

VPC
As in Amazon RDS, your RDS Custom DB instance resides in a virtual private cloud (VPC).


Your RDS Custom DB instance consists of the following main components:

• Amazon EC2 instance


• Instance endpoint
• Operating system installed on the Amazon EC2 instance
• Amazon EBS storage, which contains any additional file systems

RDS Custom automation and monitoring


RDS Custom has automation software that runs outside of the DB instance. This software communicates
with agents on the DB instance and with other components within the overall RDS Custom environment.

The RDS Custom monitoring and recovery features offer similar functionality to Amazon RDS. By
default, RDS Custom is in full automation mode. The automation software has the following primary
responsibilities:

• Collect metrics and send notifications


• Perform automatic instance recovery

An important responsibility of RDS Custom automation is responding to problems with your Amazon
EC2 instance. For various reasons, the host might become impaired or unreachable. RDS Custom resolves
these problems by either rebooting or replacing the Amazon EC2 instance.

Topics
• Amazon RDS Custom host replacement (p. 983)
• RDS Custom support perimeter (p. 985)

Amazon RDS Custom host replacement


If the Amazon EC2 host becomes impaired, RDS Custom attempts to reboot it. If this effort fails, RDS
Custom uses the same stop and start feature included in Amazon EC2. The only customer-visible change
when a host is replaced is a new public IP address.

Topics


• Stopping and starting the host (p. 984)


• Effects of host replacement (p. 984)
• Best practices for Amazon EC2 hosts (p. 984)

Stopping and starting the host


RDS Custom automatically takes the following steps, with no user intervention required:

1. Stops the Amazon EC2 host.

The EC2 instance performs a normal shutdown and stops running. Any Amazon EBS volumes remain
attached to the instance, and their data persists. Any data stored in the instance store volumes (not
supported on RDS Custom) or RAM of the host computer is gone.

For more information, see Stop and start your instance in the Amazon EC2 User Guide for Linux
Instances.
2. Starts the Amazon EC2 host.

The EC2 instance migrates to new underlying host hardware. In some cases, the RDS Custom DB
instance remains on the original host.

Effects of host replacement


In RDS Custom, you have full control over the root device volume and Amazon EBS storage volumes. The
root volume can contain important data and configurations that you don't want to lose.

RDS Custom for Oracle retains all database and customer data after the operation, including root volume
data. No user intervention is required. On RDS Custom for SQL Server, database data is retained, but any
data on the C: drive, including operating system and customer data, is lost.

After the replacement process, the Amazon EC2 host has a new public IP address. The host retains the
following:

• Instance ID
• Private IP addresses
• Elastic IP addresses
• Instance metadata
• Data storage volume data
• Root volume data (on RDS Custom for Oracle)

Best practices for Amazon EC2 hosts


The Amazon EC2 host replacement feature covers the majority of Amazon EC2 impairment scenarios. We
recommend that you adhere to the following best practices:

• Before you change your configuration or the operating system, back up your data, as shown in the
example following this list. If the root volume or operating system becomes corrupt, host replacement
can't repair it. Your only options are restoring from a DB snapshot or point-in-time recovery.
• Don't manually stop or terminate the physical Amazon EC2 host. Both actions result in the instance
being put outside the RDS Custom support perimeter.
• (RDS Custom for SQL Server) If you attach additional volumes to the Amazon EC2 host, configure
them to remount upon restart. If the host is impaired, RDS Custom might stop and start the host
automatically.
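
The following AWS CLI sketch shows one way to take that backup as a manual DB snapshot before you change the host. The instance and snapshot identifiers are placeholders; this is a minimal example, not a complete backup strategy.

# Take a manual snapshot of the RDS Custom DB instance before making host changes
aws rds create-db-snapshot \
    --db-instance-identifier my-custom-instance \
    --db-snapshot-identifier my-custom-instance-before-os-change

# Confirm that the snapshot reaches the available status
aws rds describe-db-snapshots \
    --db-snapshot-identifier my-custom-instance-before-os-change \
    --query "DBSnapshots[0].Status"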


RDS Custom support perimeter


RDS Custom provides an additional monitoring capability called the support perimeter. This additional
monitoring ensures that your RDS Custom DB instance uses a supported AWS infrastructure, operating
system, and database.

The support perimeter checks that your DB instance conforms to the requirements listed in Fixing
unsupported configurations in RDS Custom for Oracle (p. 1080) and Fixing unsupported configurations
in RDS Custom for SQL Server (p. 1172). If any of these requirements aren't met, RDS Custom considers
your DB instance to be outside of the support perimeter.

Topics
• Unsupported configurations in RDS Custom (p. 985)
• Troubleshooting unsupported configurations (p. 985)

Unsupported configurations in RDS Custom


When your DB instance is outside the support perimeter, RDS Custom changes the DB instance status to
unsupported-configuration and sends event notifications. After you fix the configuration problems,
RDS Custom changes the DB instance status back to available.
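
You can watch for this status change from the AWS CLI. The following sketch assumes a DB instance named my-custom-instance; describe-db-instances and describe-events are standard Amazon RDS CLI commands.

# Check the current status of the DB instance (for example, unsupported-configuration)
aws rds describe-db-instances \
    --db-instance-identifier my-custom-instance \
    --query "DBInstances[0].DBInstanceStatus"

# Review the event notifications sent for the instance over the last 24 hours
aws rds describe-events \
    --source-type db-instance \
    --source-identifier my-custom-instance \
    --duration 1440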

While your DB instance is in the unsupported-configuration state, the following is true:

• Your database is reachable. An exception is when the DB instance is in the unsupported-configuration
state because the database shut down unexpectedly.
• You can't modify your DB instance.
• You can't take DB snapshots.
• Automatic backups aren't created.
• For RDS Custom for SQL Server DB instances only, RDS Custom doesn't replace the underlying Amazon
EC2 instance if it becomes impaired. For more information about host replacement, see Amazon RDS
Custom host replacement (p. 983).
• You can delete your DB instance, but most other RDS Custom API operations aren't available.
• RDS Custom continues to support point-in-time recovery (PITR) by archiving redo log files and
uploading them to Amazon S3. PITR in an unsupported-configuration state differs in the
following ways:
• PITR can take a long time to completely restore to a new RDS Custom DB instance. This situation
occurs because you can't take either automated or manual snapshots while the instance is in the
unsupported-configuration state.
• PITR has to replay more redo logs starting from the most recent snapshot taken before the instance
entered the unsupported-configuration state.
• In some cases, the DB instance is in the unsupported-configuration state because you made
a change that prevented the uploading of archived redo log files. Examples include stopping the
EC2 instance, stopping the RDS Custom agent, and detaching EBS volumes. In such cases, PITR can't
restore the DB instance to the latest restorable time.

Troubleshooting unsupported configurations


RDS Custom provides troubleshooting guidance for the unsupported-configuration state. Although
some guidance applies to both RDS Custom for Oracle and RDS Custom for SQL Server, other guidance
depends on your DB engine. For engine-specific troubleshooting information, see the following topics:

• Fixing unsupported configurations in RDS Custom for Oracle (p. 1080)


• Fixing unsupported configurations in RDS Custom for SQL Server (p. 1172)


Amazon S3
If you use RDS Custom for Oracle, you upload installation media to a user-created Amazon S3 bucket.
RDS Custom for Oracle uses the media in this bucket to create a custom engine version (CEV). A CEV is a
binary volume snapshot of a database version and Amazon Machine Image (AMI). From the CEV, you can
create an RDS Custom DB instance. For more information, see Working with custom engine versions for
Amazon RDS Custom for Oracle (p. 1015).

For both RDS Custom for Oracle and RDS Custom for SQL Server, RDS Custom automatically creates an
Amazon S3 bucket prefixed with the string do-not-delete-rds-custom-. RDS Custom uses the do-
not-delete-rds-custom- S3 bucket to store the following types of files:

• AWS CloudTrail logs for the trail created by RDS Custom


• Support perimeter artifacts (see RDS Custom support perimeter (p. 985))
• Database redo log files (RDS Custom for Oracle only)
• Transaction logs (RDS Custom for SQL Server only)
• Custom engine version artifacts (RDS Custom for Oracle only)

RDS Custom creates the do-not-delete-rds-custom- S3 bucket when you create either of the
following resources:

• Your first CEV for RDS Custom for Oracle


• Your first DB instance for RDS Custom for SQL Server

RDS Custom creates one bucket for each combination of the following:

• AWS account ID
• Engine type (either RDS Custom for Oracle or RDS Custom for SQL Server)
• AWS Region

For example, if you create RDS Custom for Oracle CEVs in a single AWS Region, one do-not-delete-
rds-custom- bucket exists. If you create multiple RDS Custom for SQL Server instances, and they reside
in different AWS Regions, one do-not-delete-rds-custom- bucket exists in each AWS Region. If you
create one RDS Custom for Oracle instance and two RDS Custom for SQL Server instances in a single
AWS Region, two do-not-delete-rds-custom- buckets exist.
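
If you want to confirm which of these buckets exist in your account, a listing like the following sketch can help. The JMESPath query simply filters bucket names on the do-not-delete-rds-custom- prefix.

# List the automatically created RDS Custom buckets in the current account
aws s3api list-buckets \
    --query "Buckets[?starts_with(Name, 'do-not-delete-rds-custom-')].Name"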

AWS CloudTrail
RDS Custom automatically creates an AWS CloudTrail trail whose name begins with do-not-delete-
rds-custom-. The RDS Custom support perimeter relies on the events from CloudTrail to determine
whether your actions affect RDS Custom automation. For more information, see Troubleshooting
unsupported configurations (p. 985).

RDS Custom creates the trail when you create your first DB instance. RDS Custom creates one trail for
each combination of the following:

• AWS account ID
• Engine type (either RDS Custom for Oracle or RDS Custom for SQL Server)
• AWS Region

When you delete an RDS Custom DB instance, the CloudTrail for this instance isn't automatically
removed. In this case, your AWS account continues to be billed for the undeleted CloudTrail. RDS Custom
is not responsible for the deletion of this resource. To learn how to remove the CloudTrail manually, see
Deleting a trail in the AWS CloudTrail User Guide.
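
As a sketch of that manual cleanup, the following AWS CLI commands list trails and delete one by name. The trail name shown is a placeholder; confirm the exact name in your account, and delete the trail only after all related RDS Custom DB instances are deleted, because RDS Custom depends on it while they exist.

# Find the RDS Custom trail (its name begins with do-not-delete-rds-custom-)
aws cloudtrail list-trails \
    --query "Trails[?starts_with(Name, 'do-not-delete-rds-custom-')]"

# Delete the trail by name
aws cloudtrail delete-trail --name do-not-delete-rds-custom-EXAMPLE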


Security in Amazon RDS Custom


Familiarize yourself with the security considerations for RDS Custom.

Topics
• How RDS Custom securely manages tasks on your behalf (p. 988)
• SSL certificates (p. 989)
• Securing your Amazon S3 bucket against the confused deputy problem (p. 989)
• Rotating RDS Custom for Oracle credentials for compliance programs (p. 990)

How RDS Custom securely manages tasks on your


behalf
RDS Custom uses the following tools and techniques to securely run operations on your behalf:

AWSServiceRoleForRDSCustom service-linked role

A service-linked role is predefined by the service and includes all permissions that the service needs
to call other AWS services on your behalf. For RDS Custom, AWSServiceRoleForRDSCustom is
a service-linked role that is defined according to the principle of least privilege. RDS Custom uses
the permissions in AmazonRDSCustomServiceRolePolicy, which is the policy attached to this
role, to perform most provisioning and all off-host management tasks. For more information, see
AmazonRDSCustomServiceRolePolicy.

When it performs tasks on the host, RDS Custom automation uses credentials from the service-
linked role to run commands using AWS Systems Manager. You can audit the command history
through the Systems Manager command history and AWS CloudTrail. Systems Manager connects
to your RDS Custom DB instance using your networking setup. For more information, see Step 3:
Configure IAM and your Amazon VPC (p. 1003).
Temporary IAM credentials

When provisioning or deleting resources, RDS Custom sometimes uses temporary credentials derived
from the credentials of the calling IAM principal. These IAM credentials are restricted by the IAM
policies attached to that principal and expire after the operation is completed. To learn about the
permissions required for IAM principals who use RDS Custom, see Step 4: Grant required permissions
to your IAM user or role (p. 1012).
Amazon EC2 instance profile

An EC2 instance profile is a container for an IAM role that you can use to pass role information to an
EC2 instance. An EC2 instance underlies an RDS Custom DB instance. You provide an instance profile
when you create an RDS Custom DB instance. RDS Custom uses EC2 instance profile credentials
when it performs host-based management tasks such as backups. For more information, see Create
your IAM role and instance profile manually (p. 1007).
SSH key pair

When RDS Custom creates the EC2 instance that underlies a DB instance, it creates an SSH key pair
on your behalf. The key uses the naming prefix do-not-delete-rds-custom-ssh-privatekey-
db-. AWS Secrets Manager stores this SSH private key as a secret in your AWS account. Amazon RDS
doesn't store, access, or use these credentials. For more information, see Amazon EC2 key pairs and
Linux instances.

Note
RDS Custom


SSL certificates
RDS Custom DB instances don't support managed SSL certificates. If you want to deploy SSL, you can
self-manage SSL certificates in your own wallet and create an SSL listener to secure the connections
between the client and the database, or for database replication. For more information, see Configuring
Transport Layer Security Authentication in the Oracle Database documentation.

Securing your Amazon S3 bucket against the


confused deputy problem
When you create an Amazon RDS Custom for Oracle custom engine version (CEV) or an RDS Custom for
SQL Server DB instance, RDS Custom creates an Amazon S3 bucket. The S3 bucket stores files such as
CEV artifacts, redo (transaction) logs, configuration items for the support perimeter, and AWS CloudTrail
logs.

You can make these S3 buckets more secure by using the global condition context keys to prevent
the confused deputy problem. For more information, see Preventing cross-service confused deputy
problems (p. 2640).

The following RDS Custom for Oracle example shows the use of the aws:SourceArn and
aws:SourceAccount global condition context keys in an S3 bucket policy. For RDS Custom for Oracle,
make sure to include the Amazon Resource Names (ARNs) for the CEVs and the DB instances. For RDS
Custom for SQL Server, make sure to include the ARN for the DB instances.

...
{
"Sid": "AWSRDSCustomForOracleInstancesObjectLevelAccess",
"Effect": "Allow",
"Principal": {
"Service": "custom.rds.amazonaws.com"
},
"Action": [
"s3:GetObject",
"s3:GetObjectVersion",
"s3:DeleteObject",
"s3:DeleteObjectVersion",
"s3:GetObjectRetention",
"s3:BypassGovernanceRetention"
],
"Resource": "arn:aws:s3:::do-not-delete-rds-custom-123456789012-us-east-2-c8a6f7/
RDSCustomForOracle/Instances/*",
"Condition": {
"ArnLike": {
"aws:SourceArn": [
"arn:aws:rds:us-east-2:123456789012:db:*",
"arn:aws:rds:us-east-2:123456789012:cev:*/*"
]
},
"StringEquals": {
"aws:SourceAccount": "123456789012"
}
}
},
...
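
To apply a bucket policy like the preceding example, you might save the complete policy document to a file and attach it with the AWS CLI, as in this sketch. The bucket name matches the example above and stands in for your own do-not-delete-rds-custom- bucket.

# Attach the hardened bucket policy (saved locally as bucket-policy.json)
aws s3api put-bucket-policy \
    --bucket do-not-delete-rds-custom-123456789012-us-east-2-c8a6f7 \
    --policy file://bucket-policy.json

# Verify the policy that is now in effect
aws s3api get-bucket-policy \
    --bucket do-not-delete-rds-custom-123456789012-us-east-2-c8a6f7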


Rotating RDS Custom for Oracle credentials for


compliance programs
Some compliance programs require database user credentials to change periodically, for example, every
90 days. RDS Custom for Oracle automatically rotates credentials for some predefined database users.

Topics
• Automatic rotation of credentials for predefined users (p. 990)
• Guidelines for rotating user credentials (p. 991)
• Rotating user credentials manually (p. 991)

Automatic rotation of credentials for predefined users


If your RDS Custom for Oracle DB instance is hosted in Amazon RDS, credentials for the following
predefined Oracle users rotate automatically every 30 days. Credentials for these users reside in
AWS Secrets Manager.

Predefined Oracle users

• SYS – Created by: Oracle. Supported engine versions: custom-oracle-ee and custom-oracle-ee-cdb.
• SYSTEM – Created by: Oracle. Supported engine versions: custom-oracle-ee and custom-oracle-ee-cdb.
• RDSADMIN – Created by: RDS. Supported engine versions: custom-oracle-ee.
• C##RDSADMIN – Created by: RDS. Supported engine versions: custom-oracle-ee-cdb. Notes: Usernames with a C## prefix exist only in CDBs. For more information about CDBs, see Overview of Amazon RDS Custom for Oracle architecture.
• RDS_DATAGUARD – Created by: RDS. Supported engine versions: custom-oracle-ee. Notes: This user exists only in read replicas, source databases for read replicas, and databases that you have physically migrated into RDS Custom using Oracle Data Guard.
• C##RDS_DATAGUARD – Created by: RDS. Supported engine versions: custom-oracle-ee-cdb. Notes: This user exists only in read replicas, source databases for read replicas, and databases that you have physically migrated into RDS Custom using Oracle Data Guard. Usernames with a C## prefix exist only in CDBs. For more information about CDBs, see Overview of Amazon RDS Custom for Oracle architecture.

An exception to the automatic credential rotation is an RDS Custom for Oracle DB instance that
you have manually configured as a standby database. RDS only rotates credentials for read
replicas that you have created using the create-db-instance-read-replica CLI command or
CreateDBInstanceReadReplica API.

Guidelines for rotating user credentials


To make sure that your credentials rotate according to your compliance program, note the following
guidelines:

• If your DB instance rotates credentials automatically, don't manually change or delete a secret,
password file, or password for users listed in Predefined Oracle users (p. 990). Otherwise, RDS
Custom might place your DB instance outside of the support perimeter, which suspends automatic
rotation.
• The RDS master user is not predefined, so you are responsible for either changing the password
manually or setting up automatic rotation in Secrets Manager. For more information, see Rotate AWS
Secrets Manager secrets.

Rotating user credentials manually


For the following categories of databases, RDS doesn't automatically rotate the credentials for the users
listed in Predefined Oracle users (p. 990):

• A database that you configured manually to function as a standby database.


• An on-premises database.
• A DB instance that is outside of the support perimeter or in a state in which the RDS Custom
automation can't run. In this case, RDS Custom also doesn't rotate keys.

If your database is in any of the preceding categories, you must rotate your user credentials manually.

To rotate user credentials manually for a DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In Databases, make sure that RDS isn't currently backing up your DB instance or performing
operations such as configuring high availability.
3. In the database details page, choose Configuration and note the Resource ID for the DB instance. Or
you can use the AWS CLI command describe-db-instances, as shown in the sketch following this procedure.
4. Open the Secrets Manager console at https://console.aws.amazon.com/secretsmanager/.
5. In the search box, enter your DB Resource ID and find the secret in the following form:

do-not-delete-rds-custom-db-resource-id-numeric-string

This secret stores the password for RDSADMIN, SYS, and SYSTEM. The following sample key is for the
DB instance with the DB resource ID db-ABCDEFG12HIJKLNMNOPQRS3TUVWX:

do-not-delete-rds-custom-db-ABCDEFG12HIJKLNMNOPQRS3TUVWX-123456

Important
If your DB instance is a read replica and uses the custom-oracle-ee-cdb engine, two
secrets exist with the suffix db-resource-id-numeric-string, one for the master user
and the other for RDSADMIN, SYS, and SYSTEM. To find the correct secret, run the following
command on the host:


cat /opt/aws/rdscustomagent/config/database_metadata.json | python3 -c "import sys,json; print(json.load(sys.stdin)['dbMonitoringUserPassword'])"

The dbMonitoringUserPassword attribute indicates the secret for RDSADMIN, SYS, and
SYSTEM.
6. If your DB instance exists in an Oracle Data Guard configuration, find the secret in the following
form:

do-not-delete-rds-custom-db-resource-id-numeric-string-dg

This secret stores the password for RDS_DATAGUARD. The following sample key is for the DB
instance with the DB resource ID db-ABCDEFG12HIJKLNMNOPQRS3TUVWX:

do-not-delete-rds-custom-db-ABCDEFG12HIJKLNMNOPQRS3TUVWX-789012-dg

7. For all database users listed in Predefined Oracle users (p. 990), update the passwords by following
the instructions in Modify an AWS Secrets Manager secret.
8. If your database is a standalone database or a source database in an Oracle Data Guard
configuration:

a. Start your Oracle SQL client and log in as SYS.


b. Run a SQL statement in the following form for each database user listed in Predefined Oracle
users (p. 990):

ALTER USER user-name IDENTIFIED BY pwd-from-secrets-manager ACCOUNT UNLOCK;

For example, if the new password for RDSADMIN stored in Secrets Manager is pwd-123, run the
following statement:

ALTER USER RDSADMIN IDENTIFIED BY pwd-123 ACCOUNT UNLOCK;

9. If your DB instance runs Oracle Database 12c Release 1 (12.1) and is managed by Oracle Data Guard,
manually copy the password file (orapw) from the primary DB instance to each standby DB instance.

If your DB instance is hosted in Amazon RDS, the password file location is /rdsdbdata/config/orapw.
For databases that aren't hosted in Amazon RDS, the default location is $ORACLE_HOME/dbs/orapw$ORACLE_SID
on Linux and UNIX and %ORACLE_HOME%\database\PWD%ORACLE_SID%.ora on Windows.
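
As referenced in step 3, the following AWS CLI sketch shows one way to find the DB resource ID and then read the rotated password secret from Secrets Manager. The instance identifier and secret name are placeholders; the secret name follows the do-not-delete-rds-custom-db-resource-id-numeric-string pattern described earlier.

# Find the DB resource ID for the instance (alternative to the console in step 3)
aws rds describe-db-instances \
    --db-instance-identifier my-custom-instance \
    --query "DBInstances[0].DbiResourceId"

# Retrieve the secret that stores the RDSADMIN, SYS, and SYSTEM password
aws secretsmanager get-secret-value \
    --secret-id do-not-delete-rds-custom-db-ABCDEFG12HIJKLNMNOPQRS3TUVWX-123456 \
    --query SecretString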


Working with RDS Custom for Oracle


Following, you can find instructions for creating, managing, and maintaining your RDS Custom for Oracle
DB instances.

Topics
• RDS Custom for Oracle workflow (p. 993)
• Database architecture for Amazon RDS Custom for Oracle (p. 997)
• RDS Custom for Oracle requirements and limitations (p. 999)
• Setting up your environment for Amazon RDS Custom for Oracle (p. 1002)
• Working with custom engine versions for Amazon RDS Custom for Oracle (p. 1015)
• Configuring a DB instance for Amazon RDS Custom for Oracle (p. 1035)
• Managing an Amazon RDS Custom for Oracle DB instance (p. 1047)
• Working with Oracle replicas for RDS Custom for Oracle (p. 1060)
• Backing up and restoring an Amazon RDS Custom for Oracle DB instance (p. 1065)
• Migrating an on-premises database to RDS Custom for Oracle (p. 1072)
• Upgrading a DB instance for Amazon RDS Custom for Oracle (p. 1073)
• Troubleshooting DB issues for Amazon RDS Custom for Oracle (p. 1078)

RDS Custom for Oracle workflow


The following diagram shows the typical workflow for RDS Custom for Oracle.

The steps are as follows:


1. Upload your database software to your Amazon S3 bucket.

For more information, see Step 3: Upload your installation files to Amazon S3 (p. 1017).
2. Create an RDS Custom for Oracle custom engine version (CEV) from your media.

Choose either the multitenant or non-multitenant architecture. For more information, see Creating a
CEV (p. 1026).
3. Create an RDS Custom for Oracle DB instance from a CEV.

For more information, see Creating an RDS Custom for Oracle DB instance (p. 1035).
4. Connect your application to the DB instance endpoint.

For more information, see Connecting to your RDS Custom DB instance using SSH (p. 1041) and
Connecting to your RDS Custom DB instance using Session Manager (p. 1040).
5. (Optional) Access the host to customize your software.
6. Monitor notifications and messages generated by RDS Custom automation.

Database installation files


Your responsibility for media is a key difference between Amazon RDS and RDS Custom. Amazon RDS,
which is a fully managed service, supplies the Amazon Machine Image (AMI) and database software. The
Amazon RDS database software is preinstalled, so you need only choose a database engine and version,
and create your database.

For RDS Custom, you supply your own media. When you create a custom engine version, RDS Custom
installs the media that you provide. RDS Custom media contains your database installation files and
patches. This service model is called Bring Your Own Media (BYOM).

Custom engine version


An RDS Custom custom engine version (CEV) is a binary volume snapshot of a database version and AMI.
By default, RDS Custom for Oracle uses the most recent AMI that Amazon EC2 makes available. You can
also choose to reuse an existing AMI.

CEV manifest
After you download Oracle database installation files from Oracle, you upload them to an Amazon S3
bucket. When you create your CEV, you specify the file names in a JSON document called a CEV manifest.
RDS Custom for Oracle uses the specified files and the AMI to create your CEV.

RDS Custom for Oracle provides JSON manifest templates with our recommended .zip files for each
supported Oracle Database release. For example, the following template is for the 19.17.0.0.0 RU.

{
"mediaImportTemplateVersion": "2020-08-14",
"databaseInstallationFileNames": [
"V982063-01.zip"
],
"opatchFileNames": [
"p6880880_190000_Linux-x86-64.zip"
],
"psuRuPatchFileNames": [
"p34419443_190000_Linux-x86-64.zip",
"p34411846_190000_Linux-x86-64.zip"
],
"otherPatchFileNames": [
"p28852325_190000_Linux-x86-64.zip",

994
Amazon Relational Database Service User Guide
RDS Custom for Oracle workflow

"p29997937_190000_Linux-x86-64.zip",
"p31335037_190000_Linux-x86-64.zip",
"p32327201_190000_Linux-x86-64.zip",
"p33613829_190000_Linux-x86-64.zip",
"p34006614_190000_Linux-x86-64.zip",
"p34533061_190000_Linux-x86-64.zip",
"p34533150_190000_Generic.zip",
"p28730253_190000_Linux-x86-64.zip",
"p29213893_1917000DBRU_Generic.zip",
"p33125873_1917000DBRU_Linux-x86-64.zip",
"p34446152_1917000DBRU_Linux-x86-64.zip"
]
}

You can also specify installation parameters in the JSON manifest. For example, you can set nondefault
values for the Oracle base, Oracle home, and the ID and name of the UNIX/Linux user and group. For
more information, see JSON fields in the CEV manifest (p. 1020).

CEV naming format


Name your CEV using a customer-specified string. The name format is the following, depending on your
Oracle Database release:

• 19.customized_string
• 18.customized_string
• 12.2.customized_string
• 12.1.customized_string

You can use 1–50 alphanumeric characters, underscores, dashes, and periods. For example, you might
name your CEV 19.my_cev1.
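
Putting the manifest and the naming format together, a CEV creation call looks roughly like the following AWS CLI sketch. The bucket name, prefix, KMS key ID, description, and CEV name are placeholders, and manifest.json is the JSON manifest described earlier.

# Create a CEV named 19.my_cev1 from installation files previously uploaded to Amazon S3
aws rds create-custom-db-engine-version \
    --engine custom-oracle-ee \
    --engine-version 19.my_cev1 \
    --database-installation-files-s3-bucket-name my-cev-bucket \
    --database-installation-files-s3-prefix 19c/ \
    --kms-key-id my-kms-key-id \
    --manifest file://manifest.json \
    --description "19.17 RU with one-off patches"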

Multitenant architecture
The multitenant architecture enables an Oracle database to function as a multitenant container database
(CDB). A CDB includes zero, one, or many customer-created pluggable databases (PDBs). A PDB is a
portable collection of schemas and objects that appears to an application as a non-CDB.

When you create a CEV, you can specify either the multitenant or non-multitenant architecture.
You can create an RDS Custom for Oracle CDB only when the CEV that you used to create it uses the
multitenant architecture. For more information, see Working with custom engine versions for Amazon
RDS Custom for Oracle (p. 1015).

Creating a DB instance for RDS Custom for Oracle


After you create your CEV, it's available for use. You can create multiple CEVs, and you can create
multiple RDS Custom for Oracle DB instances from any CEV. You can also change the status of a CEV to
make it available or inactive.

You can either create your RDS Custom for Oracle DB instance with the Oracle Multitenant architecture
(custom-oracle-ee-cdb engine type) or with the traditional non-CDB architecture (custom-oracle-
ee engine type). When you create a container database (CDB), it contains one pluggable database (PDB)
and one PDB seed. You can create additional PDBs manually using Oracle SQL.

To create your RDS Custom for Oracle DB instance, use the create-db-instance command. In this
command, specify which CEV to use. The procedure is similar to creating an Amazon RDS DB instance.
However, some parameters are different. For more information, see Configuring a DB instance for
Amazon RDS Custom for Oracle (p. 1035).
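
The following sketch shows the general shape of such a create-db-instance call for the non-CDB (custom-oracle-ee) engine type. All identifiers, the password, the subnet group, the instance profile name, and the KMS key ID are placeholders that you replace with values from your own environment.

# Create an RDS Custom for Oracle DB instance from the CEV 19.my_cev1
aws rds create-db-instance \
    --db-instance-identifier my-custom-oracle-instance \
    --engine custom-oracle-ee \
    --engine-version 19.my_cev1 \
    --db-instance-class db.m5.xlarge \
    --allocated-storage 200 \
    --storage-type gp2 \
    --master-username admin \
    --master-user-password my-placeholder-password \
    --kms-key-id my-kms-key-id \
    --custom-iam-instance-profile AWSRDSCustomInstanceProfile-us-east-1 \
    --db-subnet-group-name rds-custom-private \
    --backup-retention-period 7 \
    --no-multi-az \
    --no-auto-minor-version-upgrade \
    --port 1521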


Database connection
Like an Amazon RDS DB instance, an RDS Custom DB instance resides in a virtual private cloud (VPC).
Your application connects to the Oracle database using an Oracle listener.

If your database is a CDB, you can use the listener L_RDSCDB_001 to connect to the CDB root and to a
PDB. If you plug a non-CDB into a CDB, make sure to set USE_SID_AS_SERVICE_LISTENER = ON so
that migrated applications keep the same settings.

When you connect to a non-CDB, the master user is the user for the non-CDB. When you connect to a
CDB, the master user is the user for the PDB. To connect to the CDB root, log in to the host, start a SQL
client, and create an administrative user with SQL commands.

RDS Custom customization


You can access the RDS Custom host to install or customize software. To avoid conflicts between your
changes and the RDS Custom automation, you can pause the automation for a specified period. During
this period, RDS Custom doesn't perform monitoring or instance recovery. At the end of the period, RDS
Custom resumes full automation. For more information, see Pausing and resuming your RDS Custom DB
instance (p. 1049).
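
A pause request is made with modify-db-instance. The following sketch assumes the --automation-mode and --resume-full-automation-mode-minutes parameters that RDS Custom exposes for this purpose and uses a placeholder instance name; confirm the exact values against the pausing documentation referenced above.

# Pause RDS Custom automation for two hours while you customize the host
aws rds modify-db-instance \
    --db-instance-identifier my-custom-oracle-instance \
    --automation-mode all-paused \
    --resume-full-automation-mode-minutes 120

# Resume full automation without waiting for the timer to expire
aws rds modify-db-instance \
    --db-instance-identifier my-custom-oracle-instance \
    --automation-mode full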


Database architecture for Amazon RDS Custom for


Oracle
RDS Custom for Oracle supports both the multitenant and non-multitenant architecture.

Topics
• Supported Oracle database architectures (p. 997)
• Supported engine types (p. 997)
• Supported features in the multitenant architecture (p. 997)

Supported Oracle database architectures


Oracle Database 19c supports both the multitenant and non-multitenant (non-CDB) architecture.
The multitenant architecture enables an Oracle database to function as a container database (CDB). A
CDB includes pluggable databases (PDBs). A PDB is a portable collection of schemas and objects that
appears to an application as a traditional Oracle database. For more information, see Introduction to the
Multitenant Architecture in the Oracle Multitenant Administrator’s Guide.

The multitenant and non-multitenant architectures are mutually exclusive. If a database isn't a CDB, it's
a non-CDB and so can't contain other databases. In RDS Custom for Oracle, only Oracle Database 19c
supports the multitenant architecture. Thus, if you create instances using previous database releases, you
can create only non-CDBs.

Supported engine types


When you create an Amazon RDS Custom for Oracle CEV or DB instance, choose either of the following
engine types:

• custom-oracle-ee-cdb

This engine type specifies the multitenant architecture. This option is available only for Oracle
Database 19c. When you create an RDS for Oracle DB instance using the multitenant architecture, your
CDB includes the following containers:
• CDB root (CDB$ROOT)
• PDB seed (PDB$SEED)
• Initial PDB

You can create more PDBs using the Oracle SQL command CREATE PLUGGABLE DATABASE. You can't
use RDS APIs to create or delete PDBs.
• custom-oracle-ee

This engine type specifies the traditional non-CDB architecture. A non-CDB can't contain pluggable
databases (PDBs).

For more information, see Multitenant architecture considerations (p. 1035).

Supported features in the multitenant architecture


An RDS Custom for Oracle CDB instance supports the following features:

• Backups
• Restoring and point-in-time recovery (PITR) from backups
• Read replicas
• Minor version upgrades


RDS Custom for Oracle requirements and limitations


In this topic, you can find a summary of the Amazon RDS Custom for Oracle feature availability and
requirements for quick reference.

Topics
• AWS Region and database version support for RDS Custom for Oracle (p. 999)
• Edition and licensing support for RDS Custom for Oracle (p. 999)
• DB instance class support for RDS Custom for Oracle (p. 999)
• General requirements for RDS Custom for Oracle (p. 1000)
• General limitations for RDS Custom for Oracle (p. 1000)

AWS Region and database version support for RDS Custom for
Oracle
Feature availability and support vary across specific versions of each database engine, and across AWS
Regions. For more information on version and Region availability of RDS Custom for Oracle, see RDS
Custom (p. 151).

Edition and licensing support for RDS Custom for Oracle


RDS Custom for Oracle supports only Enterprise Edition on the BYOL model.

DB instance class support for RDS Custom for Oracle


RDS Custom for Oracle supports the following DB instance classes.

Type         Size

db.r6i       db.r6i.large | db.r6i.xlarge | db.r6i.2xlarge | db.r6i.4xlarge | db.r6i.8xlarge |
             db.r6i.12xlarge | db.r6i.16xlarge | db.r6i.24xlarge | db.r6i.32xlarge

db.r5b       db.r5b.large | db.r5b.xlarge | db.r5b.2xlarge | db.r5b.4xlarge | db.r5b.8xlarge |
             db.r5b.12xlarge | db.r5b.16xlarge | db.r5b.24xlarge

db.r5        db.r5.large | db.r5.xlarge | db.r5.2xlarge | db.r5.4xlarge | db.r5.8xlarge |
             db.r5.12xlarge | db.r5.16xlarge | db.r5.24xlarge

db.x2iedn    db.x2iedn.xlarge | db.x2iedn.2xlarge | db.x2iedn.4xlarge | db.x2iedn.8xlarge |
             db.x2iedn.16xlarge | db.x2iedn.24xlarge | db.x2iedn.32xlarge

db.m6i       db.m6i.large | db.m6i.xlarge | db.m6i.2xlarge | db.m6i.4xlarge | db.m6i.8xlarge |
             db.m6i.12xlarge | db.m6i.16xlarge | db.m6i.24xlarge | db.m6i.32xlarge

db.m5        db.m5.large | db.m5.xlarge | db.m5.2xlarge | db.m5.4xlarge | db.m5.8xlarge |
             db.m5.12xlarge | db.m5.16xlarge | db.m5.24xlarge

db.t3        db.t3.medium | db.t3.large | db.t3.xlarge | db.t3.2xlarge


General requirements for RDS Custom for Oracle


Make sure to follow these requirements for Amazon RDS Custom for Oracle:

• Use Oracle Software Delivery Cloud to download Oracle installation and patch files. For more
information, see Prerequisites for creating an RDS Custom for Oracle DB instance (p. 1002).
• Use the DB instance classes shown in DB instance class support for RDS Custom for Oracle (p. 999).
The DB instances must run Oracle Linux 7 Update 9.
• Specify the gp2, gp3, or io1 solid state drives for storage. The maximum storage limit is 64 TiB.
• Make sure that you have an AWS KMS key to create an RDS Custom DB instance. For more information,
see Step 1: Create or reuse a symmetric encryption AWS KMS key (p. 1003).
• Use only the approved Oracle database installation and patch files. For more information, see Step 2:
Download your database installation files and patches from Oracle Software Delivery Cloud (p. 1016).
• Create an AWS Identity and Access Management (IAM) role and instance profile. For more information,
see Step 3: Configure IAM and your Amazon VPC (p. 1003).
• Make sure to supply a networking configuration that RDS Custom can use to access other AWS
services. For specific requirements, see Step 3: Configure IAM and your Amazon VPC (p. 1003).
• Make sure that the combined number of RDS Custom and Amazon RDS DB instances doesn't exceed
your quota limit. For example, if your quota for Amazon RDS is 40 DB instances, you can have 20 RDS
Custom for Oracle DB instances and 20 Amazon RDS DB instances.

General limitations for RDS Custom for Oracle


The following limitations apply to RDS Custom for Oracle:

• You can't provide your own AMI. You can specify only the default AMI or an AMI that has been
previously used by a CEV.
• You can't modify a CEV to use a different AMI.
• You can't modify the DB instance identifier of an existing RDS Custom for Oracle DB instance.
• You can't specify the multitenant architecture for a database release other than Oracle Database 19c.
• You can't create a CDB instance from a CEV that uses the custom-oracle-ee engine. The CEV must
use custom-oracle-ee-cdb.
• Not all Amazon RDS options are supported. For example, when you create or modify an RDS Custom
for Oracle DB instance, you can't do the following:
• Change the number of CPU cores and threads per core on the DB instance class.
• Turn on storage autoscaling.
• Create a Multi-AZ deployment.
Note
For an alternative HA solution, see the AWS blog article Build high availability for Amazon
RDS Custom for Oracle using read replicas.
• Set backup retention to 0.
• Configure Kerberos authentication.
• Specify your own DB parameter group or option group.
• Turn on Performance Insights.
• Turn on automatic minor version upgrade.
• You can't specify a DB instance storage size greater than the maximum of 64 TiB.
• You can't create multiple Oracle databases on a single RDS Custom for Oracle DB instance.
• You can’t stop your RDS Custom for Oracle DB instance or its underlying Amazon EC2 instance. Billing
for an RDS Custom for Oracle DB instance can't be stopped.
• You can't use automatic shared memory management. RDS Custom for Oracle supports only
automatic memory management. For more information, see Automatic Memory Management in the
Oracle Database Administrator’s Guide.
• Make sure not to change the DB_UNIQUE_NAME for the primary DB instance. Changing the name
causes any restore operation to become stuck.

For limitations specific to modifying an RDS Custom for Oracle DB instance, see Modifying your RDS
Custom for Oracle DB instance (p. 1052). For replication limitations, see General limitations for RDS
Custom for Oracle replication (p. 1062).


Setting up your environment for Amazon RDS


Custom for Oracle
Before you create an Amazon RDS Custom for Oracle DB instance, perform the following tasks.

Topics
• Prerequisites for creating an RDS Custom for Oracle DB instance (p. 1002)
• Step 1: Create or reuse a symmetric encryption AWS KMS key (p. 1003)
• Step 2: Download and install the AWS CLI (p. 1003)
• Step 3: Configure IAM and your Amazon VPC (p. 1003)
• Step 4: Grant required permissions to your IAM user or role (p. 1012)

Prerequisites for creating an RDS Custom for Oracle DB instance


Before creating an RDS Custom for Oracle DB instance, make sure that you meet the following
prerequisites:

• You have access to My Oracle Support and Oracle Software Delivery Cloud to download the supported
list of installation files and patches for the Enterprise Edition of any of the following Oracle Database
releases:
• Oracle Database 19c
• Oracle Database 18c
• Oracle Database 12c Release 2 (12.2)
• Oracle Database 12c Release 1 (12.1)

If you use an unknown patch, custom engine version (CEV) creation fails. In this case, contact the RDS
Custom support team and ask it to add the missing patch.

For more information, see Step 2: Download your database installation files and patches from Oracle
Software Delivery Cloud (p. 1016).
• You have access to Amazon S3. This service is required for the following reasons:
• You upload your Oracle installation files to S3 buckets. You use the uploaded installation files to
create your RDS Custom CEV.
• RDS Custom for Oracle uses scripts downloaded from internally defined S3 buckets to perform
actions on your DB instances. These scripts are necessary for onboarding and RDS Custom
automation.
• RDS Custom for Oracle uploads certain files to S3 buckets located in your customer
account. These buckets use the following naming format: do-not-delete-rds-
custom-account_id-region-six_character_alphanumeric_string. For example, you might
have a bucket named do-not-delete-rds-custom-123456789012-us-east-1-12a3b4.

For more information, see Step 3: Upload your installation files to Amazon S3 (p. 1017) and Creating a
CEV (p. 1026).
• You supply your own virtual private cloud (VPC) and security group configuration. For more
information, see Step 3: Configure IAM and your Amazon VPC (p. 1003).
• The AWS Identity and Access Management (IAM) user that creates a CEV or RDS Custom DB instance
has the required permissions for IAM, CloudTrail, and Amazon S3.

For more information, see Step 4: Grant required permissions to your IAM user or role (p. 1012).


For each task, the following sections describe the requirements and limitations specific to the task.
For example, when you create your RDS Custom for Oracle DB instance, use a supported instance class,
such as db.m5 or db.r5, running Oracle Linux 7 Update 9. For general requirements that apply to RDS Custom,
see RDS Custom for Oracle requirements and limitations (p. 999).

Step 1: Create or reuse a symmetric encryption AWS KMS key


Customer managed keys are AWS KMS keys in your AWS account that you create, own, and manage. A
customer managed symmetric encryption KMS key is required for RDS Custom. When you create an RDS
Custom for Oracle DB instance, you supply the KMS key identifier. For more information, see Configuring
a DB instance for Amazon RDS Custom for Oracle (p. 1035).

You have the following options:

• If you have an existing customer managed KMS key in your AWS account, you can use it with RDS
Custom. No further action is necessary.
• If you already created a customer managed symmetric encryption KMS key for a different RDS Custom
engine, you can reuse the same KMS key. No further action is necessary.
• If you don't have an existing customer managed symmetric encryption KMS key in your account, create
a KMS key by following the instructions in Creating keys in the AWS Key Management Service Developer
Guide.
• If you're creating a CEV or RDS Custom DB instance, and your KMS key is in a different AWS account,
make sure to use the AWS CLI. You can't use the AWS console with cross-account KMS keys.

Important
RDS Custom doesn't support AWS managed KMS keys.

Make sure that your symmetric encryption key grants access to the kms:Decrypt and
kms:GenerateDataKey operations to the AWS Identity and Access Management (IAM) role in your IAM
instance profile. If you have a new symmetric encryption key in your account, no changes are required.
Otherwise, make sure that your symmetric encryption key's policy grants access to these operations.

For more information about configuring IAM for RDS Custom for Oracle, see Step 3: Configure IAM and
your Amazon VPC (p. 1003).
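
If you prefer to create the key from the AWS CLI instead of the console, a minimal sketch looks like the following. The description and alias are placeholders; the key policy must still grant kms:Decrypt and kms:GenerateDataKey to the role in your instance profile, as noted above.

# Create a customer managed symmetric encryption KMS key for RDS Custom
aws kms create-key \
    --description "Customer managed key for RDS Custom for Oracle"

# Optionally give the key a friendly alias (use the KeyId returned by the previous command)
aws kms create-alias \
    --alias-name alias/rds-custom-oracle \
    --target-key-id key-id-from-previous-command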

Step 2: Download and install the AWS CLI


AWS provides you with a command-line interface to use RDS Custom features. You can use either version
1 or version 2 of the AWS CLI.

For information about downloading and installing the AWS CLI, see Installing or updating the latest
version of the AWS CLI.

Skip this step if either of the following is true:

• You plan to access RDS Custom only from the AWS Management Console.
• You have already downloaded the AWS CLI for Amazon RDS or a different RDS Custom DB engine.

Step 3: Configure IAM and your Amazon VPC


You use an IAM role or IAM user (known as an IAM entity) to create an RDS Custom DB instance using the
console or AWS CLI. This IAM entity must have the necessary permissions for instance creation.


Your RDS Custom DB instance is in a virtual private cloud (VPC) based on the Amazon VPC service, just
like an Amazon EC2 instance or Amazon RDS instance. You provide and configure your own VPC. Thus,
you have full control over your instance networking setup.

You can configure your IAM identity and virtual private cloud (VPC) using either of the following
techniques:

• Configure IAM and your VPC using AWS CloudFormation (p. 1004) (recommended)
• Follow the procedures in Create your IAM role and instance profile manually (p. 1007) and Configure
your VPC manually (p. 1011)

We strongly recommend that you configure your RDS Custom for Oracle environment using AWS
CloudFormation. This technique is the easiest and least error-prone.

Configure IAM and your VPC using AWS CloudFormation


A CloudFormation stack is a collection of AWS resources that you can manage as a single unit. To
simplify RDS Custom configuration, you can use the AWS CloudFormation template files to create
CloudFormation stacks. For more information, see Creating a stack on the AWS CloudFormation console
in AWS CloudFormation User Guide.

AWS CloudFormation configuration steps


• IAM and VPC resources created by CloudFormation (p. 1004)
• Step 1: Download the CloudFormation template files (p. 1004)
• Step 2: Configure IAM using CloudFormation (p. 1005)
• Step 3: Configure your VPC using CloudFormation (p. 1006)

IAM and VPC resources created by CloudFormation


When you use the CloudFormation templates, they create stacks that include the following resources in
your AWS account:

• An Amazon EC2 instance profile named AWSRDSCustomInstanceProfile-region


• An IAM role named AWSRDSCustomInstanceRole-region
• A DB subnet group named rds-custom-private
• The following VPC endpoints, which are necessary for your DB instance to communicate with
dependent AWS services:
• com.amazonaws.region.ec2messages
• com.amazonaws.region.events
• com.amazonaws.region.logs
• com.amazonaws.region.monitoring
• com.amazonaws.region.s3
• com.amazonaws.region.secretsmanager
• com.amazonaws.region.ssm
• com.amazonaws.region.ssmmessages

Unlike RDS Custom for SQL Server, RDS Custom for Oracle doesn't create an access control list or
security groups. You must attach your own security group, subnets, and route tables.

Step 1: Download the CloudFormation template files


A CloudFormation template is a declaration of the AWS resources that make up a stack. The template is
stored as a JSON file.


To download the CloudFormation template files

1. Open the context (right-click) menu for the link custom-oracle-iam.zip and choose Save Link As.
2. Save the file to your computer.
3. Repeat the previous steps for the link custom-vpc.zip.

If you already configured your VPC for RDS Custom, skip this step.

Step 2: Configure IAM using CloudFormation

When you use the CloudFormation template for IAM, it creates the following required resources:

• An instance profile named AWSRDSCustomInstanceProfile-region


• A service role named AWSRDSCustomInstanceRole-region
• An access policy named AWSRDSCustomIamRolePolicy that is attached to the service role

To configure IAM using CloudFormation

1. Open the CloudFormation console at https://console.aws.amazon.com/cloudformation.


2. Start the Create Stack wizard, and choose Create Stack.
3. On the Create stack page, do the following:

a. For Prepare template, choose Template is ready.


b. For Template source, choose Upload a template file.
c. For Choose file, navigate to, then choose custom-oracle-iam.json.
d. Choose Next.
4. On the Specify stack details page, do the following:

a. For Stack name, enter custom-oracle-iam.


b. Choose Next.
5. On the Configure stack options page, choose Next.
6. On the Review custom-oracle-iam page, do the following:

a. Select the I acknowledge that AWS CloudFormation might create IAM resources with custom
names check box.
b. Choose Submit.

CloudFormation creates the IAM roles that RDS Custom for Oracle requires. In the left panel, when
custom-oracle-iam shows CREATE_COMPLETE, proceed to the next step.
7. In the left panel, choose custom-oracle-iam. In the right panel, do the following:

a. Choose Stack info. Your stack has an ID in the format


arn:aws:cloudformation:region:account-no:stack/custom-oracle-iam/identifier.
b. Choose Resources. You should see the following:

• An instance profile named AWSRDSCustomInstanceProfile-region


• A service role named AWSRDSCustomInstanceRole-region

When you create your RDS Custom DB instance, you need to supply the instance profile ID.
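
If you prefer to work from the AWS CLI, the following sketch creates the same stack, assuming that you extracted custom-oracle-iam.json from the downloaded .zip file into your current directory. The CAPABILITY_NAMED_IAM capability is required because the template creates IAM resources with custom names.

# Create the IAM stack from the downloaded template.
aws cloudformation create-stack \
--stack-name custom-oracle-iam \
--template-body file://custom-oracle-iam.json \
--capabilities CAPABILITY_NAMED_IAM

# Wait until the stack reaches CREATE_COMPLETE.
aws cloudformation wait stack-create-complete \
--stack-name custom-oracle-iam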


Step 3: Configure your VPC using CloudFormation

If you've already configured your VPC for a different RDS Custom engine and want to reuse the existing
VPC, skip this step. This section assumes the following:

• You've already used CloudFormation to create your IAM instance profile and role.
• You know your route table ID.

For a DB instance to be private, it must be in a private subnet. For a subnet to be private, it must
not be associated with a route table that has a default internet gateway. For more information, see
Configure route tables in the Amazon VPC User Guide.

When you use the CloudFormation template for your VPC, it creates the following required resources:

• A private VPC
• A subnet group named rds-custom-private
• VPC endpoints that use the naming format vpce-string

To configure your VPC using CloudFormation

1. Open the CloudFormation console at https://console.aws.amazon.com/cloudformation.


2. Start the Create Stack wizard, and choose Create Stack and then With new resources (standard).
3. On the Create stack page, do the following:

a. For Prepare template, choose Template is ready.


b. For Template source, choose Upload a template file.
c. For Choose file, navigate to, then choose custom-vpc.json.
d. Choose Next.
4. On the Specify stack details page, do the following:

a. For Stack name, enter custom-vpc.


b. For Parameters, choose the private subnets to use for RDS Custom DB instances.
c. Choose the private VPC ID to use for RDS Custom DB instances.
d. Enter the route table associated with the private subnets.
e. Choose Next.
5. On the Configure stack options page, choose Next.
6. On the Review custom-vpc page, choose Submit.

CloudFormation configures your private VPC. In the left panel, when custom-vpc shows
CREATE_COMPLETE, proceed to the next step.
7. (Optional) Review the details of your VPC. In the Stacks pane, choose custom-vpc. In the right pane,
do the following:

a. Choose Stack info. Your stack has an ID in the format


arn:aws:cloudformation:region:account-no:stack/custom-vpc/identifier.
b. Choose Resources. You should see a subnet group named rds-custom-private and several VPC
endpoints that use the naming format vpce-string. Each endpoint corresponds to an AWS
service that RDS Custom needs to communicate with.
c. Choose Parameters. You should see the private subnets, private VPC, and the route table that
you specified when you created the stack. When you create a DB instance, you need to supply
the VPC ID and subnet group.
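
You can also review the same stack details from the AWS CLI. The following sketch lists the resources and parameters of the custom-vpc stack; the resource list includes the rds-custom-private subnet group and the VPC endpoints.

# List the resources that the custom-vpc stack created.
aws cloudformation describe-stack-resources \
--stack-name custom-vpc \
--query "StackResources[].[LogicalResourceId,ResourceType,PhysicalResourceId]" \
--output table

# Show the subnets, VPC, and route table that you supplied as parameters.
aws cloudformation describe-stacks \
--stack-name custom-vpc \
--query "Stacks[0].Parameters"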

Create your IAM role and instance profile manually


Configuration is easiest when you use CloudFormation. However, you can also configure IAM manually.
For manual setup, you create the following resources:

• An IAM instance profile named AWSRDSCustomInstanceProfile-region, where region is the AWS Region where you plan to deploy your DB instances.
• An IAM role AWSRDSCustomInstanceRole-region for the instance profile.

The following steps show how to create the instance profile and role and then add the role to your
profile.

To create the RDS Custom instance profile and add the necessary role to it

1. Create the IAM role that uses the naming format AWSRDSCustomInstanceRole-region with a
trust policy that Amazon EC2 can use to assume this role.
2. Add an access policy to AWSRDSCustomInstanceRole-region.
3. Create an IAM instance profile for RDS Custom that uses the naming format
AWSRDSCustomInstanceProfile-region.
4. Add the AWSRDSCustomInstanceRole-region IAM role to the instance profile.

Step 1: Create the role AWSRDSCustomInstanceRole-region


In this step, you create the role using the naming format AWSRDSCustomInstanceRole-region. Using
the trust policy, Amazon EC2 can assume the role. The following example assumes that you have set the
environment variable $REGION to the AWS Region in which you want to create your DB instance.

aws iam create-role \
--role-name AWSRDSCustomInstanceRole-$REGION \
--assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
}
}
]
}'

Step 2: Add an access policy to AWSRDSCustomInstanceRole-region


When you embed an inline policy in an IAM role, the inline policy is used as part of the role's access
(permissions) policy. You create the AWSRDSCustomIamRolePolicy policy that permits Amazon EC2 to
send and receive messages and perform various actions.

The following example creates the access policy named AWSRDSCustomIamRolePolicy, and adds it
to the IAM role AWSRDSCustomInstanceRole-region. This example assumes that you have set the
following environment variables:

$REGION

Set this variable to the AWS Region in which you plan to create your DB instance.
$ACCOUNT_ID

Set this variable to your AWS account number.


$KMS_KEY

Set this variable to the Amazon Resource Name (ARN) of the AWS KMS key that you want to use for
your RDS Custom DB instances. To specify more than one KMS key, add it to the Resources section
of statement ID (Sid) 11.

aws iam put-role-policy \
--role-name AWSRDSCustomInstanceRole-$REGION \
--policy-name AWSRDSCustomIamRolePolicy \
--policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "1",
"Effect": "Allow",
"Action": [
"ssm:DescribeAssociation",
"ssm:GetDeployablePatchSnapshotForInstance",
"ssm:GetDocument",
"ssm:DescribeDocument",
"ssm:GetManifest",
"ssm:GetParameter",
"ssm:GetParameters",
"ssm:ListAssociations",
"ssm:ListInstanceAssociations",
"ssm:PutInventory",
"ssm:PutComplianceItems",
"ssm:PutConfigurePackageResult",
"ssm:UpdateAssociationStatus",
"ssm:UpdateInstanceAssociationStatus",
"ssm:UpdateInstanceInformation",
"ssm:GetConnectionStatus",
"ssm:DescribeInstanceInformation",
"ssmmessages:CreateControlChannel",
"ssmmessages:CreateDataChannel",
"ssmmessages:OpenControlChannel",
"ssmmessages:OpenDataChannel"
],
"Resource": [
"*"
]
},
{
"Sid": "2",
"Effect": "Allow",
"Action": [
"ec2messages:AcknowledgeMessage",
"ec2messages:DeleteMessage",
"ec2messages:FailMessage",
"ec2messages:GetEndpoint",
"ec2messages:GetMessages",
"ec2messages:SendReply"
],
"Resource": [
"*"
]
},
{
"Sid": "3",
"Effect": "Allow",
"Action": [
"logs:PutRetentionPolicy",
"logs:PutLogEvents",
"logs:DescribeLogStreams",


"logs:DescribeLogGroups",
"logs:CreateLogStream",
"logs:CreateLogGroup"
],
"Resource": [
"arn:aws:logs:'$REGION':*:log-group:rds-custom-instance*"
]
},
{
"Sid": "4",
"Effect": "Allow",
"Action": [
"s3:putObject",
"s3:getObject",
"s3:getObjectVersion"
],
"Resource": [
"arn:aws:s3:::do-not-delete-rds-custom-*/*"
]
},
{
"Sid": "5",
"Effect": "Allow",
"Action": [
"cloudwatch:PutMetricData"
],
"Resource": [
"*"
],
"Condition": {
"StringEquals": {
"cloudwatch:namespace": [
"RDSCustomForOracle/Agent"
]
}
}
},
{
"Sid": "6",
"Effect": "Allow",
"Action": [
"events:PutEvents"
],
"Resource": [
"*"
]
},
{
"Sid": "7",
"Effect": "Allow",
"Action": [
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret"
],
"Resource": [
"arn:aws:secretsmanager:'$REGION':'$ACCOUNT_ID':secret:do-not-delete-rds-
custom-*"
]
},
{
"Sid": "8",
"Effect": "Allow",
"Action": [
"s3:ListBucketVersions"
],
"Resource": [


"arn:aws:s3:::do-not-delete-rds-custom-*"
]
},
{
"Sid": "9",
"Effect": "Allow",
"Action": "ec2:CreateSnapshots",
"Resource": [
"arn:aws:ec2:*:*:instance/*",
"arn:aws:ec2:*:*:volume/*"
],
"Condition": {
"StringEquals": {
"ec2:ResourceTag/AWSRDSCustom": "custom-oracle"
}
}
},
{
"Sid": "10",
"Effect": "Allow",
"Action": "ec2:CreateSnapshots",
"Resource": [
"arn:aws:ec2:*::snapshot/*"
]
},
{
"Sid": "11",
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:GenerateDataKey"
],
"Resource": [
"arn:aws:kms:'$REGION':'$ACCOUNT_ID':key/'$KMS_KEY'"
]
},
{
"Sid": "12",
"Effect": "Allow",
"Action": "ec2:CreateTags",
"Resource": "*",
"Condition": {
"StringLike": {
"ec2:CreateAction": [
"CreateSnapshots"
]
}
}
}
]
}'

Step 3: Create your RDS Custom instance profile

An instance profile is a container that includes a single IAM role. RDS Custom uses the instance profile to
pass the role to the instance.

If you use the CLI to create a role, you create the role and instance profile as separate actions, with
potentially different names. Create your IAM instance profile as follows, naming it using the format
AWSRDSCustomInstanceProfile-region. The following example assumes that you have set the
environment variable $REGION to the AWS Region in which you want to create your DB instance.

aws iam create-instance-profile \
--instance-profile-name AWSRDSCustomInstanceProfile-$REGION

Step 4: Add AWSRDSCustomInstanceRole-region to your RDS Custom instance profile

Add your IAM role to the instance profile that you previously created. The following example assumes
that you have set the environment variable $REGION to the AWS Region in which you want to create
your DB instance.

aws iam add-role-to-instance-profile \
--instance-profile-name AWSRDSCustomInstanceProfile-$REGION \
--role-name AWSRDSCustomInstanceRole-$REGION
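
To confirm that the role is attached to the instance profile, you can describe the profile. The output should list AWSRDSCustomInstanceRole-region under the Roles element.

# Verify that the instance profile contains the expected role.
aws iam get-instance-profile \
--instance-profile-name AWSRDSCustomInstanceProfile-$REGION \
--query "InstanceProfile.Roles[].RoleName"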

Configure your VPC manually


If you don't want to use AWS CloudFormation, you can configure your VPC endpoints manually.

Topics
• Create VPC endpoints for dependent AWS services (p. 1011)
• Configure the instance metadata service (p. 1012)

Create VPC endpoints for dependent AWS services

RDS Custom sends communication from your DB instance to other AWS services. To make sure that RDS
Custom can communicate, it validates network connectivity to the following AWS services:

• Amazon CloudWatch
• Amazon CloudWatch Logs
• Amazon CloudWatch Events
• Amazon EC2
• Amazon EventBridge
• Amazon S3
• AWS Secrets Manager
• AWS Systems Manager

If RDS Custom can't communicate with the necessary services, it publishes the following event:

Database instance in incompatible-network. SSM Agent connection not available. Amazon RDS
can't connect to the dependent AWS services.

To avoid incompatible-network errors, make sure that VPC components involved in communication
between your RDS Custom DB instance and AWS services satisfy the following requirements:

• The DB instance can make outbound connections on port 443 to other AWS services.
• The VPC allows incoming responses to requests originating from your RDS Custom DB instance.
• RDS Custom can correctly resolve the domain names of endpoints for each AWS service.

RDS Custom relies on AWS Systems Manager connectivity for its automation. For information about how
to configure VPC endpoints, see Creating VPC endpoints for Systems Manager. For the list of endpoints
in each Region, see AWS Systems Manager endpoints and quotas in the Amazon Web Services General
Reference.


If you already configured a VPC for a different RDS Custom DB engine, you can reuse that VPC and skip
this process.
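
For example, the following sketch creates the interface endpoint for AWS Systems Manager (the ssm service). The VPC, subnet, and security group IDs are placeholders; repeat the command for each service endpoint that your setup requires, and note that the Amazon S3 endpoint is commonly created as a Gateway endpoint instead.

# Create an interface endpoint for AWS Systems Manager in your VPC.
# Replace the placeholder IDs with your own VPC, subnet, and security group.
aws ec2 create-vpc-endpoint \
--vpc-id vpc-0abc1234def567890 \
--vpc-endpoint-type Interface \
--service-name com.amazonaws.us-east-1.ssm \
--subnet-ids subnet-0abc1234def567890 \
--security-group-ids sg-0abc1234def567890 \
--private-dns-enabled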

Configure the instance metadata service

Make sure that your instance can do the following:

• Access the instance metadata service using Instance Metadata Service Version 2 (IMDSv2).
• Allow outbound communications through port 80 (HTTP) to the IMDS link IP address.
• Request instance metadata from http://169.254.169.254, the IMDSv2 link.

For more information, see Use IMDSv2 in the Amazon EC2 User Guide for Linux Instances.

RDS Custom for Oracle automation uses IMDSv2 by default, by setting HttpTokens=enabled on the
underlying Amazon EC2 instance. However, you can use IMDSv1 if you want. For more information, see
Configure the instance metadata options in the Amazon EC2 User Guide for Linux Instances.
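
If you want to inspect the current metadata options, you can describe the underlying Amazon EC2 instance. The instance ID below is a placeholder for the EC2 instance that backs your RDS Custom DB instance.

# Show the instance metadata options, including HttpTokens and HttpEndpoint.
aws ec2 describe-instances \
--instance-ids i-0123456789abcdef0 \
--query "Reservations[].Instances[].MetadataOptions"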

Step 4: Grant required permissions to your IAM user or role


Make sure that the IAM user or role that creates the CEV or RDS Custom DB instance has either of the
following policies:

• The AdministratorAccess policy


• The AmazonRDSFullAccess policy, together with the additional permissions described in the following topics: permissions for Amazon S3 and AWS KMS (required for both CEV and DB instance creation), permissions for CEV creation, and permissions for DB instance creation.

Topics
• IAM permissions required for Amazon S3 and AWS KMS (p. 1012)
• IAM permissions required for creating a CEV (p. 1013)
• IAM permissions required for creating a DB instance from a CEV (p. 1013)

IAM permissions required for Amazon S3 and AWS KMS


To create CEVs or RDS Custom for Oracle DB instances, the IAM identity needs to access Amazon S3 and
AWS KMS. The following sample JSON policy grants the required permissions.

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "CreateS3Bucket",
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:PutBucketPolicy",
"s3:PutBucketObjectLockConfiguration",
"s3:PutBucketVersioning"
],
"Resource": "arn:aws:s3:::do-not-delete-rds-custom-*"
},
{
"Sid": "CreateKmsGrant",
"Effect": "Allow",
"Action": [
"kms:CreateGrant",
"kms:DescribeKey"


],
"Resource": "*"
}
]
}

For more information about the kms:CreateGrant permission, see AWS KMS key
management (p. 2589).
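
As a sketch, you can attach the preceding sample policy as an inline policy. The role name, policy name, and file name below are placeholders; if you use an IAM user instead of a role, aws iam put-user-policy takes the same form with --user-name.

# Attach the sample S3 and KMS permissions as an inline policy
# to the role that creates CEVs and RDS Custom DB instances.
aws iam put-role-policy \
--role-name my-rds-custom-admin \
--policy-name rds-custom-s3-kms-access \
--policy-document file://rds-custom-s3-kms.json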

IAM permissions required for creating a CEV


To create a CEV, the IAM identity needs the following additional permissions:

s3:GetObjectAcl
s3:GetObject
s3:GetObjectTagging
s3:ListBucket
mediaimport:CreateDatabaseBinarySnapshot

The following sample JSON policy grants the additional permissions necessary to access the bucket my-custom-installation-files and its contents.

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AccessToS3MediaBucket",
"Effect": "Allow",
"Action": [
"s3:GetObjectAcl",
"s3:GetObject",
"s3:GetObjectTagging",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::my-custom-installation-files",
"arn:aws:s3:::my-custom-installation-files/*"
]
},
{
"Sid": "PermissionForByom",
"Effect": "Allow",
"Action": [
"mediaimport:CreateDatabaseBinarySnapshot"
],
"Resource": "*"
}
]
}

You can grant similar permissions for Amazon S3 to caller accounts using an S3 bucket policy.

IAM permissions required for creating a DB instance from a CEV


To create an RDS Custom for Oracle DB instance from an existing CEV, the IAM entity needs the following
additional permissions.

iam:SimulatePrincipalPolicy
cloudtrail:CreateTrail
cloudtrail:StartLogging


The following sample JSON policy grants the permissions necessary to validate an IAM role and log
information to an AWS CloudTrail trail.

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ValidateIamRole",
"Effect": "Allow",
"Action": "iam:SimulatePrincipalPolicy",
"Resource": "*"
},
{
"Sid": "CreateCloudTrail",
"Effect": "Allow",
"Action": [
"cloudtrail:CreateTrail",
"cloudtrail:StartLogging"
],
"Resource": "arn:aws:cloudtrail:*:*:trail/do-not-delete-rds-custom-*"
}
]
}


Working with custom engine versions for Amazon RDS Custom for Oracle
A custom engine version (CEV) for Amazon RDS Custom for Oracle is a binary volume snapshot of a
database engine and specific Amazon Machine Image (AMI). By default, RDS Custom for Oracle uses the
latest available AMI managed by RDS Custom, but you can specify an AMI that was used in a previous
CEV. You store your database installation files in Amazon S3. RDS Custom uses the installation files and
the AMI to create your CEV for you.

Topics
• Preparing to create a CEV (p. 1015)
• Creating a CEV (p. 1026)
• Modifying CEV status (p. 1030)
• Viewing CEV details (p. 1031)
• Deleting a CEV (p. 1033)

Preparing to create a CEV


To create a CEV, access the installation files and patches that are stored in your Amazon S3 bucket for
any of the following releases:

• Oracle Database 19c


• Oracle Database 18c
• Oracle Database 12c Release 2 (12.2)
• Oracle Database 12c Release 1 (12.1)

For example, you can use the April 2021 RU/RUR for Oracle Database 19c, or any valid combination
of installation files and patches. For more information on the versions and Regions supported by RDS
Custom for Oracle, see RDS Custom with RDS for Oracle.

Topics
• Step 1 (Optional): Download the manifest templates (p. 1015)
• Step 2: Download your database installation files and patches from Oracle Software Delivery
Cloud (p. 1016)
• Step 3: Upload your installation files to Amazon S3 (p. 1017)
• Step 4 (Optional): Share your installation media in S3 across AWS accounts (p. 1018)
• Step 5: Prepare the CEV manifest (p. 1020)
• Step 6 (Optional): Validate the CEV manifest (p. 1026)
• Step 7: Add necessary IAM permissions (p. 1026)

Step 1 (Optional): Download the manifest templates


A CEV manifest is a JSON document that includes the list of database installation .zip files for your CEV.
To create a CEV, do the following:

1. Identify the Oracle database installation files that you want to include in your CEV.
2. Download the installation files.
3. Create a JSON manifest that lists the installation files.


RDS Custom for Oracle provides JSON manifest templates with our recommended .zip files for each
supported Oracle Database release. For example, the following template is for the 19.17.0.0.0 RU.

{
"mediaImportTemplateVersion": "2020-08-14",
"databaseInstallationFileNames": [
"V982063-01.zip"
],
"opatchFileNames": [
"p6880880_190000_Linux-x86-64.zip"
],
"psuRuPatchFileNames": [
"p34419443_190000_Linux-x86-64.zip",
"p34411846_190000_Linux-x86-64.zip"
],
"otherPatchFileNames": [
"p28852325_190000_Linux-x86-64.zip",
"p29997937_190000_Linux-x86-64.zip",
"p31335037_190000_Linux-x86-64.zip",
"p32327201_190000_Linux-x86-64.zip",
"p33613829_190000_Linux-x86-64.zip",
"p34006614_190000_Linux-x86-64.zip",
"p34533061_190000_Linux-x86-64.zip",
"p34533150_190000_Generic.zip",
"p28730253_190000_Linux-x86-64.zip",
"p29213893_1917000DBRU_Generic.zip",
"p33125873_1917000DBRU_Linux-x86-64.zip",
"p34446152_1917000DBRU_Linux-x86-64.zip"
]
}

Each template has an associated readme that includes instructions for downloading the patches, URLs
for the .zip files, and file checksums. You can use these templates as they are or modify them with
your own patches. To review the templates, download custom-oracle-manifest.zip to your local disk
and then open it with a file archiving application. For more information, see Step 5: Prepare the CEV
manifest (p. 1020).

Step 2: Download your database installation files and patches from Oracle
Software Delivery Cloud
When you have identified the installation files that you want for your CEV, download them to your local
system. The Oracle Database installation files and patches are hosted on Oracle Software Delivery Cloud.
Each CEV requires a base release, such as Oracle Database 19c or Oracle Database 12c Release 2 (12.2),
and an optional list of patches.

To download the database installation files for Oracle Database

1. Go to https://edelivery.oracle.com/ and sign in.


2. In the box, enter Oracle Database Enterprise Edition and choose Search.
3. Choose one of the following base releases:

• DLP: Oracle Database Enterprise Edition 19.3.0.0.0 (Oracle Database Enterprise Edition)
• DLP: Oracle Database 12c Enterprise Edition 18.0.0.0.0 (Oracle Database Enterprise Edition)
• DLP: Oracle Database 12c Enterprise Edition 12.2.0.1.0 (Oracle Database Enterprise Edition)
• DLP: Oracle Database 12c Enterprise Edition 12.1.0.2.0 (Oracle Database Enterprise Edition)
4. Choose Continue.


5. Clear the Download Queue check box.


6. Choose the option that corresponds to your base release:

• Oracle Database 19.3.0.0.0 - Long Term Release.


• Oracle Database 18.0.0.0.0
• Oracle Database 12.2.0.1.0.
• Oracle Database 12.1.0.2.0.
7. Choose Linux x86-64 in Platform/Languages.
8. Choose Continue, and then sign the waiver.
9. Choose the .zip file that corresponds to your database release:

• 19c: V982063-01.zip (SHA-256 hash: BA8329C757133DA313ED3B6D7F86C5AC42CD9970A28BF2E6233F3235233AA8D)
• 18c: V978967-01.zip (SHA-256 hash: C96A4FD768787AF98272008833FE10B172691CF84E42816B138C12D4DE63AB9)
• 12.2: V839960-01.zip (SHA-256 hash: 96ED97D21F15C1AC0CCE3749DA6C3DAC7059BB60672D76B008103FC754D22DD)
• 12.1: V46095-01_1of2.zip (SHA-256 hash: 31FDC2AF41687B4E547A3A18F796424D8C1AF36406D2160F65B0AF6A9CD4735) and V46095-01_2of2.zip (SHA-256 hash: 03DA14F5E875304B28F0F3BB02AF0EC33227885B99C9865DF70749D1E220ACC)

You can verify the downloaded files against these hashes; see the example after this procedure.
10. Download your desired Oracle patches from updates.oracle.com or support.oracle.com to
your local system. You can find the URLs for the patches in the following locations:

• The readme files in the .zip file that you downloaded in Step 1 (Optional): Download the manifest
templates (p. 1015)
• The patches listed in each Release Update (RU) in Release notes for Amazon Relational Database
Service (Amazon RDS) for Oracle
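
After downloading, you can compare each file against the SHA-256 hash listed in step 9. For example, on Linux you might run the following; on macOS, shasum -a 256 produces the same kind of output.

# Compute the SHA-256 checksum of a downloaded installation file.
sha256sum V982063-01.zip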

Step 3: Upload your installation files to Amazon S3


Upload your Oracle installation and patch files to Amazon S3 using the AWS CLI. The S3 bucket that
contains your installation files must be in the same AWS Region as your CEV.

Choose either of the following options:

• Use aws s3 cp to upload a single .zip file.

Upload each installation .zip file separately. Don't combine the .zip files into a single .zip file.
• Use aws s3 sync to upload a directory.

List your installation files using either the AWS Management Console or the AWS CLI.

Examples in this section use the following placeholders:

• install-or-patch-file.zip – Oracle installation media file. For example, p32126828_190000_Linux-x86-64.zip is a patch.
• my-custom-installation-files – Your Amazon S3 bucket designated for your uploaded
installation files.
• 123456789012/cev1 – An optional prefix in your Amazon S3 bucket.


• source-bucket – An Amazon S3 bucket where you can optionally stage files.

The following example uploads install-or-patch-file.zip to the 123456789012/cev1 folder in your Amazon S3 bucket. Run a separate aws s3 cp command for each .zip file that you want to upload.

For Linux, macOS, or Unix:

aws s3 cp install-or-patch-file.zip \
s3://my-custom-installation-files/123456789012/cev1/

For Windows:

aws s3 cp install-or-patch-file.zip ^
s3://my-custom-installation-files/123456789012/cev1/

Verify that your S3 bucket is in the AWS Region where you plan to run the create-custom-db-
engine-version command.

aws s3api get-bucket-location --bucket my-custom-installation-files

List the files in your RDS Custom Amazon S3 bucket as follows.

aws s3 ls \
s3://my-custom-installation-files/123456789012/cev1/

The following example uploads the files in your local cev1 folder to the 123456789012/cev1 folder in
your Amazon S3 bucket.

For Linux, macOS, or Unix:

aws s3 sync cev1 \
s3://my-custom-installation-files/123456789012/cev1/

For Windows:

aws s3 sync cev1 ^
s3://my-custom-installation-files/123456789012/cev1/

The following example uploads all files in source-bucket to the 123456789012/cev1 folder in your
Amazon S3 bucket.

For Linux, macOS, or Unix:

aws s3 sync s3://source-bucket/ \
s3://my-custom-installation-files/123456789012/cev1/

For Windows:

aws s3 sync s3://source-bucket/ ^
s3://my-custom-installation-files/123456789012/cev1/

Step 4 (Optional): Share your installation media in S3 across AWS accounts


For the purposes of this section, the Amazon S3 bucket that contains your uploaded Oracle installation
files is your media bucket. Your organization might use multiple AWS accounts in an AWS Region. If so,
you might want to use one AWS account to populate your media bucket and a different AWS account to
create CEVs. If you don't intend to share your media bucket, skip to the next section.

This section assumes the following:

• You can access the account that created your media bucket and a different account in which you intend
to create CEVs.
• You intend to create CEVs in only one AWS Region. If you intend to use multiple Regions, create a
media bucket in each Region.
• You're using the CLI. If you're using the Amazon S3 console, adapt the following steps.

To configure your media bucket for sharing across AWS accounts

1. Log in to the AWS account that contains the S3 bucket into which you uploaded your installation
media.
2. Start with either a blank JSON policy template or an existing policy that you can adapt.

The following command retrieves an existing policy and saves it as my-policy.json. In this
example, the S3 bucket containing your installation files is named oracle-media-bucket.

aws s3api get-bucket-policy \
--bucket oracle-media-bucket \
--query Policy \
--output text > my-policy.json

3. Edit the media bucket permissions as follows:

• In the Resource element of your template, specify the S3 bucket into which you uploaded your
Oracle Database installation files.
• In the Principal element, specify the ARNs for all AWS accounts that you intend to use to create
CEVs. You can add the root, a user, or a role to the S3 bucket allow list. For more information, see
IAM identifiers in the AWS Identity and Access Management User Guide.

{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "GrantAccountsAccess",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::account-1:root",
"arn:aws:iam::account-2:user/user-name-with-path",
"arn:aws:iam::account-3:role/role-name-with-path",
...
]
},
"Action": [
"s3:GetObject",
"s3:GetObjectAcl",
"s3:GetObjectTagging",
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::oracle-media-bucket",
"arn:aws:s3:::oracle-media-bucket/*"
]
}


]
}

4. Attach the policy to your media bucket.

In the following example, oracle-media-bucket is the name of the S3 bucket that contains your
installation files, and my-policy.json is the name of your JSON file.

aws s3api put-bucket-policy \
--bucket oracle-media-bucket \
--policy file://my-policy.json

5. Log in to an AWS account in which you intend to create CEVs.


6. Verify that this account can access the media bucket in the AWS account that created it.

aws s3 ls s3://oracle-media-bucket/

For more information, see aws s3 ls in the AWS CLI Command Reference.
7. Create a CEV by following the steps in Creating a CEV (p. 1026).

Step 5: Prepare the CEV manifest


A CEV manifest is a JSON document that includes the following:

• (Required) The list of installation .zip files that you uploaded to Amazon S3. RDS Custom applies the
patches in the order in which they're listed in the manifest.
• (Optional) Installation parameters that set nondefault values for the Oracle base, Oracle home, and
the ID and name of the UNIX/Linux user and group. Be aware that you can’t modify the installation
parameters for an existing CEV or an existing DB instance. You also can’t upgrade from one CEV to
another CEV when the installation parameters have different settings.

For sample CEV manifests, see the JSON templates that you downloaded in Step 1 (Optional): Download
the manifest templates (p. 1015). You can also review the samples in CEV manifest examples (p. 1023).

Topics
• JSON fields in the CEV manifest (p. 1020)
• Creating the CEV manifest (p. 1023)
• CEV manifest examples (p. 1023)

JSON fields in the CEV manifest

The following list describes the JSON fields in the manifest.

mediaImportTemplateVersion

Version of the CEV manifest. The date is in the format YYYY-MM-DD.

databaseInstallationFileNames

Ordered list of installation files for the database.

opatchFileNames

Ordered list of OPatch installers used for the Oracle DB engine. Only one value is valid. Values for opatchFileNames must start with p6880880_.

psuRuPatchFileNames

The PSU and RU patches for this database.
Important
If you include psuRuPatchFileNames, opatchFileNames is required. Values for opatchFileNames must start with p6880880_.

otherPatchFileNames

The patches that aren't in the list of PSU and RU patches. RDS Custom applies these patches after applying the PSU and RU patches.
Important
If you include otherPatchFileNames, opatchFileNames is required. Values for opatchFileNames must start with p6880880_.

installationParameters

Nondefault settings for the Oracle base, Oracle home, and the ID and name of the UNIX/Linux user and group. You can set the following parameters:

oracleBase

The directory under which your Oracle binaries are installed. It is the mount point of the binary volume that stores your files. The Oracle base directory can include multiple Oracle homes. For example, if /home/oracle/oracle.19.0.0.0.ru-2020-04.rur-2020-04.r1.EE.1 is one of your Oracle home directories, then /home/oracle is the Oracle base directory. A user-specified Oracle base directory is not a symbolic link.

If you don't specify the Oracle base, the default directory is /rdsdbbin.

oracleHome

The directory in which your Oracle database binaries are installed. For example, if you specify /home/oracle/ as your Oracle base, then you might specify /home/oracle/oracle.19.0.0.0.ru-2020-04.rur-2020-04.r1.EE.1/ as your Oracle home. A user-specified Oracle home directory is not a symbolic link. The Oracle home value is referenced by the $ORACLE_HOME environment variable.

If you don't specify the Oracle home, the default naming format is /rdsdbbin/oracle.major-engine-version.custom.r1.engine-edition.1.

unixUname

The name of the UNIX user that owns the Oracle software. RDS Custom assumes this user when running local database commands. If you specify both unixUid and unixUname, RDS Custom creates the user if it doesn't exist, and then assigns the UID to the user if it's not the same as the initial UID.

The default user name is rdsdb.

unixUid

The ID (UID) of the UNIX user that owns the Oracle software. If you specify both unixUid and unixUname, RDS Custom creates the user if it doesn't exist, and then assigns the UID to the user if it's not the same as the initial UID.

The default UID is 61001. This is the UID of the user rdsdb.

unixGroupName

The name of the UNIX group. The UNIX user that owns the Oracle software belongs to this group.

The default group name is rdsdb.

unixGroupId

The ID of the UNIX group to which the UNIX user belongs.

The default group ID is 1000. This is the ID of the group rdsdb.


Each Oracle Database release has a different list of supported installation files. When you create your
CEV manifest, make sure to specify only files that are supported by RDS Custom for Oracle. Otherwise,
CEV creation fails with an error. All patches listed in Release notes for Amazon Relational Database
Service (Amazon RDS) for Oracle are supported.

Creating the CEV manifest

To create a CEV manifest

1. List all installation files that you plan to apply, in the order that you want to apply them.
2. Correlate the installation files with the JSON fields described in JSON fields in the CEV
manifest (p. 1020).
3. Do either of the following:

• Create the CEV manifest as a JSON text file.


• Edit the CEV manifest template when you create the CEV in the console. For more information, see
Creating a CEV (p. 1026).

CEV manifest examples

The following examples show CEV manifest files for different Oracle Database releases. If you include a
JSON field in your manifest, make sure that it isn't empty. For example, the following CEV manifest isn't
valid because otherPatchFileNames is empty.

{
"mediaImportTemplateVersion": "2020-08-14",
"databaseInstallationFileNames": [
"V982063-01.zip"
],
"opatchFileNames": [
"p6880880_190000_Linux-x86-64.zip"
],
"psuRuPatchFileNames": [
"p32126828_190000_Linux-x86-64.zip"
],
"otherPatchFileNames": [
]
}

Topics

• Sample CEV manifest for Oracle Database 12c Release 1 (12.1) (p. 1023)
• Sample CEV manifest for Oracle Database 12c Release 2 (12.2) (p. 1024)
• Sample CEV manifest for Oracle Database 18c (p. 1025)
• Sample CEV manifest for Oracle Database 19c (p. 1026)

Example Sample CEV manifest for Oracle Database 12c Release 1 (12.1)

In the following example for the July 2021 PSU for Oracle Database 12c Release 1 (12.1), RDS Custom
applies the patches in the order specified. Thus, RDS Custom applies p32768233, then p32876425, then
p18759211, and so on. The example sets new values for the UNIX user and group, and the Oracle home
and Oracle base.

{
"mediaImportTemplateVersion":"2020-08-14",


"databaseInstallationFileNames":[
"V46095-01_1of2.zip",
"V46095-01_2of2.zip"
],
"opatchFileNames":[
"p6880880_121010_Linux-x86-64.zip"
],
"psuRuPatchFileNames":[
"p32768233_121020_Linux-x86-64.zip"
],
"otherPatchFileNames":[
"p32876425_121020_Linux-x86-64.zip",
"p18759211_121020_Linux-x86-64.zip",
"p19396455_121020_Linux-x86-64.zip",
"p20875898_121020_Linux-x86-64.zip",
"p22037014_121020_Linux-x86-64.zip",
"p22873635_121020_Linux-x86-64.zip",
"p23614158_121020_Linux-x86-64.zip",
"p24701840_121020_Linux-x86-64.zip",
"p25881255_121020_Linux-x86-64.zip",
"p27015449_121020_Linux-x86-64.zip",
"p28125601_121020_Linux-x86-64.zip",
"p28852325_121020_Linux-x86-64.zip",
"p29997937_121020_Linux-x86-64.zip",
"p31335037_121020_Linux-x86-64.zip",
"p32327201_121020_Linux-x86-64.zip",
"p32327208_121020_Generic.zip",
"p17969866_12102210119_Linux-x86-64.zip",
"p20394750_12102210119_Linux-x86-64.zip",
"p24835919_121020_Linux-x86-64.zip",
"p23262847_12102201020_Linux-x86-64.zip",
"p21171382_12102201020_Generic.zip",
"p21091901_12102210720_Linux-x86-64.zip",
"p33013352_12102210720_Linux-x86-64.zip",
"p25031502_12102210720_Linux-x86-64.zip",
"p23711335_12102191015_Generic.zip",
"p19504946_121020_Linux-x86-64.zip"
],
"installationParameters": {
"unixGroupName": "dba",
"unixGroupId": 12345,
"unixUname": "oracle",
"unixUid": 12345,
"oracleHome": "/home/oracle/oracle.12.1.0.2",
"oracleBase": "/home/oracle"
}
}

Example Sample CEV manifest for Oracle Database 12c Release 2 (12.2)

In the following example for the October 2021 PSU for Oracle Database 12c Release 2 (12.2), RDS Custom
applies p33261817, then p33192662, then p29213893, and so on. The example sets new values for the
UNIX user and group, and the Oracle home and Oracle base.

{
"mediaImportTemplateVersion":"2020-08-14",
"databaseInstallationFileNames":[
"V839960-01.zip"
],
"opatchFileNames":[
"p6880880_122010_Linux-x86-64.zip"
],
"psuRuPatchFileNames":[
"p33261817_122010_Linux-x86-64.zip"


],
"otherPatchFileNames":[
"p33192662_122010_Linux-x86-64.zip",
"p29213893_122010_Generic.zip",
"p28730253_122010_Linux-x86-64.zip",
"p26352615_12201211019DBOCT2021RU_Linux-x86-64.zip",
"p23614158_122010_Linux-x86-64.zip",
"p24701840_122010_Linux-x86-64.zip",
"p25173124_122010_Linux-x86-64.zip",
"p25881255_122010_Linux-x86-64.zip",
"p27015449_122010_Linux-x86-64.zip",
"p28125601_122010_Linux-x86-64.zip",
"p28852325_122010_Linux-x86-64.zip",
"p29997937_122010_Linux-x86-64.zip",
"p31335037_122010_Linux-x86-64.zip",
"p32327201_122010_Linux-x86-64.zip",
"p32327208_122010_Generic.zip"
],
"installationParameters": {
"unixGroupName": "dba",
"unixGroupId": 12345,
"unixUname": "oracle",
"unixUid": 12345,
"oracleHome": "/home/oracle/oracle.12.2.0.1",
"oracleBase": "/home/oracle"
}
}

Example Sample CEV manifest for Oracle Database 18c

In the following example for the October 2021 PSU for Oracle Database 18c, RDS Custom applies
p32126855, then p28730253, then p27539475, and so on. The example sets new values for the UNIX
user and group, and the Oracle home and Oracle base.

{
"mediaImportTemplateVersion":"2020-08-14",
"databaseInstallationFileNames":[
"V978967-01.zip"
],
"opatchFileNames":[
"p6880880_180000_Linux-x86-64.zip"
],
"psuRuPatchFileNames":[
"p32126855_180000_Linux-x86-64.zip"
],
"otherPatchFileNames":[
"p28730253_180000_Linux-x86-64.zip",
"p27539475_1813000DBRU_Linux-x86-64.zip",
"p29213893_180000_Generic.zip",
"p29374604_1813000DBRU_Linux-x86-64.zip",
"p29782284_180000_Generic.zip",
"p28125601_180000_Linux-x86-64.zip",
"p28852325_180000_Linux-x86-64.zip",
"p29997937_180000_Linux-x86-64.zip",
"p31335037_180000_Linux-x86-64.zip",
"p31335142_180000_Generic.zip"
],
"installationParameters": {
"unixGroupName": "dba",
"unixGroupId": 12345,
"unixUname": "oracle",
"unixUid": 12345,
"oracleHome": "/home/oracle/18.0.0.0.ru-2020-10.rur-2020-10.r1",
"oracleBase": "/home/oracle/"


}
}

Example Sample CEV manifest for Oracle Database 19c

In the following example for Oracle Database 19c, RDS Custom applies p32126828, then p29213893,
then p29782284, and so on. The example sets new values for the UNIX user and group, and the Oracle
home and Oracle base.

{
"mediaImportTemplateVersion": "2020-08-14",
"databaseInstallationFileNames": [
"V982063-01.zip"
],
"opatchFileNames": [
"p6880880_190000_Linux-x86-64.zip"
],
"psuRuPatchFileNames": [
"p32126828_190000_Linux-x86-64.zip"
],
"otherPatchFileNames": [
"p29213893_1910000DBRU_Generic.zip",
"p29782284_1910000DBRU_Generic.zip",
"p28730253_190000_Linux-x86-64.zip",
"p29374604_1910000DBRU_Linux-x86-64.zip",
"p28852325_190000_Linux-x86-64.zip",
"p29997937_190000_Linux-x86-64.zip",
"p31335037_190000_Linux-x86-64.zip",
"p31335142_190000_Generic.zip"
],
"installationParameters": {
"unixGroupName": "dba",
"unixGroupId": 12345,
"unixUname": "oracle",
"unixUid": 12345,
"oracleHome": "/home/oracle/oracle.19.0.0.0.ru-2020-04.rur-2020-04.r1.EE.1",
"oracleBase": "/home/oracle"
}
}

Step 6 (Optional): Validate the CEV manifest


Optionally, verify that manifest is a valid JSON file by running the json.tool Python script. For
example, if you change into the directory containing a CEV manifest named manifest.json, run the
following command.

python -m json.tool < manifest.json

Step 7: Add necessary IAM permissions


Make sure that the IAM principal that creates the CEV has the necessary policies described in Step 4:
Grant required permissions to your IAM user or role (p. 1012).

Creating a CEV
You can create a CEV using the AWS Management Console or the AWS CLI. Specify either the
multitenant or non-multitenant architecture. For more information, see Multitenant architecture
considerations (p. 1035).


Make sure that the Amazon S3 bucket containing your installation files is in the same AWS Region as
your CEV. Otherwise, the process to create a CEV fails.

Typically, creating a CEV takes about two hours. After the CEV is created, you can use it to create
an RDS Custom DB instance. For more information, see Creating an RDS Custom for Oracle DB
instance (p. 1035).

Console

To create a CEV

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Custom engine versions.

The Custom engine versions page shows all CEVs that currently exist. If you haven't created any
CEVs, the page is empty.
3. Choose Create custom engine version.
4. In Engine options, do the following:

a. For Engine type, choose Oracle.


b. For Architecture settings, optionally choose Multitenant architecture to create a Multitenant
CEV, which uses the engine custom-oracle-ee-cdb. You can create an RDS Custom for
Oracle CDB with a Multitenant CEV only. If you don't choose this option, your CEV is a non-CDB,
which uses the engine custom-oracle-ee.
Note
The architecture that you choose is a permanent characteristic of your CEV. You can't
modify your CEV to use a different architecture later.
c. For Engine version, choose the major engine version.
5. In Version details, do the following:

a. Enter a valid name in Custom engine version name.

The name format is major-engine-version.customized_string. You can use 1–50 alphanumeric characters, underscores, dashes, and periods. For example, you might enter the name 19.cdb_cev1.
b. (Optional) Enter a description for your CEV.
6. In Installation media, do the following:

a. (Optional) For AMI ID, enter an AMI that you previously used to create a CEV. To obtain valid
AMI IDs, use either of the following techniques:

• In the console, choose Custom engine versions in the left navigation pane, and choose the
name of a CEV. The AMI ID used by the CEV appears in the Configuration tab.
• In the AWS CLI, use the describe-db-engine-versions command. Search the output for
ImageID.

If you don't enter an AMI ID, RDS Custom uses the most recent available AMI.
b. For S3 location of manifest files, enter the location of the Amazon S3 bucket that you specified
in Step 3: Upload your installation files to Amazon S3 (p. 1017). For example, enter s3://my-
custom-installation-files/806242271698/cev1/.
c. For CEV manifest, enter the JSON manifest that you created in Creating the CEV
manifest (p. 1023).
7. In the KMS key section, select Enter a key ARN to list the available AWS KMS keys. Then select your
KMS key from the list.


An AWS KMS key is required for RDS Custom. For more information, see Step 1: Create or reuse a
symmetric encryption AWS KMS key (p. 1003).
8. (Optional) Choose Add new tag to create a key-value pair for your CEV.
9. Choose Create custom engine version.

If the CEV manifest has an invalid form, the console displays Error validating the CEV manifest. Fix
the problems, and try again.

The Custom engine versions page appears. Your CEV is shown with the status Creating. The process to
create the CEV takes approximately two hours.

AWS CLI

To create a CEV by using the AWS CLI, run the create-custom-db-engine-version command.

The following options are required:

• --engine engine-type, where engine-type is either custom-oracle-ee-cdb for a CDB or custom-oracle-ee for a non-CDB. You can create CDBs only from a CEV created with custom-oracle-ee-cdb. You can create non-CDBs only from a CEV created with custom-oracle-ee.
• --engine-version major-engine-version.customized_string
• --kms-key-id
• --manifest manifest_string or --manifest file:file_name

Newline characters aren't permitted in manifest_string. Make sure to escape double quotes (") in
the JSON code by prefixing them with a backslash (\).

The following example shows the manifest_string for 19c from Step 5: Prepare the CEV
manifest (p. 1020). The example sets new values for the Oracle base, Oracle home, and the ID and
name of the UNIX/Linux user and group. If you copy this string, remove all newline characters before
pasting it into your command.

"{\"mediaImportTemplateVersion\": \"2020-08-14\",
\"databaseInstallationFileNames\": [\"V982063-01.zip\"],
\"opatchFileNames\": [\"p6880880_190000_Linux-x86-64.zip\"],
\"psuRuPatchFileNames\": [\"p32126828_190000_Linux-x86-64.zip\"],
\"otherPatchFileNames\": [\"p29213893_1910000DBRU_Generic.zip\",
\"p29782284_1910000DBRU_Generic.zip\",\"p28730253_190000_Linux-x86-64.zip
\",\"p29374604_1910000DBRU_Linux-x86-64.zip\",\"p28852325_190000_Linux-
x86-64.zip\",\"p29997937_190000_Linux-x86-64.zip\",\"p31335037_190000_Linux-
x86-64.zip\",\"p31335142_190000_Generic.zip\"]\"installationParameters\":
{ \"unixGroupName\":\"dba\", \ \"unixUname\":\"oracle\", \ \"oracleHome\":\"/
home/oracle/oracle.19.0.0.0.ru-2020-04.rur-2020-04.r1.EE.1\", \ \"oracleBase
\":\"/home/oracle/\"}}"
• --database-installation-files-s3-bucket-name s3-bucket-name, where s3-bucket-
name is the bucket name that you specified in Step 3: Upload your installation files to Amazon
S3 (p. 1017). The AWS Region in which you run create-custom-db-engine-version must be the
same Region as your Amazon S3 bucket.

You can also specify the following options:

• --description my-cev-description
• --database-installation-files-s3-prefix prefix, where prefix is the folder name that
you specified in Step 3: Upload your installation files to Amazon S3 (p. 1017).


• --image-id ami-id, where ami-id is an AMI ID that you want to reuse. To find valid IDs, run the
describe-db-engine-versions command, and then search the output for ImageID. By default,
RDS Custom for Oracle uses the most recent available AMI.

The following example creates a Multitenant CEV named 19.cdb_cev1. The example reuses an existing
AMI rather than using the latest available AMI. Make sure that the name of your CEV starts with the major
engine version number.

Example

For Linux, macOS, or Unix:

aws rds create-custom-db-engine-version \
--engine custom-oracle-ee-cdb \
--engine-version 19.cdb_cev1 \
--database-installation-files-s3-bucket-name us-east-1-123456789012-custom-installation-files \
--database-installation-files-s3-prefix 123456789012/cev1 \
--kms-key-id my-kms-key \
--description "test cev" \
--manifest manifest_string \
--image-id ami-012a345678901bcde

For Windows:

aws rds create-custom-db-engine-version ^
--engine custom-oracle-ee-cdb ^
--engine-version 19.cdb_cev1 ^
--database-installation-files-s3-bucket-name us-east-1-123456789012-custom-installation-files ^
--database-installation-files-s3-prefix 123456789012/cev1 ^
--kms-key-id my-kms-key ^
--description "test cev" ^
--manifest manifest_string ^
--image-id ami-012a345678901bcde

Example

Get details about your CEV by using the describe-db-engine-versions command.

aws rds describe-db-engine-versions \
--engine custom-oracle-ee-cdb \
--include-all

The following partial sample output shows the engine, parameter groups, manifest, and other
information.

{
"DBEngineVersions": [
{
"Engine": "custom-oracle-ee-cdb",
"EngineVersion": "19.cdb_cev1",
"DBParameterGroupFamily": "custom-oracle-ee-cdb-19",
"DBEngineDescription": "Containerized Database for Oracle Custom EE",
"DBEngineVersionDescription": "test cev",
"Image": {
"ImageId": "ami-012a345678901bcde",
"Status": "active"
},


"ValidUpgradeTarget": [],
"SupportsLogExportsToCloudwatchLogs": false,
"SupportsReadReplica": true,
"SupportedFeatureNames": [],
"Status": "available",
"SupportsParallelQuery": false,
"SupportsGlobalDatabases": false,
"MajorEngineVersion": "19",
"DatabaseInstallationFilesS3BucketName": "us-east-1-123456789012-custom-
installation-files",
"DatabaseInstallationFilesS3Prefix": "123456789012/cev1",
"DBEngineVersionArn": "arn:aws:rds:us-east-1:123456789012:cev:custom-oracle-ee-
cdb/19.cdb_cev1/abcd12e3-4f5g-67h8-i9j0-k1234l56m789",
"KMSKeyId": "arn:aws:kms:us-
east-1:732027699161:key/1ab2345c-6d78-9ef0-1gh2-3456i7j89k01",
"CreateTime": "2023-03-07T19:47:58.131000+00:00",
"TagList": [],
"SupportsBabelfish": false,
...

Failure to create a CEV


If the process to create a CEV fails, RDS Custom issues RDS-EVENT-0198 with the message Creation
failed for custom engine version major-engine-version.cev_name, and includes details
about the failure. For example, the event prints missing files.

You can't modify a failed CEV. You can only delete it, then try again to create a CEV after fixing the
causes of the failure. For information about troubleshooting the reasons for CEV creation failure, see
Troubleshooting custom engine version creation for RDS Custom for Oracle (p. 1079).
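
To check the status of a CEV while it's being created, or after a failure, you can query it directly. The following sketch reuses the example engine and CEV name from earlier in this section; use custom-oracle-ee for a non-CDB CEV. The --include-all option is needed to return CEVs that aren't in the available state.

# Check the lifecycle status of a CEV.
aws rds describe-db-engine-versions \
--engine custom-oracle-ee-cdb \
--engine-version 19.cdb_cev1 \
--include-all \
--query "DBEngineVersions[].[EngineVersion,Status]" \
--output table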

Modifying CEV status


You can modify a CEV using the AWS Management Console or the AWS CLI. You can modify the CEV
description or its availability status. Your CEV has one of the following status values:

• available – You can use this CEV to create a new RDS Custom DB instance or upgrade a DB instance.
This is the default status for a newly created CEV.
• inactive – You can't create or upgrade an RDS Custom instance with this CEV. You can't restore a DB
snapshot to create a new RDS Custom DB instance with this CEV.

You can change the CEV from any supported status to any other supported status. You might change
status to prevent the accidental use of a CEV or make a discontinued CEV eligible for use again. For
example, you might change the status of your CEV from available to inactive, and from inactive
back to available.

Console

To modify a CEV

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Custom engine versions.
3. Choose a CEV whose description or status you want to modify.
4. For Actions, choose Modify.
5. Make any of the following changes:

• For CEV status settings, choose a new availability status.


• For Version description, enter a new description.


6. Choose Modify CEV.

If the CEV is in use, the console displays You can't modify the CEV status. Fix the problems, and try
again.

The Custom engine versions page appears.

AWS CLI

To modify a CEV by using the AWS CLI, run the modify-custom-db-engine-version command. You can
find CEVs to modify by running the describe-db-engine-versions command.

The following options are required:

• --engine custom-oracle-ee
• --engine-version cev, where cev is the name of the custom engine version that you want to
modify
• --status status, where status is the availability status that you want to assign to the CEV

The following example changes a CEV named 19.my_cev1 from its current status to inactive.

Example

For Linux, macOS, or Unix:

aws rds modify-custom-db-engine-version \
--engine custom-oracle-ee \
--engine-version 19.my_cev1 \
--status inactive

For Windows:

aws rds modify-custom-db-engine-version ^
--engine custom-oracle-ee ^
--engine-version 19.my_cev1 ^
--status inactive

Viewing CEV details


You can view details about your CEV manifest and the command used to create your CEV by using the
AWS Management Console or the AWS CLI.

Console

To view CEV details

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Custom engine versions.

The Custom engine versions page shows all CEVs that currently exist. If you haven't created any
CEVs, the page is empty.
3. Choose the name of the CEV that you want to view.


4. Choose Configuration to view the installation parameters specified in your manifest.

5. Choose Manifest to view the installation parameters specified in the --manifest option of the
create-custom-db-engine-version command. You can copy this text, replace values as
needed, and use it in a new command.

AWS CLI

To view details about a CEV by using the AWS CLI, run the describe-db-engine-versions command.

The following options are required:

• --engine custom-oracle-ee
• --engine-version major-engine-version.customized_string

The following example shows details for a CEV named 19.my_cev1.

Example

For Linux, macOS, or Unix:


aws rds describe-db-engine-versions \
--engine custom-oracle-ee \
--engine-version 19.my_cev1

For Windows:

aws rds describe-db-engine-versions ^
--engine custom-oracle-ee ^
--engine-version 19.my_cev1

The following partial sample output shows the engine, parameter groups, manifest, and other
information.

"DBEngineVersions": [
{
"Engine": "custom-oracle-ee",
"MajorEngineVersion": "19",
"EngineVersion": "19.my_cev1",
"DatabaseInstallationFilesS3BucketName": "us-east-1-123456789012-cev-customer-
installation-files",
"DatabaseInstallationFilesS3Prefix": "123456789012/cev1",
"CustomDBEngineVersionManifest": "{\n\"mediaImportTemplateVersion\":
\"2020-08-14\",\n\"databaseInstallationFileNames\": [\n\"V982063-01.zip\"\n],\n
\"installationParameters\": {\n\"oracleBase\":\"/tmp\",\n\"oracleHome\":\"/tmp/Oracle\"\n},
\n\"opatchFileNames\": [\n\"p6880880_190000_Linux-x86-64.zip\"\n],\n\"psuRuPatchFileNames
\": [\n\"p32126828_190000_Linux-x86-64.zip\"\n],\n\"otherPatchFileNames\": [\n
\"p29213893_1910000DBRU_Generic.zip\",\n\"p29782284_1910000DBRU_Generic.zip\",\n
\"p28730253_190000_Linux-x86-64.zip\",\n\"p29374604_1910000DBRU_Linux-x86-64.zip\",
\n\"p28852325_190000_Linux-x86-64.zip\",\n\"p29997937_190000_Linux-x86-64.zip\",\n
\"p31335037_190000_Linux-x86-64.zip\",\n\"p31335142_190000_Generic.zip\"\n]\n}\n",
"DBParameterGroupFamily": "custom-oracle-ee-19",
"DBEngineDescription": "Oracle Database server EE for RDS Custom",
"DBEngineVersionArn": "arn:aws:rds:us-west-2:123456789012:cev:custom-oracle-
ee/19.my_cev1/0a123b45-6c78-901d-23e4-5678f901fg23",
"DBEngineVersionDescription": "test",
"KMSKeyId": "arn:aws:kms:us-east-1:123456789012:key/ab1c2de3-f4g5-6789-h012-
h3ijk4567l89",
"CreateTime": "2022-11-18T09:17:07.693000+00:00",
"ValidUpgradeTarget": [
{
"Engine": "custom-oracle-ee",
"EngineVersion": "19.cev.2021-01.09",
"Description": "test",
"AutoUpgrade": false,
"IsMajorVersionUpgrade": false
}
]

Deleting a CEV
You can delete a CEV using the AWS Management Console or the AWS CLI. Typically, deletion takes a few
minutes.

To delete a CEV, it can't be in use by any of the following:

• An RDS Custom DB instance


• A snapshot of an RDS Custom DB instance
• An automated backup of your RDS Custom DB instance
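
Before you delete a CEV, you might want to confirm that no DB instances still use it. The following query is a minimal sketch that lists DB instances running a CEV named 19.my_cev1; it checks DB instances only, not snapshots or automated backups.

aws rds describe-db-instances \
--query "DBInstances[?Engine=='custom-oracle-ee' && EngineVersion=='19.my_cev1'].DBInstanceIdentifier" \
--output text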


Console

To delete a CEV

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Custom engine versions.
3. Choose the CEV that you want to delete.
4. For Actions, choose Delete.

The Delete cev_name? dialog box appears.


5. Enter delete me, and then choose Delete.

In the Custom engine versions page, the banner shows that your CEV is being deleted.

AWS CLI

To delete a CEV by using the AWS CLI, run the delete-custom-db-engine-version command.

The following options are required:

• --engine custom-oracle-ee
• --engine-version cev, where cev is the name of the custom engine version to be deleted

The following example deletes a CEV named 19.my_cev1.

Example

For Linux, macOS, or Unix:

aws rds delete-custom-db-engine-version \
--engine custom-oracle-ee \
--engine-version 19.my_cev1

For Windows:

aws rds delete-custom-db-engine-version ^
--engine custom-oracle-ee ^
--engine-version 19.my_cev1


Configuring a DB instance for Amazon RDS Custom for Oracle
You can create an RDS Custom DB instance, and then connect to it using Secure Shell (SSH) or AWS
Systems Manager.

Topics
• Multitenant architecture considerations (p. 1035)
• Creating an RDS Custom for Oracle DB instance (p. 1035)
• RDS Custom service-linked role (p. 1040)
• Connecting to your RDS Custom DB instance using Session Manager (p. 1040)
• Connecting to your RDS Custom DB instance using SSH (p. 1041)
• Logging in to your RDS Custom for Oracle database as SYS (p. 1045)
• Installing additional software components on your RDS Custom for Oracle DB instance (p. 1046)

Multitenant architecture considerations


If you create an Amazon RDS Custom for Oracle DB instance with the multitenant architecture (custom-
oracle-ee-cdb engine type), your database is a container database (CDB). If you don't specify the
multitenant architecture, your database is a traditional non-CDB that uses the custom-oracle-ee
engine type. A non-CDB can't contain pluggable databases (PDBs). For more information, see Database
architecture for Amazon RDS Custom for Oracle (p. 997).

When you create an RDS Custom for Oracle CDB instance, consider the following:

• You can create a multitenant database only from an Oracle Database 19c CEV.
• You can create a CDB instance only if the CEV uses the custom-oracle-ee-cdb engine type.
• By default, your CDB is named RDSCDB, which is also the name of the Oracle System ID (Oracle SID).
You can choose a different name.
• Your CDB contains only one initial PDB. The PDB name defaults to ORCL. You can choose a different
name for your initial PDB, but the Oracle SID and the PDB name can’t be the same.
• RDS Custom for Oracle doesn't supply APIs for PDBs. To create additional PDBs, use the Oracle SQL
command CREATE PLUGGABLE DATABASE. RDS Custom for Oracle doesn't restrict the number of
PDBs that you can create. In general, you are responsible for creating and managing PDBs, as in an on-
premises deployment.
• If you create a PDB using Oracle SQL, we recommend that you take a manual snapshot afterward in
case you need to perform point-in-time recovery (PITR).
• You can't rename existing PDBs using Amazon RDS APIs. You also can't rename the CDB using the
modify-db-instance command.
• The open mode for the CDB root is READ WRITE on the primary and MOUNTED on a mounted standby
database. RDS Custom for Oracle attempts to open all PDBs when opening the CDB. If RDS Custom for
Oracle can’t open all PDBs, it issues the event tenant database shutdown.

Creating an RDS Custom for Oracle DB instance


Create an Amazon RDS Custom for Oracle DB instance using either the AWS Management Console or the
AWS CLI. The procedure is similar to the procedure for creating an Amazon RDS DB instance. For more
information, see Creating an Amazon RDS DB instance (p. 300).


If you included installation parameters in your CEV manifest, then your DB instance uses the Oracle
base, Oracle home, and the ID and name of the UNIX/Linux user and group that you specified. The
oratab file, which is created by Oracle Database during installation, points to the real installation
location rather than to a symbolic link. When RDS Custom for Oracle runs commands, it runs as the
configured OS user rather than the default user rdsdb. For more information, see Step 5: Prepare the
CEV manifest (p. 1020).

Before you attempt to create or connect to an RDS Custom DB instance, complete the tasks in Setting up
your environment for Amazon RDS Custom for Oracle (p. 1002).

Console

To create an RDS Custom for Oracle DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose Create database.
4. In Choose a database creation method, select Standard create.
5. In the Engine options section, do the following:

a. For Engine type, choose Oracle.


b. For Database management type, choose Amazon RDS Custom.
c. For Architecture settings, do one of the following:

• Select Multitenant architecture to create a container database (CDB). At creation, your CDB
contains one PDB seed and one initial PDB.
Note
The Multitenant architecture setting is supported only for Oracle Database 19c.
• Clear Multitenant architecture to create a non-CDB. A non-CDB can't contain PDBs.
d. For Edition, choose Oracle Enterprise Edition.
e. For Custom engine version, choose an existing RDS Custom custom engine version (CEV). A
CEV has the following format: major-engine-version.customized_string. An example
identifier is 19.cdb_cev1.

If you chose Multitenant architecture in the previous step, you can specify only a CEV that uses
the custom-oracle-ee-cdb engine type. The console filters out CEVs that were created with
the custom-oracle-ee engine type.
6. In Templates, choose Production.
7. In the Settings section, do the following:

a. For DB instance identifier, enter a unique name for your DB instance.


b. For Master username, enter a username. You can retrieve this value from the console later.

When you connect to a non-CDB, the master user is the user for the non-CDB. When you
connect to a CDB, the master user is the user for the PDB. To connect to the CDB root, log in to
the host, start a SQL client, and create an administrative user with SQL commands.
c. Clear Auto generate a password.
8. Choose a DB instance class.

For supported classes, see DB instance class support for RDS Custom for Oracle (p. 999).
9. In the Storage section, do the following:

a. For Storage type, choose an SSD type: io1, gp2, or gp3. You have the following additional
options:


• For io1 or gp3, choose a rate for Provisioned IOPS. The default is 1000 for io1 and 12000 for
gp3.
• For gp3, choose a rate for Storage throughput. The default is 500 MiBps.
b. For Allocated storage, choose a storage size. The default is 40 GiB.
10. For Connectivity, specify your Virtual private cloud (VPC), DB subnet group, and VPC security
group (firewall).
11. For RDS Custom security, do the following:

a. For IAM instance profile, choose the instance profile for your RDS Custom for Oracle DB
instance.

The IAM instance profile must begin with AWSRDSCustom, for example
AWSRDSCustomInstanceProfileForRdsCustomInstance.
b. For Encryption, choose Enter a key ARN to list the available AWS KMS keys. Then choose your
key from the list.

An AWS KMS key is required for RDS Custom. For more information, see Step 1: Create or reuse
a symmetric encryption AWS KMS key (p. 1003).
12. For Database options, do the following:

a. (Optional) For System ID (SID), enter a value for the Oracle SID, which is also the name of your
CDB. The SID is the name of the Oracle database instance that manages your database files. In
this context, the term "Oracle database instance" refers exclusively to the system global area
(SGA) and Oracle background processes. If you don't specify a SID, the value defaults to RDSCDB.
b. (Optional) For Initial database name, enter a name. The default value is ORCL. In the
multitenant architecture, the initial database name is the PDB name.
Note
The SID and PDB name must be different.
c. For Backup retention period choose a value. You can't choose 0 days.
d. For the remaining sections, specify your preferred RDS Custom DB instance settings. For
information about each setting, see Settings for DB instances (p. 308). The following settings
don't appear in the console and aren't supported:

• Processor features
• Storage autoscaling
• Availability & durability
• Password and Kerberos authentication option in Database authentication (only Password
authentication is supported)
• Database options group in Additional configuration
• Performance Insights
• Log exports
• Enable auto minor version upgrade
• Deletion protection
13. Choose Create database.
Important
When you create an RDS Custom for Oracle DB instance, you might receive the following
error: The service-linked role is in the process of being created. Try again later. If you do,
wait a few minutes and then try again to create the DB instance.

The View credential details button appears on the Databases page.


To view the master user name and password for the RDS Custom DB instance, choose View
credential details.

To connect to the DB instance as the master user, use the user name and password that appear.
Important
You can't view the master user password again in the console. If you don't record it, you
might have to change it. To change the master user password after the RDS Custom DB
instance is available, log in to the database and run an ALTER USER command. You can't
reset the password using the Modify option in the console.
14. Choose Databases to view the list of RDS Custom DB instances.
15. Choose the RDS Custom DB instance that you just created.

On the RDS console, the details for the new RDS Custom DB instance appear:

• The DB instance has a status of creating until the RDS Custom DB instance is created and ready
for use. When the state changes to available, you can connect to the DB instance. Depending on
the instance class and storage allocated, it can take several minutes for the new DB instance to be
available.
• Role has the value Instance (RDS Custom).
• RDS Custom automation mode has the value Full automation. This setting means that the DB
instance provides automatic monitoring and instance recovery.

AWS CLI

You create an RDS Custom DB instance by using the create-db-instance AWS CLI command.

The following options are required:

• --db-instance-identifier
• --db-instance-class (for a list of supported instance classes, see DB instance class support for
RDS Custom for Oracle (p. 999))
• --engine engine-type (where engine-type is custom-oracle-ee-cdb for a CDB and custom-
oracle-ee for a non-CDB)
• --engine-version cev (where cev is the name of the custom engine version that you specified in
Creating a CEV (p. 1026))
• --kms-key-id my-kms-key
• --backup-retention-period days (where days is a value greater than 0)
• --no-auto-minor-version-upgrade
• --custom-iam-instance-profile AWSRDSCustomInstanceRole-region (where region is the AWS Region
where you are creating your DB instance, for example AWSRDSCustomInstanceRole-us-east-1)

The following example creates an RDS Custom DB instance named my-cdb-instance. The database is a
CDB with the nondefault name MYCDB. The nondefault PDB name is MYPDB. The backup retention period
is three days.

Example

For Linux, macOS, or Unix:

aws rds create-db-instance \
--engine custom-oracle-ee-cdb \
--db-instance-identifier my-cdb-instance \
--engine-version 19.cdb_cev1 \
--db-name MYPDB \
--db-system-id MYCDB \
--allocated-storage 250 \
--db-instance-class db.m5.xlarge \
--db-subnet-group mydbsubnetgroup \
--master-username myawsuser \
--master-user-password mypassword \
--backup-retention-period 3 \
--port 8200 \
--license-model bring-your-own-license \
--kms-key-id my-kms-key \
--no-auto-minor-version-upgrade \
--custom-iam-instance-profile AWSRDSCustomInstanceRole-us-east-1

For Windows:

aws rds create-db-instance ^
--engine custom-oracle-ee-cdb ^
--db-instance-identifier my-cdb-instance ^
--engine-version 19.cdb_cev1 ^
--db-name MYPDB ^
--db-system-id MYCDB ^
--allocated-storage 250 ^
--db-instance-class db.m5.xlarge ^
--db-subnet-group mydbsubnetgroup ^
--master-username myawsuser ^
--master-user-password mypassword ^
--backup-retention-period 3 ^
--port 8200 ^
--license-model bring-your-own-license ^
--kms-key-id my-kms-key ^
--no-auto-minor-version-upgrade ^
--custom-iam-instance-profile AWSRDSCustomInstanceRole-us-east-1

Note
Specify a password other than the prompt shown here as a security best practice.

Get details about your instance by using the describe-db-instances command.

Example

aws rds describe-db-instances --db-instance-identifier my-cdb-instance

The following partial output shows the engine, parameter groups, and other information.

{
"DBInstanceIdentifier": "my-cdb-instance",
"DBInstanceClass": "db.m5.xlarge",
"Engine": "custom-oracle-ee-cdb",
"DBInstanceStatus": "available",
"MasterUsername": "admin",
"DBName": "MYPDB",
"DBSystemID": "MYCDB",
"Endpoint": {
"Address": "my-cdb-instance.abcdefghijkl.us-east-1.rds.amazonaws.com",
"Port": 1521,
"HostedZoneId": "A1B2CDEFGH34IJ"
},
"AllocatedStorage": 100,
"InstanceCreateTime": "2023-04-12T18:52:16.353000+00:00",
"PreferredBackupWindow": "08:46-09:16",


"BackupRetentionPeriod": 7,
"DBSecurityGroups": [],
"VpcSecurityGroups": [
{
"VpcSecurityGroupId": "sg-0a1bcd2e",
"Status": "active"
}
],
"DBParameterGroups": [
{
"DBParameterGroupName": "default.custom-oracle-ee-cdb-19",
"ParameterApplyStatus": "in-sync"
}
],
...

RDS Custom service-linked role


A service-linked role gives Amazon RDS Custom access to resources in your AWS account. It makes using
RDS Custom easier because you don't have to manually add the necessary permissions. RDS Custom
defines the permissions of its service-linked roles, and unless defined otherwise, only RDS Custom can
assume its roles. The defined permissions include the trust policy and the permissions policy, and that
permissions policy can't be attached to any other IAM entity.

When you create an RDS Custom DB instance, both the Amazon RDS and RDS Custom service-linked
roles are created (if they don't already exist) and used. For more information, see Using service-linked
roles for Amazon RDS (p. 2684).

The first time that you create an RDS Custom for Oracle DB instance, you might receive the following
error: The service-linked role is in the process of being created. Try again later. If you do, wait a few
minutes and then try again to create the DB instance.
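
If you want to check whether the RDS Custom service-linked role already exists before you create a DB instance, you can query IAM. The following sketch assumes that the role uses the name AWSServiceRoleForRDSCustom; if the command reports that the role doesn't exist, the role is created automatically the first time you create an RDS Custom DB instance.

aws iam get-role \
--role-name AWSServiceRoleForRDSCustom \
--query 'Role.[RoleName,Arn]' \
--output text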

Connecting to your RDS Custom DB instance using Session Manager
After you create your RDS Custom DB instance, you can connect to it using AWS Systems Manager
Session Manager. This is the preferred technique when your DB instance isn't publicly accessible.

Session Manager allows you to access Amazon EC2 instances through a browser-based shell or through
the AWS CLI. For more information, see AWS Systems Manager Session Manager.

Console

To connect to your DB instance using Session Manager

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS Custom DB instance to which
you want to connect.
3. Choose Configuration.
4. Note the Resource ID for your DB instance. For example, the resource ID might be db-
ABCDEFGHIJKLMNOPQRS0123456.
5. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
6. In the navigation pane, choose Instances.
7. Look for the name of your EC2 instance, and then choose the instance ID associated with it. For
example, the instance ID might be i-abcdefghijklm01234.


8. Choose Connect.
9. Choose Session Manager.
10. Choose Connect.

A window opens for your session.

AWS CLI

You can connect to your RDS Custom DB instance using the AWS CLI. This technique requires the Session
Manager plugin for the AWS CLI. To learn how to install the plugin, see Install the Session Manager
plugin for the AWS CLI.

To find the DB resource ID of your RDS Custom DB instance, use aws rds describe-db-instances.

aws rds describe-db-instances \
--query 'DBInstances[*].[DBInstanceIdentifier,DbiResourceId]' \
--output text

The following sample output shows the resource ID for your RDS Custom instance. The prefix is db-.

db-ABCDEFGHIJKLMNOPQRS0123456

To find the EC2 instance ID of your DB instance, use aws ec2 describe-instances. The following
example uses db-ABCDEFGHIJKLMNOPQRS0123456 for the resource ID.

aws ec2 describe-instances \
--filters "Name=tag:Name,Values=db-ABCDEFGHIJKLMNOPQRS0123456" \
--output text \
--query 'Reservations[*].Instances[*].InstanceId'

The following sample output shows the EC2 instance ID.

i-abcdefghijklm01234

Use the aws ssm start-session command, supplying the EC2 instance ID in the --target
parameter.

aws ssm start-session --target "i-abcdefghijklm01234"

A successful connection looks like the following.

Starting session with SessionId: yourid-abcdefghijklm1234


[ssm-user@ip-123-45-67-89 bin]$

Connecting to your RDS Custom DB instance using SSH


The Secure Shell Protocol (SSH) is a network protocol that supports encrypted communication over an
unsecured network. After you create your RDS Custom DB instance, you can connect to it using an ssh
client. For more information, see Connecting to your Linux instance using SSH.

Your SSH connection technique depends on whether your DB instance is private, meaning that it doesn't
accept connections from the public internet. In this case, you must use SSH tunneling to connect the ssh
utility to your instance. This technique transports data with a dedicated data stream (tunnel) inside an
existing SSH session. You can configure SSH tunneling using AWS Systems Manager.
Note
Various strategies are supported for accessing private instances. To learn how to connect an ssh
client to private instances using bastion hosts, see Linux Bastion Hosts on AWS. To learn how to
configure port forwarding, see Port Forwarding Using AWS Systems Manager Session Manager.

If your DB instance is in a public subnet and is publicly accessible, then no SSH tunneling is required.
You can connect with SSH just as you would to a public Amazon EC2 instance.

To connect an ssh client to your DB instance, complete the following steps:

1. Step 1: Configure your DB instance to allow SSH connections (p. 1042)


2. Step 2: Retrieve your SSH secret key and EC2 instance ID (p. 1042)
3. Step 3: Connect to your EC2 instance using the ssh utility (p. 1044)

Step 1: Configure your DB instance to allow SSH connections


To make sure that your DB instance can accept SSH connections, do the following:

• Make sure that your DB instance security group permits inbound connections on port 22 for TCP. For a CLI example, see the sketch after this list.

To learn how to configure the security group for your DB instance, see Controlling access with security
groups (p. 2680).
• If you don't plan to use SSH tunneling, make sure your DB instance resides in a public subnet and is
publicly accessible.

In the console, the relevant field is Publicly accessible on the Connectivity & security tab of the
database details page. To check your settings in the CLI, run the following command:

aws rds describe-db-instances \
--query 'DBInstances[*].{DBInstanceIdentifier:DBInstanceIdentifier,PubliclyAccessible:PubliclyAccessible}' \
--output table

To change the accessibility settings for your DB instance, see Modifying an Amazon RDS DB
instance (p. 401).
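
The following is a minimal sketch of opening port 22 from the CLI. The security group ID and client IP address are placeholders; restrict the CIDR range to the addresses that actually need SSH access.

aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp \
--port 22 \
--cidr 203.0.113.25/32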

Step 2: Retrieve your SSH secret key and EC2 instance ID


To connect to the DB instance using SSH, you need the SSH key pair associated with the instance. RDS
Custom creates the SSH key pair on your behalf, naming it with the prefix do-not-delete-rds-
custom-ssh-privatekey-db-. AWS Secrets Manager stores your SSH private key as a secret.

Retrieve your SSH secret key using either the AWS Management Console or the AWS CLI. If your instance has
a public DNS, and you don't intend to use SSH tunneling, then also retrieve the DNS name. You specify
the DNS name for public connections.

Console

To retrieve the secret SSH key

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS Custom DB instance to which
you want to connect.


3. Choose Configuration.
4. Note the Resource ID value. For example, the DB instance resource ID might be db-
ABCDEFGHIJKLMNOPQRS0123456.
5. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
6. In the navigation pane, choose Instances.
7. Find the name of your EC2 instance, and choose the instance ID associated with it. For example, the
EC2 instance ID might be i-abcdefghijklm01234.
8. In Details, find Key pair name. The pair name includes the DB instance resource ID. For
example, the pair name might be do-not-delete-rds-custom-ssh-privatekey-db-
ABCDEFGHIJKLMNOPQRS0123456-0d726c.
9. If your EC2 instance is public, note the Public IPv4 DNS. For example, the public Domain Name
System (DNS) address might be ec2-12-345-678-901.us-east-2.compute.amazonaws.com.
10. Open the AWS Secrets Manager console at https://console.aws.amazon.com/secretsmanager/.
11. Choose the secret that has the same name as your key pair.
12. Choose Retrieve secret value.
13. Copy the SSH private key into a text file, and then save the file with the .pem extension. For
example, save the file as /tmp/do-not-delete-rds-custom-ssh-privatekey-db-
ABCDEFGHIJKLMNOPQRS0123456-0d726c.pem.

AWS CLI

To retrieve the SSH private key and save it in a .pem file, you can use the AWS CLI.

1. Find the DB resource ID of your RDS Custom DB instance using aws rds describe-db-
instances.

aws rds describe-db-instances \
--query 'DBInstances[*].[DBInstanceIdentifier,DbiResourceId]' \
--output text

The following sample output shows the resource ID for your RDS Custom instance. The prefix is db-.

db-ABCDEFGHIJKLMNOPQRS0123456

2. Find the EC2 instance ID of your DB instance using aws ec2 describe-instances. The following
example uses db-ABCDEFGHIJKLMNOPQRS0123456 for the resource ID.

aws ec2 describe-instances \
--filters "Name=tag:Name,Values=db-ABCDEFGHIJKLMNOPQRS0123456" \
--output text \
--query 'Reservations[*].Instances[*].InstanceId'

The following sample output shows the EC2 instance ID.

i-abcdefghijklm01234

3. To find the key name, specify the EC2 instance ID. The following example describes EC2 instance
i-0bdc4219e66944afa.

aws ec2 describe-instances \
--instance-ids i-0bdc4219e66944afa \
--output text \
--query 'Reservations[*].Instances[*].KeyName'


The following sample output shows the key name, which uses the prefix do-not-delete-rds-
custom-ssh-privatekey-.

do-not-delete-rds-custom-ssh-privatekey-db-ABCDEFGHIJKLMNOPQRS0123456-0d726c

4. Save the private key in a .pem file named after the key using aws secretsmanager. The following
example saves the file in your /tmp directory.

aws secretsmanager get-secret-value \
--secret-id do-not-delete-rds-custom-ssh-privatekey-db-ABCDEFGHIJKLMNOPQRS0123456-0d726c \
--query SecretString \
--output text >/tmp/do-not-delete-rds-custom-ssh-privatekey-db-ABCDEFGHIJKLMNOPQRS0123456-0d726c.pem

Step 3: Connect to your EC2 instance using the ssh utility


Your connection technique depends on whether you are connecting to a private DB instance or
connecting to a public instance. A private connection requires you to configure SSH tunneling through
AWS Systems Manager.

To connect to an EC2 instance using the ssh utility

1. For private connections, modify your SSH configuration file to proxy commands to AWS Systems
Manager Session Manager. For public connections, skip to Step 2.

Add the following lines to ~/.ssh/config. These lines proxy SSH commands for hosts whose
names begin with i- or mi-.

Host i-* mi-*
ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"

2. Change to the directory that contains your .pem file. Using chmod, set the permissions to 400.

cd /tmp
chmod 400 do-not-delete-rds-custom-ssh-privatekey-db-ABCDEFGHIJKLMNOPQRS0123456-0d726c.pem

3. Run the ssh utility, specifying the .pem file and either the public DNS name (for public connections)
or the EC2 instance ID (for private connections). Log in as user ec2-user.

The following example connects to a public instance using the DNS name
ec2-12-345-678-901.us-east-2.compute.amazonaws.com.

ssh -i \
"do-not-delete-rds-custom-ssh-privatekey-db-ABCDEFGHIJKLMNOPQRS0123456-0d726c.pem" \
[email protected]

The following example connects to a private instance using the EC2 instance ID
i-0bdc4219e66944afa.

ssh -i \
"do-not-delete-rds-custom-ssh-privatekey-db-ABCDEFGHIJKLMNOPQRS0123456-0d726c.pem" \
ec2-user@i-0bdc4219e66944afa

Logging in to your RDS Custom for Oracle database as SYS


After you create your RDS Custom DB instance, you can log in to your Oracle database as user SYS, which
gives you SYSDBA privileges. You have the following login options:

• Get the SYS password from Secrets Manager, and specify this password in your SQL client.
• Use OS authentication to log in to your database. In this case, you don't need a password.

Finding the SYS password for your RDS Custom for Oracle database
You can log in to your Oracle database as SYS or SYSTEM or by specifying the master user name in an
API call. The password for SYS and SYSTEM is stored in Secrets Manager. The secret uses the naming
format do-not-delete-rds-custom-resource_id-uuid. You can find the password using the AWS
Management Console.

Console

To find the SYS password for your database in Secrets Manager

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the RDS console, complete the following steps:

a. In the navigation pane, choose Databases.


b. Choose the name of your RDS Custom for Oracle DB instance.
c. Choose Configuration.
d. Copy the value underneath Resource ID. For example, your resource ID might be db-
ABC12CDE3FGH4I5JKLMNO6PQR7.
3. Open the Secrets Manager console at https://console.aws.amazon.com/secretsmanager/.
4. In the Secrets Manager console, complete the following steps:

a. In the left navigation pane, choose Secrets.


b. Filter the secrets by the resource ID that you copied in step 2.
c. Choose the secret named do-not-delete-rds-custom-resource_id-uuid, where
resource_id is the resource ID that you copied in step 2. For example, if your resource ID is
db-ABC12CDE3FGH4I5JKLMNO6PQR7, your secret will be named do-not-delete-rds-custom-
db-ABC12CDE3FGH4I5JKLMNO6PQR7.
d. In Secret value, choose Retrieve secret value.
e. In Key/value, copy the value for password.
5. Install SQL*Plus on your DB instance and log in to your database as SYS. For more information, see
Step 3: Connect your SQL client to an Oracle DB instance (p. 231).
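
If you prefer the AWS CLI, the following sketch finds and reads the same secret. The resource ID and the uuid suffix shown here are placeholders; the returned SecretString is a key/value document, and the password is stored under the password key.

aws secretsmanager list-secrets \
--filters Key=name,Values=do-not-delete-rds-custom-db-ABC12CDE3FGH4I5JKLMNO6PQR7 \
--query 'SecretList[*].Name' \
--output text

aws secretsmanager get-secret-value \
--secret-id do-not-delete-rds-custom-db-ABC12CDE3FGH4I5JKLMNO6PQR7-0d726c \
--query SecretString \
--output text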

Logging in to your RDS Custom for Oracle database using OS authentication


The OS user rdsdb owns the Oracle database binaries. You can switch to the rdsdb user and log in to
your RDS Custom for Oracle database without a password.

1. Connect to your DB instance with AWS Systems Manager. For more information, see Connecting to
your RDS Custom DB instance using Session Manager (p. 1040).


2. In a web browser, go to https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html.
3. For the latest database version that appears on the web page, copy the .rpm links (not the .zip links)
for the Instant Client Basic Package and SQL*Plus Package. For example, the following links are for
Oracle Database version 21.9:

• https://download.oracle.com/otn_software/linux/instantclient/219000/oracle-instantclient-basic-21.9.0.0.0-1.el8.x86_64.rpm
• https://download.oracle.com/otn_software/linux/instantclient/219000/oracle-instantclient-sqlplus-21.9.0.0.0-1.el8.x86_64.rpm
4. In your SSH session, run the wget command to download the .rpm files from the links that you
obtained in the previous step. The following example downloads the .rpm files for Oracle Database
version 21.9:

wget https://download.oracle.com/otn_software/linux/instantclient/219000/oracle-instantclient-basic-21.9.0.0.0-1.el8.x86_64.rpm
wget https://download.oracle.com/otn_software/linux/instantclient/219000/oracle-instantclient-sqlplus-21.9.0.0.0-1.el8.x86_64.rpm

5. Install the packages by running the yum command as follows:

sudo yum install oracle-instantclient-*.rpm

6. Switch to the rdsdb user.

sudo su - rdsdb

7. Log in to your database using OS authentication.

$ sqlplus / as sysdba

SQL*Plus: Release 21.0.0.0.0 - Production on Wed Apr 12 20:11:08 2023


Version 21.9.0.0.0

Copyright (c) 1982, 2020, Oracle. All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.10.0.0.0

Installing additional software components on your RDS Custom for Oracle DB instance
In a newly created DB instance, your database environment includes Oracle binaries, a database, and a
database listener. You might want to install additional software on the host operating system of the DB
instance. For example, you might want to install Oracle Application Express (APEX), the Oracle Enterprise
Manager (OEM) agent, or the Guardium S-TAP agent. For guidelines and high-level instructions, see the
detailed AWS blog post Install additional software components on Amazon RDS Custom for Oracle.


Managing an Amazon RDS Custom for Oracle DB instance
Amazon RDS Custom supports a subset of the usual management tasks for Amazon RDS DB instances.
Following, you can find instructions for the supported RDS Custom for Oracle management tasks using
the AWS Management Console and the AWS CLI.

Topics
• Working with container databases (CDBs) in RDS Custom for Oracle (p. 1047)
• Working with high availability features for RDS Custom for Oracle (p. 1048)
• Customizing your RDS Custom environment (p. 1048)
• Modifying your RDS Custom for Oracle DB instance (p. 1052)
• Changing the time zone of an RDS Custom for Oracle DB instance (p. 1055)
• Changing the character set of an RDS Custom for Oracle DB instance (p. 1056)
• Setting the NLS_LANG value in RDS Custom for Oracle (p. 1057)
• Support for Transparent Data Encryption (p. 1057)
• Tagging RDS Custom for Oracle resources (p. 1057)
• Deleting an RDS Custom for Oracle DB instance (p. 1058)

Working with container databases (CDBs) in RDS Custom for Oracle
You can either create your RDS Custom for Oracle DB instance with the Oracle Multitenant architecture
(custom-oracle-ee-cdb engine type) or with the traditional non-CDB architecture (custom-oracle-
ee engine type). When you create a container database (CDB), it contains one pluggable database (PDB)
and one PDB seed. You can create additional PDBs manually using Oracle SQL.

PDB and CDB names


When you create an RDS Custom for Oracle CDB instance, you specify a name for the initial PDB. By
default, your initial PDB is named ORCL. You can choose a different name.

By default, your CDB is named RDSCDB. You can choose a different name. The CDB name is also the
name of your Oracle system identifier (SID), which uniquely identifies the memory and processes that
manage your CDB. For more information about the Oracle SID, see Oracle System Identifier (SID) in
Oracle Database Concepts.

You can't rename existing PDBs using Amazon RDS APIs. You also can't rename the CDB using the
modify-db-instance command.

PDB management
In the RDS Custom for Oracle shared responsibility model, you are responsible for managing PDBs and
creating any additional PDBs. RDS Custom doesn't restrict the number of PDBs. You can manually create,
modify, and delete PDBs by connecting to the CDB root and running a SQL statement. Create PDBs on an
Amazon EBS data volume to prevent the DB instance from going outside the support perimeter.

To modify your CDBs or PDBs, complete the following steps:

1. Pause automation to prevent interference with RDS Custom actions.


2. Modify your CDB or PDBs.
3. Back up any modified PDBs.


4. Resume RDS Custom automation.
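
The following is a minimal end-to-end sketch of these steps for a CDB instance named my-cdb-instance. The PDB name, admin password, and data file paths are illustrative only; adjust them to your own layout, and keep the new data files on the Amazon EBS data volume.

# 1. Pause RDS Custom automation (60 minutes in this sketch).
aws rds modify-db-instance \
--db-instance-identifier my-cdb-instance \
--automation-mode all-paused \
--resume-full-automation-mode-minutes 60

# 2. On the host, as an OS user with SYSDBA access, create and open a PDB from the CDB root.
#    The FILE_NAME_CONVERT paths below are placeholders for your own data file locations.
sqlplus / as sysdba <<'EOF'
CREATE PLUGGABLE DATABASE MYPDB2
  ADMIN USER pdb_admin IDENTIFIED BY "choose_a_strong_password"
  FILE_NAME_CONVERT = ('/rdsdbdata/db/RDSCDB_A/datafile/pdbseed/',
                       '/rdsdbdata/db/RDSCDB_A/datafile/MYPDB2/');
ALTER PLUGGABLE DATABASE MYPDB2 OPEN;
EOF

# 3. Back up the change with a manual DB snapshot.
aws rds create-db-snapshot \
--db-instance-identifier my-cdb-instance \
--db-snapshot-identifier my-cdb-instance-after-mypdb2

# 4. Resume full automation.
aws rds modify-db-instance \
--db-instance-identifier my-cdb-instance \
--automation-mode full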

Automatic recovery of the CDB root


RDS Custom keeps the CDB root open in the same way as it keeps a non-CDB open. If the state of the
CDB root changes, the monitoring and recovery automation attempts to recover the CDB root to the
desired state. You receive RDS event notifications when the root CDB is shut down (RDS-EVENT-0004)
or restarted (RDS-EVENT-0006), similar to the non-CDB architecture. RDS Custom attempts to open all
PDBs in READ WRITE mode at DB instance startup. If some PDBs can't be opened, RDS Custom publishes
the following event: tenant database shutdown.
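
To see these events for a specific DB instance, you can query recent events from the CLI. A minimal sketch, assuming an instance named my-cdb-instance and a 24-hour window:

aws rds describe-events \
--source-identifier my-cdb-instance \
--source-type db-instance \
--duration 1440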

Working with high availability features for RDS Custom for Oracle
To support replication between RDS Custom for Oracle instances, you can configure high availability
(HA) with Oracle Data Guard. The primary DB instance automatically synchronizes data to the standby
instances.

You can configure your high availability environment in the following ways:

• Configure standby instances in different Availability Zones (AZs) to be resilient to AZ failures.


• Place your standby databases in mounted or read-only mode.
• Fail over or switch over from the primary database to a standby database with no data loss.
• Migrate data by configuring high availability for your on-premises instance, and then failing over or
switching over to the RDS Custom standby database.

To learn how to configure high availability, see the whitepaper Build high availability for Amazon RDS
Custom for Oracle using read replicas. You can perform the following tasks:

• Use a virtual private network (VPN) tunnel to encrypt data in transit for your high availability
instances. Encryption in transit isn't configured automatically by RDS Custom.
• Configure Oracle Fast-Failover Observer (FSFO) to monitor your high availability instances.
• Allow the observer to perform automatic failover when necessary conditions are met.

Customizing your RDS Custom environment


RDS Custom for Oracle includes built-in features that allow you to customize your DB instance
environment without pausing automation. For example, you can use RDS APIs to customize your
environment as follows:

• Create and restore DB snapshots to create a clone environment.


• Create read replicas.
• Modify storage settings.
• Change the CEV to apply release updates.
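
For example, scaling up storage is an API-driven change that doesn't require pausing automation. A minimal sketch, assuming an instance named my-custom-instance and a new size of 500 GiB:

aws rds modify-db-instance \
--db-instance-identifier my-custom-instance \
--allocated-storage 500 \
--apply-immediately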

For some customizations, such as changing the time zone or character set, you can't use the RDS APIs. In
these cases, you need to change the environment manually by accessing your Amazon EC2 instance as
the root user or logging in to your Oracle database as SYSDBA.

To customize your instance manually, you must pause and resume RDS Custom automation. This pause
ensures that your customizations don't interfere with RDS Custom automation. In this way, you avoid
breaking the support perimeter, which places the instance in the unsupported-configuration state
until you fix the underlying issues. Pausing and resuming are the only supported automation tasks when
you modify an RDS Custom for Oracle DB instance.

General steps for customizing your RDS Custom environment


To customize your RDS Custom DB instance, complete the following steps:

1. Pause RDS Custom automation for a specified period using the console or CLI.
2. Identify your underlying Amazon EC2 instance.
3. Connect to your underlying Amazon EC2 instance using SSH keys or AWS Systems Manager.
4. Verify your current configuration settings at the database or operating system layer.

You can validate your changes by comparing the initial configuration to the changed configuration.
Depending on the type of customization, use OS tools or database queries.
5. Customize your RDS Custom for Oracle DB instance as needed.
6. Reboot your instance or database, if required.
Note
In an on-premises Oracle CDB, you can preserve a specified open mode for PDBs using a
built-in command or after a startup trigger. This mechanism brings PDBs to a specified state
when the CDB restarts. When opening your CDB, RDS Custom automation discards any user-
specified preserved states and attempts to open all PDBs. If RDS Custom can't open all PDBs,
the following event is issued: The following PDBs failed to open: list-of-PDBs.
7. Verify your new configuration settings by comparing them with the previous settings.
8. Resume RDS Custom automation in either of the following ways:
• Resume automation manually.
• Wait for the pause period to end. In this case, RDS Custom resumes monitoring and instance
recovery automatically.
9. Verify the RDS Custom automation framework

If you followed the preceding steps correctly, RDS Custom starts an automated backup. The status of
the instance in the console shows Available.

For best practices and step-by-step instructions, see the AWS blog posts Make configuration changes
to an Amazon RDS Custom for Oracle instance: Part 1 and Recreate an Amazon RDS Custom for Oracle
database: Part 2.

Pausing and resuming your RDS Custom DB instance


You can pause and resume automation for your DB instance using the console or CLI.

Console

To pause or resume RDS Custom automation

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS Custom DB instance that you
want to modify.
3. Choose Modify. The Modify DB instance page appears.
4. For RDS Custom automation mode, choose one of the following options:

• Paused pauses the monitoring and instance recovery for the RDS Custom DB instance. Enter the
pause duration that you want (in minutes) for Automation mode duration. The minimum value is
60 minutes (default). The maximum value is 1,440 minutes.


• Full automation resumes automation.


5. Choose Continue to check the summary of modifications.

A message indicates that RDS Custom will apply the changes immediately.
6. If your changes are correct, choose Modify DB instance. Or choose Back to edit your changes or
Cancel to cancel your changes.

On the RDS console, the details for the modification appear. If you paused automation, the Status of
your RDS Custom DB instance indicates Automation paused.
7. (Optional) In the navigation pane, choose Databases, and then your RDS Custom DB instance.

In the Summary pane, RDS Custom automation mode indicates the automation status. If
automation is paused, the value is Paused. Automation resumes in num minutes.

AWS CLI
To pause or resume RDS Custom automation, use the modify-db-instance AWS CLI command.
Identify the DB instance using the required parameter --db-instance-identifier. Control the
automation mode with the following parameters:

• --automation-mode specifies the pause state of the DB instance. Valid values are all-paused,
which pauses automation, and full, which resumes it.
• --resume-full-automation-mode-minutes specifies the duration of the pause. The default value
is 60 minutes.

Note
Regardless of whether you specify --no-apply-immediately or --apply-immediately,
RDS Custom applies modifications asynchronously as soon as possible.

In the command response, ResumeFullAutomationModeTime indicates the resume time as a UTC
timestamp. When the automation mode is all-paused, you can use modify-db-instance to resume
automation mode or extend the pause period. No other modify-db-instance options are supported.

The following example pauses automation for my-custom-instance for 90 minutes.

Example
For Linux, macOS, or Unix:

aws rds modify-db-instance \
--db-instance-identifier my-custom-instance \
--automation-mode all-paused \
--resume-full-automation-mode-minutes 90

For Windows:

aws rds modify-db-instance ^
--db-instance-identifier my-custom-instance ^
--automation-mode all-paused ^
--resume-full-automation-mode-minutes 90

The following example extends the pause duration for an extra 30 minutes. The 30 minutes is added to
the original time shown in ResumeFullAutomationModeTime.

Example
For Linux, macOS, or Unix:


aws rds modify-db-instance \
--db-instance-identifier my-custom-instance \
--automation-mode all-paused \
--resume-full-automation-mode-minutes 30

For Windows:

aws rds modify-db-instance ^
--db-instance-identifier my-custom-instance ^
--automation-mode all-paused ^
--resume-full-automation-mode-minutes 30

The following example resumes full automation for my-custom-instance.

Example

For Linux, macOS, or Unix:

aws rds modify-db-instance \
--db-instance-identifier my-custom-instance \
--automation-mode full

For Windows:

aws rds modify-db-instance ^
--db-instance-identifier my-custom-instance ^
--automation-mode full

In the following partial sample output, the pending AutomationMode value is full.

{
"DBInstance": {
"PubliclyAccessible": true,
"MasterUsername": "admin",
"MonitoringInterval": 0,
"LicenseModel": "bring-your-own-license",
"VpcSecurityGroups": [
{
"Status": "active",
"VpcSecurityGroupId": "0123456789abcdefg"
}
],
"InstanceCreateTime": "2020-11-07T19:50:06.193Z",
"CopyTagsToSnapshot": false,
"OptionGroupMemberships": [
{
"Status": "in-sync",
"OptionGroupName": "default:custom-oracle-ee-19"
}
],
"PendingModifiedValues": {
"AutomationMode": "full"
},
"Engine": "custom-oracle-ee",
"MultiAZ": false,
"DBSecurityGroups": [],
"DBParameterGroups": [
{
"DBParameterGroupName": "default.custom-oracle-ee-19",
"ParameterApplyStatus": "in-sync"


}
],
...
"ReadReplicaDBInstanceIdentifiers": [],
"AllocatedStorage": 250,
"DBInstanceArn": "arn:aws:rds:us-west-2:012345678912:db:my-custom-instance",
"BackupRetentionPeriod": 3,
"DBName": "ORCL",
"PreferredMaintenanceWindow": "fri:10:56-fri:11:26",
"Endpoint": {
"HostedZoneId": "ABCDEFGHIJKLMNO",
"Port": 8200,
"Address": "my-custom-instance.abcdefghijk.us-west-2.rds.amazonaws.com"
},
"DBInstanceStatus": "automation-paused",
"IAMDatabaseAuthenticationEnabled": false,
"AutomationMode": "all-paused",
"EngineVersion": "19.my_cev1",
"DeletionProtection": false,
"AvailabilityZone": "us-west-2a",
"DomainMemberships": [],
"StorageType": "gp2",
"DbiResourceId": "db-ABCDEFGHIJKLMNOPQRSTUVW",
"ResumeFullAutomationModeTime": "2020-11-07T20:56:50.565Z",
"KmsKeyId": "arn:aws:kms:us-west-2:012345678912:key/
aa111a11-111a-11a1-1a11-1111a11a1a1a",
"StorageEncrypted": false,
"AssociatedRoles": [],
"DBInstanceClass": "db.m5.xlarge",
"DbInstancePort": 0,
"DBInstanceIdentifier": "my-custom-instance",
"TagList": []
}

Modifying your RDS Custom for Oracle DB instance


Modifying an RDS Custom for Oracle DB instance is similar to modifying an Amazon RDS instance, but
you can only do the following:

• Change the DB instance class.


• Increase the allocated storage for your DB instance.
• Change the storage type to io1, gp2, or gp3.
• Change the Provisioned IOPS, if you're using the io1 or gp3 storage types.

Topics
• Requirements and limitations when modifying your DB instance storage (p. 1052)
• Requirements and limitations when modifying your DB instance class (p. 1053)
• How RDS Custom creates your DB instance when you modify the instance class (p. 1053)
• Modifying the instance class or storage for your RDS Custom for Oracle DB instance (p. 1054)

Requirements and limitations when modifying your DB instance storage


Consider the following requirements and limitations when you modify the storage for an RDS Custom for
Oracle DB instance:

• The minimum allocated storage for RDS Custom for Oracle is 40 GiB, and the maximum is 64 TiB.
• As with Amazon RDS, you can't decrease the allocated storage. This is a limitation of Amazon EBS
volumes.


• Storage autoscaling isn't supported for RDS Custom DB instances.


• Any storage volumes that you attach manually to your RDS Custom DB instance are outside the
support perimeter.

For more information, see RDS Custom support perimeter (p. 985).
• Magnetic (standard) Amazon EBS storage isn't supported for RDS Custom. You can choose only the io1,
gp2, or gp3 SSD storage types.

For more information about Amazon EBS storage, see Amazon RDS DB instance storage (p. 101).
For general information about storage modification, see Working with storage for Amazon RDS DB
instances (p. 478).

Requirements and limitations when modifying your DB instance class


Consider the following requirements and limitations when you modify the instance class for an RDS
Custom for Oracle DB instance:

• Your DB instance must be in the available state.


• Your DB instance must have a minimum of 100 MiB of free space on the root volume, data volume,
and binary volume.
• You can assign only a single elastic IP (EIP) to your RDS Custom for Oracle DB instance when using
the default elastic network interface (ENI). If you attach multiple ENIs to your DB instance, the modify
operation fails.
• All RDS Custom for Oracle tags must be present.
• If you use RDS Custom for Oracle replication, note the following requirements and limitations:
• For primary DB instances and read replicas, you can change the instance class for only one DB
instance at a time.
• If your RDS Custom for Oracle DB instance has an on-premises primary or replica database, make
sure to manually update private IP addresses on the on-premises DB instance after the modification
completes. This action is necessary to preserve Oracle Data Guard functionality. RDS Custom for
Oracle publishes an event when the modification succeeds.
• You can't modify your RDS Custom for Oracle DB instance class when the primary or read replica DB
instances have FSFO (Fast-Start Failover) configured.

How RDS Custom creates your DB instance when you modify the instance class
When you modify your instance class, RDS Custom creates your DB instance as follows:

• Creates the Amazon EC2 instance.


• Creates the root volume from the latest DB snapshot. RDS Custom for Oracle doesn't retain
information added to the root volume after the latest DB snapshot.
• Creates Amazon CloudWatch alarms.
• Creates an Amazon EC2 SSH key pair if you have deleted the original key pair. Otherwise, RDS Custom
for Oracle retains the original key pair.
• Creates new resources using the tags that are attached to your DB instance when you initiate the
modification. RDS Custom doesn't transfer tags to the new resources when they are attached directly
to underlying resources.
• Transfers the binary and data volumes with the most recent modifications to the new DB instance.
• Transfers the elastic IP address (EIP). If the DB instance is publicly accessible, then RDS Custom
temporarily attaches a public IP address to the new DB instance before transferring the EIP. If the DB
instance isn't publicly accessible, RDS Custom doesn't create public IP addresses.


Modifying the instance class or storage for your RDS Custom for Oracle DB
instance
You can modify the DB instance class or storage using the console, AWS CLI, or RDS API.

Console

To modify an RDS Custom for Oracle DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to modify.
4. Choose Modify.
5. Make the following changes as needed:

a. Change the value for DB instance class. For supported classes, see DB instance class support for
RDS Custom for Oracle (p. 999).
b. Enter a new value for Allocated storage. It must be greater than the current value, and from 40
GiB–64 TiB.
c. Change the value for Storage type to General Purpose SSD (gp2), General Purpose SSD (gp3),
or Provisioned IOPS (io1).
d. If you use Provisioned IOPS (io1) or General Purpose SSD (gp3), you can change the
Provisioned IOPS value.
6. Choose Continue.
7. Choose Apply immediately or Apply during the next scheduled maintenance window.
8. Choose Modify DB instance.

AWS CLI
To modify the instance class or storage for an RDS Custom for Oracle DB instance, use the modify-db-instance AWS CLI
command. Set the following parameters as needed:

• --db-instance-class – A new instance class. For supported classes, see DB instance class support
for RDS Custom for Oracle (p. 999).
• --allocated-storage – Amount of storage to be allocated for the DB instance, in gibibytes. It must
be greater than the current value, and from 40–65,536 GiB.
• --storage-type – The storage type: gp2, gp3, or io1.
• --iops – Provisioned IOPS for the DB instance, if using the io1 or gp3 storage types.
• --apply-immediately – Use --apply-immediately to apply the storage changes immediately.

Or use --no-apply-immediately (the default) to apply the changes during the next maintenance
window.

The following example changes the DB instance class of my-custom-instance to db.m5.16xlarge. The
command also changes the storage size to 1 TiB, storage type to io1, and Provisioned IOPS to 3000.

Example
For Linux, macOS, or Unix:

aws rds modify-db-instance \
--db-instance-identifier my-custom-instance \
--db-instance-class db.m5.16xlarge \
--storage-type io1 \
--iops 3000 \
--allocated-storage 1024 \
--apply-immediately

For Windows:

aws rds modify-db-instance ^
--db-instance-identifier my-custom-instance ^
--db-instance-class db.m5.16xlarge ^
--storage-type io1 ^
--iops 3000 ^
--allocated-storage 1024 ^
--apply-immediately

Changing the time zone of an RDS Custom for Oracle DB instance
You change the time zone of an RDS Custom for Oracle DB instance manually. This approach contrasts
with RDS for Oracle, where you use the TIME_ZONE option in a custom DB option group.

You can change time zones for RDS Custom for Oracle DB instances multiple times. However, we
recommend not changing them more than once every 48 hours. We also recommend changing them
only when the latest restorable time is within the last 30 minutes.

If you don't follow these recommendations, cleaning up redo logs might remove more logs than
intended. Redo log timestamps might also be converted incorrectly to UTC, which can prevent the redo
log files from being downloaded and replayed correctly. This in turn can prevent point-in-time recovery
(PITR) from performing correctly.

Changing the time zone of an RDS Custom for Oracle DB instance has the following limitations:

• PITR is supported for recovery times before RDS Custom automation is paused, and after automation
is resumed.

For more information about PITR, see Restoring an RDS Custom for Oracle instance to a point in
time (p. 1067).
• Changing the time zone of an existing read replica causes downtime. We recommend changing the
time zone of the DB instance before creating read replicas.

You can create a read replica from a DB instance with a modified time zone. For more information
about read replicas, see Working with Oracle replicas for RDS Custom for Oracle (p. 1060).

Use the following procedures to change the time zone of an RDS Custom for Oracle DB instance.

Make sure to follow these procedures. If you don't follow them, the following issues can result:

• Disruption of redo log download and replay


• Incorrect redo log timestamps
• Cleanup of redo logs on the host that doesn't use the redo log retention period

To change the time zone for a primary DB instance

1. Pause RDS Custom automation. For more information, see Pausing and resuming your RDS Custom
DB instance (p. 1049).
2. (Optional) Change the time zone of the DB instance, for example by using the following command.


ALTER DATABASE SET TIME_ZONE = 'US/Pacific';

3. Shut down the DB instance.


4. Log in to the host and change the system time zone (for an example, see the sketch after this procedure).
5. Start the DB instance.
6. Resume RDS Custom automation.
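
For step 4, the host-level change might look like the following sketch, which assumes that the host operating system uses systemd and that you connect as a user with sudo access. The time zone name is an example only.

timedatectl                                   # show the current system time zone
sudo timedatectl set-timezone America/Los_Angeles
timedatectl                                   # confirm the new setting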

To change the time zone for a primary DB instance and its read replicas

1. Pause RDS Custom automation on the primary DB instance.


2. Pause RDS Custom automation on the read replicas.
3. (Optional) Change the time zone of the primary and read replicas, for example by using the
following command.

ALTER DATABASE SET TIME_ZONE = 'US/Pacific';

4. Shut down the primary DB instance.


5. Shut down the read replicas.
6. Log in to the host and change the system time zone for the read replicas.
7. Change the system time zone for the primary DB instance.
8. Mount the read replicas.
9. Start the primary DB instance.
10. Resume RDS Custom automation on the primary DB instance and then on the read replicas.

Changing the character set of an RDS Custom for Oracle DB instance
RDS Custom for Oracle defaults to the character set US7ASCII. You might want to specify different
character sets to meet language or multibyte character requirements. When you use RDS Custom for
Oracle, you can pause automation and then change the character set of your database manually.

Changing the character set of an RDS Custom for Oracle DB instance has the following requirements:

• You can change the character set only on a newly provisioned RDS Custom instance that has an empty or
starter database with no application data. For all other scenarios, change the character set using the
Database Migration Assistant for Unicode (DMU).
• You can only change to a character set supported by RDS for Oracle. For more information, see
Supported DB character sets (p. 1801).

To change the character set of an RDS Custom for Oracle DB instance

1. Pause RDS Custom automation. For more information, see Pausing and resuming your RDS Custom
DB instance (p. 1049).
2. Log in to your database as a user with SYSDBA privileges.
3. Restart the database in restricted mode, change the character set, and then restart the database in
normal mode.

Run the following script in your SQL client:

SHUTDOWN IMMEDIATE;
STARTUP RESTRICT;
ALTER DATABASE CHARACTER SET INTERNAL_CONVERT AL32UTF8;
SHUTDOWN IMMEDIATE;
STARTUP;
SELECT VALUE FROM NLS_DATABASE_PARAMETERS WHERE PARAMETER = 'NLS_CHARACTERSET';

Verify that the output shows the correct character set:

VALUE
--------
AL32UTF8

4. Resume RDS Custom automation. For more information, see Pausing and resuming your RDS
Custom DB instance (p. 1049).

Setting the NLS_LANG value in RDS Custom for Oracle


A locale is a set of information addressing linguistic and cultural requirements that corresponds to
a given language and country. To specify locale behavior for Oracle software, set the NLS_LANG
environment variable on your client host. This variable sets the language, territory, and character set
used by the client application in a database session.

For RDS Custom for Oracle, you can set only the language in the NLS_LANG variable: the territory and
character set use defaults. The language is used for Oracle database messages, collation, day names, and
month names. Each supported language has a unique name, for example, American, French, or German.
If language is not specified, the value defaults to American.

After you create your RDS Custom for Oracle database, you can set NLS_LANG on your client host to a
language other than English. To see a list of languages supported by Oracle Database, log in to your RDS
Custom for Oracle database and run the following query:

SELECT VALUE FROM V$NLS_VALID_VALUES WHERE PARAMETER='LANGUAGE' ORDER BY VALUE;

You can set NLS_LANG on the host command line. The following example sets the language to German
for your client application using the Z shell on Linux.

export NLS_LANG=German

Your application reads the NLS_LANG value when it starts and then communicates it to the database
when it connects.

For more information, see Choosing a Locale with the NLS_LANG Environment Variable in the Oracle
Database Globalization Support Guide.

Support for Transparent Data Encryption


RDS Custom supports Transparent Data Encryption (TDE) for RDS Custom for Oracle DB instances.

However, you can't enable TDE using an option in a custom option group as you can in RDS for Oracle.
You turn on TDE manually. For information about using Oracle Transparent Data Encryption, see
Securing stored data using Transparent Data Encryption in the Oracle documentation.
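
The following SQL is a minimal sketch only, not the documented RDS Custom procedure. It assumes that
you have paused RDS Custom automation and already configured a keystore location (for example, by
setting WALLET_ROOT) as described in the Oracle documentation; the keystore path, password, and
tablespace name are placeholders. Always confirm the exact keystore requirements for your database
release in the Oracle documentation before applying a change like this.

-- Create and open a software keystore, then set the master encryption key.
-- The path and password are placeholders.
ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/rdsdbdata/tde_wallet' IDENTIFIED BY "keystore_password";
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "keystore_password";
ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "keystore_password" WITH BACKUP;

-- Create an encrypted tablespace for application data.
CREATE TABLESPACE tde_data DATAFILE SIZE 100M
  ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT);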

Tagging RDS Custom for Oracle resources


You can tag RDS Custom resources just as you tag Amazon RDS resources, but note the following important differences:


• Don't create or modify the AWSRDSCustom tag that's required for RDS Custom automation. If you do,
you might break the automation.
• Tags added to RDS Custom DB instances during creation are propagated to all other related RDS
Custom resources.
• Tags aren't propagated when you add them to RDS Custom resources after DB instance creation.

For general information about resource tagging, see Tagging Amazon RDS resources (p. 461).
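
As an illustration, the following AWS CLI command adds a tag to an existing RDS Custom DB instance; the
Region, account ID, instance name, and tag values are placeholders. Keep in mind that, as noted above,
tags added after DB instance creation aren't propagated to related RDS Custom resources.

aws rds add-tags-to-resource \
    --resource-name arn:aws:rds:us-west-2:123456789012:db:my-custom-instance \
    --tags Key=environment,Value=development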

Deleting an RDS Custom for Oracle DB instance


To delete an RDS Custom DB instance, do the following:

• Provide the name of the DB instance.


• Clear the option to take a final DB snapshot of the DB instance.
• Choose or clear the option to retain automated backups.

You can delete an RDS Custom DB instance using the console or the CLI. The time required to delete the
DB instance can vary depending on the backup retention period (that is, how many backups to delete)
and how much data is deleted.

Console

To delete an RDS Custom DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS Custom DB instance that you
want to delete. RDS Custom DB instances show the role Instance (RDS Custom).
3. For Actions, choose Delete.
4. To retain automated backups, choose Retain automated backups.
5. Enter delete me in the box.
6. Choose Delete.

AWS CLI

You delete an RDS Custom DB instance by using the delete-db-instance AWS CLI command. Identify the
DB instance using the required parameter --db-instance-identifier. The remaining parameters
are the same as for an Amazon RDS DB instance, with the following exceptions:

• --skip-final-snapshot is required.
• --no-skip-final-snapshot isn't supported.
• --final-db-snapshot-identifier isn't supported.

The following example deletes the RDS Custom DB instance named my-custom-instance, and retains
automated backups.

Example

For Linux, macOS, or Unix:

aws rds delete-db-instance \
    --db-instance-identifier my-custom-instance \
    --skip-final-snapshot \
    --no-delete-automated-backups

For Windows:

aws rds delete-db-instance ^
    --db-instance-identifier my-custom-instance ^
    --skip-final-snapshot ^
    --no-delete-automated-backups


Working with Oracle replicas for RDS Custom for Oracle
You can create Oracle replicas for RDS Custom for Oracle DB instances. Both container databases (CDBs)
and non-CDBs are supported. Creating an RDS Custom for Oracle replica is similar to creating an RDS
for Oracle replica, but with important differences. For general information about creating and managing
Oracle replicas, see Working with DB instance read replicas (p. 438) and Working with read replicas for
Amazon RDS for Oracle (p. 1973).

Topics
• Overview of RDS Custom for Oracle replication (p. 1060)
• Guidelines and limitations for RDS Custom for Oracle replication (p. 1061)
• Promoting an RDS Custom for Oracle replica to a standalone DB instance (p. 1063)

Overview of RDS Custom for Oracle replication


The architecture of RDS Custom for Oracle replication is analogous to RDS for Oracle replication. A
primary DB instance replicates asynchronously to one or more Oracle replicas.

Maximum number of replicas


As with RDS for Oracle, you can create up to five managed Oracle replicas of your RDS Custom for
Oracle primary DB instance. You can also create your own manually configured (external) Oracle
replicas. External replicas don't count toward your DB instance limit. They also lie outside the RDS
Custom support perimeter. For more information about the support perimeter, see RDS Custom support
perimeter (p. 985).
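
The following AWS CLI command is a sketch of creating a managed Oracle replica; the instance
identifiers are placeholders, and the instance profile name follows the naming convention used
elsewhere in this guide. Confirm the options that your configuration requires before running it.

aws rds create-db-instance-read-replica \
    --db-instance-identifier my-custom-read-replica \
    --source-db-instance-identifier my-custom-instance \
    --custom-iam-instance-profile AWSRDSCustomInstanceProfileForRdsCustomInstance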


Replica naming convention


Oracle replica names are based on the database unique name. The format is DB_UNIQUE_NAME_X, with
letters appended sequentially. For example, if your database unique name is ORCL, the first two replicas
are named ORCL_A and ORCL_B. The first six letters, A–F, are reserved for RDS Custom. RDS Custom
copies database parameters from your primary DB instance to the replicas. For more information, see
DB_UNIQUE_NAME in the Oracle documentation.

Replica backup retention


By default, RDS Custom Oracle replicas use the same backup retention period as your primary DB
instance. You can modify the backup retention period to 1–35 days. RDS Custom supports backing
up, restoring, and point-in-time recovery (PITR). For more information about backing up and restoring
RDS Custom DB instances, see Backing up and restoring an Amazon RDS Custom for Oracle DB
instance (p. 1065).
Note
While creating an Oracle replica, RDS Custom temporarily pauses the cleanup of redo log files.
In this way, RDS Custom ensures that it can apply these logs to the new Oracle replica after it
becomes available.
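
For example, the following AWS CLI command sets a seven-day backup retention period on a replica; the
instance identifier is a placeholder.

aws rds modify-db-instance \
    --db-instance-identifier my-custom-read-replica \
    --backup-retention-period 7 \
    --apply-immediately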

Replica promotion
You can promote managed Oracle replicas in RDS Custom for Oracle using the console, promote-read-
replica AWS CLI command, or PromoteReadReplica API. If you delete your primary DB instance, and
all replicas are healthy, RDS Custom for Oracle promotes your managed replicas to standalone instances
automatically. If a replica has paused automation or is outside the support perimeter, you must fix the
replica before RDS Custom can promote it automatically. You can only promote external Oracle replicas
manually.

Guidelines and limitations for RDS Custom for Oracle replication


When you create RDS Custom for Oracle replicas, not all RDS Oracle replica options are supported.

Topics
• General guidelines for RDS Custom for Oracle replication (p. 1061)
• General limitations for RDS Custom for Oracle replication (p. 1062)
• Networking requirements and limitations for RDS Custom for Oracle replication (p. 1062)
• External replica limitations for RDS Custom for Oracle (p. 1062)
• Replica promotion limitations for RDS Custom for Oracle (p. 1063)
• Replica promotion guidelines for RDS Custom for Oracle (p. 1063)

General guidelines for RDS Custom for Oracle replication


When working with RDS Custom for Oracle, follow these guidelines:

• Don't modify the RDS_DATAGUARD user. This user is reserved for RDS Custom for Oracle automation.
Modifying this user can result in undesired outcomes, such as an inability to create Oracle replicas for
your RDS Custom for Oracle DB instance.
• Don't change the replication user password. It is required to administer the Oracle Data Guard
configuration on the RDS Custom host. If you change the password, RDS Custom for Oracle might put
your Oracle replica outside the support perimeter. For more information, see RDS Custom support
perimeter (p. 985).

The password is stored in AWS Secrets Manager, tagged with the DB resource ID. Each Oracle replica
has its own secret in Secrets Manager. The format for the secret is the following.


do-not-delete-rds-custom-db-DB_resource_id-6-digit_UUID-dg

• Don't change the DB_UNIQUE_NAME for the primary DB instance. Changing the name causes any
restore operation to become stuck.
• Don't specify the clause STANDBYS=NONE in a CREATE PLUGGABLE DATABASE command in an RDS
Custom CDB. This way, if a failover occurs, your standby CDB contains all PDBs.

General limitations for RDS Custom for Oracle replication


RDS Custom for Oracle replicas have the following limitations:

• You can't create RDS Custom for Oracle replicas in read-only mode. However, you can manually change
the mode of mounted replicas to read-only, and from read-only to mounted. For more information,
see the documentation for the create-db-instance-read-replica AWS CLI command.
• You can't create cross-Region RDS Custom for Oracle replicas.
• You can't change the value of the Oracle Data Guard CommunicationTimeout parameter. This
parameter is set to 15 seconds for RDS Custom for Oracle DB instances.

Networking requirements and limitations for RDS Custom for Oracle replication
Make sure that your network configuration supports RDS Custom for Oracle replicas. Consider the
following:

• Make sure to enable port 1140 for both inbound and outbound communication within your virtual
private cloud (VPC) for the primary DB instance and all of its replicas. This is required for Oracle Data
Guard communication between read replicas (see the sketch following this list).
• RDS Custom for Oracle validates the network while creating an Oracle replica. If the primary DB
instance and the new replica can't connect over the network, RDS Custom for Oracle doesn't create the
replica and places it in the INCOMPATIBLE_NETWORK state.
• For external Oracle replicas, such as those you create on Amazon EC2 or on-premises, use another port
and listener for Oracle Data Guard replication. Trying to use port 1140 could cause conflicts with RDS
Custom automation.
• The /rdsdbdata/config/tnsnames.ora file contains network service names mapped to listener
protocol addresses. Note the following requirements and recommendations:
• Entries in tnsnames.ora prefixed with rds_custom_ are reserved for RDS Custom when handling
Oracle replica operations.

When creating manual entries in tnsnames.ora, don't use this prefix.


• In some cases, you might want to switch over or fail over manually, or use failover technologies such
as Fast-Start Failover (FSFO). If so, make sure to manually synchronize tnsnames.ora entries from
the primary DB instance to all of the standby instances. This recommendation applies to both Oracle
replicas managed by RDS Custom and to external Oracle replicas.

RDS Custom automation updates tnsnames.ora entries on only the primary DB instance. Make
sure also to synchronize when you add or remove an Oracle replica.

If you don't synchronize the tnsnames.ora files and switch over or fail over manually, Oracle Data
Guard on the primary DB instance might not be able to communicate with the Oracle replicas.
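
The following Amazon EC2 CLI command is a sketch of opening port 1140 for inbound traffic from members
of the same security group, which is one common way to allow Data Guard traffic within a VPC; the
security group ID is a placeholder, and your network design might call for a different rule. A matching
egress rule (or the default allow-all egress rule) is also needed.

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 1140 \
    --source-group sg-0123456789abcdef0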

External replica limitations for RDS Custom for Oracle


RDS Custom for Oracle external replicas, which include on-premises replicas, have the following
limitations:


• RDS Custom for Oracle doesn't detect instance role changes upon manual failover, such as FSFO, for
external Oracle replicas.

RDS Custom for Oracle does detect changes for managed replicas. The role change is noted in the
event log. You can also see the new state by using the describe-db-instances AWS CLI command.
• RDS Custom for Oracle doesn't detect high replication lag for external Oracle replicas.

RDS Custom for Oracle does detect lag for managed replicas. High replication lag produces the
Replication has stopped event. You can also see the replication status by using the describe-db-
instances AWS CLI command, but there might be a delay for it to be updated.
• RDS Custom for Oracle doesn't promote external Oracle replicas automatically if you delete your
primary DB instance.

The automatic promotion feature is available only for managed Oracle replicas. For information about
promoting Oracle replicas manually, see the white paper Enabling high availability with Data Guard on
Amazon RDS Custom for Oracle.

Replica promotion limitations for RDS Custom for Oracle


Promoting RDS Custom for Oracle managed Oracle replicas is the same as promoting RDS managed
replicas, with some differences. Note the following limitations for RDS Custom for Oracle replicas:

• You can't promote a replica while RDS Custom for Oracle is backing it up.
• You can't change the backup retention period to 0 when you promote your Oracle replica.
• You can't promote your replica when it isn't in a healthy state.

If you issue delete-db-instance on the primary DB instance, RDS Custom for Oracle validates that
each managed Oracle replica is healthy and available for promotion. A replica might be ineligible for
promotion because automation is paused or it is outside the support perimeter. In such cases, RDS
Custom for Oracle publishes an event explaining the issue so that you can repair your Oracle replica
manually.

Replica promotion guidelines for RDS Custom for Oracle


When promoting a replica, note the following guidelines:

• Don't initiate a failover while RDS Custom for Oracle is promoting your replica. Otherwise, the
promotion workflow could become stuck.
• Don't switch over your primary DB instance while RDS Custom for Oracle is promoting your Oracle
replica. Otherwise, the promotion workflow could become stuck.
• Don't shut down your primary DB instance while RDS Custom for Oracle is promoting your Oracle
replica. Otherwise, the promotion workflow could become stuck.
• Don't try to restart replication with your newly promoted DB instance as a target. After RDS Custom
for Oracle promotes your Oracle replica, it becomes a standalone DB instance and no longer has the
replica role.

For more information, see Troubleshooting replica promotion for RDS Custom for Oracle (p. 1086).

Promoting an RDS Custom for Oracle replica to a standalone DB instance
Just as with RDS for Oracle, you can promote an RDS Custom for Oracle replica to a standalone DB
instance. When you promote an Oracle replica, RDS Custom for Oracle reboots the DB instance before it
becomes available. For more information about promoting Oracle replicas, see Promoting a read replica
to be a standalone DB instance (p. 447).

The following steps show the general process for promoting an Oracle replica to a DB instance:

1. Stop any transactions from being written to the primary DB instance.


2. Wait for RDS Custom for Oracle to apply all updates to your Oracle replica (see the sketch following these steps).
3. Promote your Oracle replica by choosing the Promote option on the Amazon RDS console, the AWS
CLI command promote-read-replica, or the PromoteReadReplica Amazon RDS API operation.
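
One way to gauge step 2 is to query the Data Guard statistics view on the replica, as in the following
sketch; it assumes a physical standby and a session with SYSDBA privileges. Zero (or near-zero) lag
values indicate that the replica has caught up.

-- Check transport and apply lag on the standby.
SELECT NAME, VALUE FROM V$DATAGUARD_STATS
WHERE NAME IN ('transport lag', 'apply lag');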

Promoting an Oracle replica takes a few minutes to complete. During the process, RDS Custom for Oracle
stops replication and reboots your replica. When the reboot completes, the Oracle replica is available as
a standalone DB instance.

Console

To promote an RDS Custom for Oracle replica to a standalone DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the Amazon RDS console, choose Databases.

The Databases pane appears. Each Oracle replica shows Replica in the Role column.
3. Choose the RDS Custom for Oracle replica that you want to promote.
4. For Actions, choose Promote.
5. On the Promote Oracle replica page, enter the backup retention period and the backup window for
the newly promoted DB instance. You can't set this value to 0.
6. When the settings are as you want them, choose Promote Oracle replica.

AWS CLI

To promote your RDS Custom for Oracle replica to a standalone DB instance, use the AWS CLI promote-
read-replica command.

Example

For Linux, macOS, or Unix:

aws rds promote-read-replica \
    --db-instance-identifier my-custom-read-replica \
    --backup-retention-period 2 \
    --preferred-backup-window 23:00-24:00

For Windows:

aws rds promote-read-replica ^
    --db-instance-identifier my-custom-read-replica ^
    --backup-retention-period 2 ^
    --preferred-backup-window 23:00-24:00

RDS API

To promote your RDS Custom for Oracle replica to be a standalone DB instance, call the Amazon RDS API
PromoteReadReplica operation with the required parameter DBInstanceIdentifier.


Backing up and restoring an Amazon RDS Custom for Oracle DB instance
Like Amazon RDS, RDS Custom creates and saves automated backups of your RDS Custom for Oracle DB
instance during the backup window of your DB instance. You can also back up your DB instance manually.

The procedure is identical to taking a snapshot of an Amazon RDS DB instance. The first snapshot
of an RDS Custom DB instance contains the data for the full DB instance. Subsequent snapshots are
incremental.

Restore DB snapshots using either the AWS Management Console or the AWS CLI.

Topics
• Creating an RDS Custom for Oracle snapshot (p. 1065)
• Restoring from an RDS Custom for Oracle DB snapshot (p. 1066)
• Restoring an RDS Custom for Oracle instance to a point in time (p. 1067)
• Deleting an RDS Custom for Oracle snapshot (p. 1070)
• Deleting RDS Custom for Oracle automated backups (p. 1070)

Creating an RDS Custom for Oracle snapshot


RDS Custom for Oracle creates a storage volume snapshot of your DB instance, backing up the entire DB
instance and not just individual databases. When your DB instance contains a container database (CDB),
the snapshot of the instance includes the root CDB and all PDBs.

When you create an RDS Custom for Oracle snapshot, specify which RDS Custom DB instance to back up.
Give your snapshot a name so you can restore from it later.

When you create a snapshot, RDS Custom for Oracle creates an Amazon EBS snapshot for every volume
attached to the DB instance. RDS Custom for Oracle uses the EBS snapshot of the root volume to register
a new Amazon Machine Image (AMI). To make snapshots easy to associate with a specific DB instance,
they're tagged with DBSnapshotIdentifier, DbiResourceId, and VolumeType.
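
For example, assuming the DBSnapshotIdentifier tag described above, you might locate the EBS snapshots
that back a particular DB snapshot with a command like the following; the snapshot name is a
placeholder.

aws ec2 describe-snapshots \
    --filters Name=tag:DBSnapshotIdentifier,Values=my-custom-snapshot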

Creating a DB snapshot results in a brief I/O suspension. This suspension can last from a few seconds to
a few minutes, depending on the size and class of your DB instance. The snapshot creation time varies
with the size of your database. Because the snapshot includes the entire storage volume, the size of files,
such as temporary files, also affects snapshot creation time. To learn more about creating snapshots, see
Creating a DB snapshot (p. 613).

Create an RDS Custom for Oracle snapshot using the console or the AWS CLI.

Console

To create an RDS Custom snapshot

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. In the list of RDS Custom DB instances, choose the instance for which you want to take a snapshot.
4. For Actions, choose Take snapshot.

The Take DB snapshot window appears.


5. For Snapshot name, enter the name of the snapshot.


6. Choose Take snapshot.

AWS CLI

You create a snapshot of an RDS Custom DB instance by using the create-db-snapshot AWS CLI
command.

Specify the following options:

• --db-instance-identifier – Identifies which RDS Custom DB instance you are going to back up
• --db-snapshot-identifier – Names your RDS Custom snapshot so you can restore from it later

In this example, you create a DB snapshot called my-custom-snapshot for an RDS Custom DB instance
called my-custom-instance.

Example

For Linux, macOS, or Unix:

aws rds create-db-snapshot \
    --db-instance-identifier my-custom-instance \
    --db-snapshot-identifier my-custom-snapshot

For Windows:

aws rds create-db-snapshot ^
    --db-instance-identifier my-custom-instance ^
    --db-snapshot-identifier my-custom-snapshot

Restoring from an RDS Custom for Oracle DB snapshot


When you restore an RDS Custom for Oracle DB instance, you provide the name of the DB snapshot and
a name for the new instance. You can't restore from a snapshot to an existing RDS Custom DB instance. A
new RDS Custom for Oracle DB instance is created when you restore.

The restore process differs in the following ways from restore in Amazon RDS:

• Before restoring a snapshot, RDS Custom for Oracle backs up existing configuration files. These files
are available on the restored instance in the directory /rdsdbdata/config/backup. RDS Custom
for Oracle restores the DB snapshot with default parameters and overwrites the previous database
configuration files with existing ones. Thus, the restored instance doesn't preserve custom parameters
and changes to database configuration files.
• The restored database has the same name as in the snapshot. You can't specify a different name. (For
RDS Custom for Oracle, the default is ORCL.)

Console

To restore an RDS Custom DB instance from a DB snapshot

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the DB snapshot that you want to restore from.
4. For Actions, choose Restore snapshot.


5. On the Restore DB instance page, for DB instance identifier, enter the name for your restored RDS
Custom DB instance.
6. Choose Restore DB instance.

AWS CLI

You restore an RDS Custom DB snapshot by using the restore-db-instance-from-db-snapshot AWS CLI
command.

If the snapshot you are restoring from is for a private DB instance, make sure to specify both the correct
db-subnet-group-name and no-publicly-accessible. Otherwise, the DB instance defaults to
publicly accessible. The following options are required:

• db-snapshot-identifier – Identifies the snapshot from which to restore


• db-instance-identifier – Specifies the name of the RDS Custom DB instance to create from the
DB snapshot
• custom-iam-instance-profile – Specifies the instance profile associated with the underlying
Amazon EC2 instance of an RDS Custom DB instance.

The following code restores the snapshot named my-custom-snapshot for my-custom-instance.

Example

For Linux, macOS, or Unix:

aws rds restore-db-instance-from-db-snapshot \
    --db-snapshot-identifier my-custom-snapshot \
    --db-instance-identifier my-custom-instance \
    --custom-iam-instance-profile AWSRDSCustomInstanceProfileForRdsCustomInstance \
    --no-publicly-accessible

For Windows:

aws rds restore-db-instance-from-db-snapshot ^
    --db-snapshot-identifier my-custom-snapshot ^
    --db-instance-identifier my-custom-instance ^
    --custom-iam-instance-profile AWSRDSCustomInstanceProfileForRdsCustomInstance ^
    --no-publicly-accessible

Restoring an RDS Custom for Oracle instance to a point in time


You can restore a DB instance to a specific point in time (PITR), creating a new DB instance. To support
PITR, your DB instances must have backup retention set to a nonzero value.

The latest restorable time for an RDS Custom for Oracle DB instance depends on several factors,
but is typically within 5 minutes of the current time. To see the latest restorable time for a DB
instance, use the AWS CLI describe-db-instances command and look at the value returned in the
LatestRestorableTime field for the DB instance. To see the latest restorable time for each DB
instance in the Amazon RDS console, choose Automated backups.
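
For example, the following command returns only the LatestRestorableTime field for a single DB
instance; the instance identifier is a placeholder.

aws rds describe-db-instances \
    --db-instance-identifier my-custom-instance \
    --query 'DBInstances[0].LatestRestorableTime'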

You can restore to any point in time within your backup retention period. To see the earliest restorable
time for each DB instance, choose Automated backups in the Amazon RDS console.

For general information about PITR, see Restoring a DB instance to a specified time (p. 660).

Topics


• PITR considerations for RDS Custom for Oracle (p. 1068)

PITR considerations for RDS Custom for Oracle


In RDS Custom for Oracle, PITR differs in the following important ways from PITR in Amazon RDS:

• The restored database has the same name as in the source DB instance. You can't specify a different
name. The default is ORCL.
• AWSRDSCustomIamRolePolicy requires new permissions. For more information, see Step 2: Add an
access policy to AWSRDSCustomInstanceRoleForRdsCustomInstance (p. 1007).
• All RDS Custom for Oracle DB instances must have backup retention set to a nonzero value.
• If you change the operating system or DB instance time zone, PITR might not work. For information
about changing time zones, see Changing the time zone of an RDS Custom for Oracle DB
instance (p. 1055).
• If you set automation to ALL_PAUSED, RDS Custom pauses the upload of archived redo logs, including
logs created before the latest restorable time (LRT). We recommend that you pause automation for a
brief period.

To illustrate, assume that your LRT is 10 minutes ago. You pause automation. During the pause, RDS
Custom doesn't upload archived redo logs. If your DB instance crashes, you can only recover to a time
before the LRT that existed when you paused. When you resume automation, RDS Custom resumes
uploading logs. The LRT advances. Normal PITR rules apply.
• In RDS Custom, you can manually specify an arbitrary number of hours to retain archived redo logs
before RDS Custom deletes them after upload. Specify the number of hours as follows (see the sketch
following this list):
1. Create a text file named /opt/aws/rdscustomagent/config/redo_logs_custom_configuration.json.
2. Add a JSON object in the following format: {"archivedLogRetentionHours" : "num_of_hours"}. The
number must be an integer in the range 1–840.
• Assume that you plug a non-CDB into a container database (CDB) as a PDB and then attempt PITR. The
operation succeeds only if you previously backed up the PDB. After you create or modify a PDB, we
recommend that you always back it up.
• We recommend that you don't customize database initialization parameters. For example, modifying
the following parameters affects PITR:
• CONTROL_FILE_RECORD_KEEP_TIME affects the rules for uploading and deleting logs.
• LOG_ARCHIVE_DEST_n doesn't support multiple destinations.
• ARCHIVE_LAG_TARGET affects the latest restorable time.
• If you customize database initialization parameters, we strongly recommend that you only customize
the following:
• COMPATIBLE
• MAX_STRING_SIZE
• DB_FILES
• UNDO_TABLESPACE
• ENABLE_PLUGGABLE_DATABASE
• CONTROL_FILES
• AUDIT_TRAIL
• AUDIT_TRAIL_DEST

For all other initialization parameters, RDS Custom restores the default values. If you modify a
parameter that isn't in the preceding list, it might have an adverse effect on PITR and lead to
unpredictable results. For example, CONTROL_FILE_RECORD_KEEP_TIME affects the rules for
uploading and deleting logs.
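
The following shell sketch shows one way to create the redo log retention file described earlier in
this list; the 72-hour value is only an example, and it assumes you have shell access to the host with
permission to write to the RDS Custom agent configuration directory.

# Write the retention setting (72 hours here) to the agent configuration file.
echo '{"archivedLogRetentionHours": "72"}' | sudo tee \
    /opt/aws/rdscustomagent/config/redo_logs_custom_configuration.json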


You can restore an RDS Custom DB instance to a point in time using the AWS Management Console, the
AWS CLI, or the RDS API.

Console

To restore an RDS Custom DB instance to a specified time

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Automated backups.
3. Choose the RDS Custom DB instance that you want to restore.
4. For Actions, choose Restore to point in time.

The Restore to point in time window appears.


5. Choose Latest restorable time to restore to the latest possible time, or choose Custom to choose a
time.

If you chose Custom, enter the date and time to which you want to restore the instance.

Times are shown in your local time zone, which is indicated by an offset from Coordinated Universal
Time (UTC). For example, UTC-5 is Eastern Standard Time/Central Daylight Time.
6. For DB instance identifier, enter the name of the target restored RDS Custom DB instance. The
name must be unique.
7. Choose other options as needed, such as DB instance class.
8. Choose Restore to point in time.

AWS CLI

You restore a DB instance to a specified time by using the restore-db-instance-to-point-in-time AWS CLI
command to create a new RDS Custom DB instance.

Use one of the following options to specify the backup to restore from:

• --source-db-instance-identifier mysourcedbinstance
• --source-dbi-resource-id dbinstanceresourceID
• --source-db-instance-automated-backups-arn backupARN

The custom-iam-instance-profile option is required.

The following example restores my-custom-db-instance to a new DB instance named
my-restored-custom-db-instance, as of the specified time.

Example

For Linux, macOS, or Unix:

aws rds restore-db-instance-to-point-in-time \
    --source-db-instance-identifier my-custom-db-instance \
    --target-db-instance-identifier my-restored-custom-db-instance \
    --custom-iam-instance-profile AWSRDSCustomInstanceProfileForRdsCustomInstance \
    --restore-time 2022-10-14T23:45:00.000Z

For Windows:

aws rds restore-db-instance-to-point-in-time ^
    --source-db-instance-identifier my-custom-db-instance ^
    --target-db-instance-identifier my-restored-custom-db-instance ^
    --custom-iam-instance-profile AWSRDSCustomInstanceProfileForRdsCustomInstance ^
    --restore-time 2022-10-14T23:45:00.000Z

Deleting an RDS Custom for Oracle snapshot


You can delete DB snapshots managed by RDS Custom for Oracle when you no longer need them. The
deletion procedure is the same for both Amazon RDS and RDS Custom DB instances.

The Amazon EBS snapshots for the binary and root volumes remain in your account for a longer time
because they might be linked to some instances running in your account or to other RDS Custom for
Oracle snapshots. These EBS snapshots are automatically deleted after they're no longer related to any
existing RDS Custom for Oracle resources (DB instances or backups).

Console

To delete a snapshot of an RDS Custom DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the DB snapshot that you want to delete.
4. For Actions, choose Delete snapshot.
5. Choose Delete on the confirmation page.

AWS CLI

To delete an RDS Custom snapshot, use the AWS CLI command delete-db-snapshot.

The following option is required:

• --db-snapshot-identifier – The snapshot to be deleted

The following example deletes the my-custom-snapshot DB snapshot.

Example

For Linux, macOS, or Unix:

aws rds delete-db-snapshot \
    --db-snapshot-identifier my-custom-snapshot

For Windows:

aws rds delete-db-snapshot ^
    --db-snapshot-identifier my-custom-snapshot

Deleting RDS Custom for Oracle automated backups


You can delete retained automated backups for RDS Custom for Oracle when they are no longer needed.
The procedure is the same as the procedure for deleting Amazon RDS backups.


Console

To delete a retained automated backup

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Automated backups.
3. Choose Retained.
4. Choose the retained automated backup that you want to delete.
5. For Actions, choose Delete.
6. On the confirmation page, enter delete me and choose Delete.

AWS CLI

You can delete a retained automated backup by using the AWS CLI command delete-db-instance-
automated-backup.

The following option is used to delete a retained automated backup:

• --dbi-resource-id – The resource identifier for the source RDS Custom DB instance.

You can find the resource identifier for the source DB instance of a retained automated backup by
using the AWS CLI command describe-db-instance-automated-backups.

The following example deletes the retained automated backup with source DB instance resource
identifier custom-db-123ABCEXAMPLE.

Example

For Linux, macOS, or Unix:

aws rds delete-db-instance-automated-backup \
    --dbi-resource-id custom-db-123ABCEXAMPLE

For Windows:

aws rds delete-db-instance-automated-backup ^
    --dbi-resource-id custom-db-123ABCEXAMPLE


Migrating an on-premises database to RDS Custom for Oracle
Before you migrate an on-premises Oracle database to RDS Custom for Oracle, you need to consider the
following factors:

• The amount of downtime the application can afford


• The size of the source database
• Network connectivity
• A requirement for a fallback plan
• The source and target Oracle database version and DB instance OS types
• Available replication tools, such as AWS Database Migration Service, Oracle GoldenGate, or third-party
replication tools

Based on these factors, you can choose physical migration, logical migration, or a combination. If you
choose physical migration, you can use the following techniques:

RMAN duplication

Active database duplication doesn’t require a backup of your source database. It duplicates the live
source database to the destination host by copying database files over the network to the auxiliary
instance. The RMAN DUPLICATE command copies the required files as image copies or backup sets.
To learn this technique, see the AWS blog post Physical migration of Oracle databases to Amazon
RDS Custom using RMAN duplication.
Oracle Data Guard

In this technique, you back up a primary on-premises database and copy the backups to an Amazon
S3 bucket. You then copy the backups to your RDS Custom for Oracle standby DB instance. After
performing the necessary configuration, you manually switch over your primary database to your
RDS Custom for Oracle standby database. To learn this technique, see the AWS blog post Physical
migration of Oracle databases to Amazon RDS Custom using Data Guard.

For general information about logically importing data into RDS for Oracle, see Importing data into
Oracle on Amazon RDS (p. 1947).


Upgrading a DB instance for Amazon RDS Custom for Oracle
You can upgrade an Amazon RDS Custom DB instance by modifying it to use a new custom engine
version (CEV). For general information about upgrades, see Upgrading a DB instance engine
version (p. 429).

Topics
• Requirements for RDS Custom for Oracle upgrades (p. 1073)
• Considerations for RDS Custom for Oracle upgrades (p. 1073)
• Viewing valid upgrade targets for RDS Custom for Oracle DB instances (p. 1074)
• Upgrading an RDS Custom DB instance (p. 1075)
• Viewing pending upgrades for RDS Custom DB instances (p. 1075)
• Troubleshooting an upgrade failure for an RDS Custom for Oracle DB instance (p. 1076)

Requirements for RDS Custom for Oracle upgrades


When upgrading your RDS Custom for Oracle DB instance to a target CEV, make sure you meet the
following requirements:

• The target CEV to which you are upgrading must exist.


• The target CEV must use the installation parameter settings that are in the manifest of the current
CEV. For example, you can't upgrade a database that uses the default Oracle home to a CEV that uses a
nondefault Oracle home.
• The target CEV must use a new minor database version, not a new major version. For example, you
can't upgrade from an Oracle Database 12c CEV to an Oracle Database 19c CEV. But you can upgrade
from version 21.0.0.0.ru-2023-04.rur-2023-04.r1 to version 21.0.0.0.ru-2023-07.rur-2023-07.r1.

Considerations for RDS Custom for Oracle upgrades


When planning an upgrade, consider the following:

• We strongly recommend that you upgrade your RDS Custom for Oracle DB instance using CEVs. RDS
Custom for Oracle automation synchronizes the patch metadata with the database binary on your DB
instance.

In special circumstances, RDS Custom supports applying a "one-off" patch directly to the underlying
Amazon EC2 instance directly using OPATCH. A valid use case might be a patch that you want to apply
immediately, but the RDS Custom team is upgrading the CEV feature, causing a delay. To apply a patch
manually, perform the following steps:
1. Pause RDS Custom automation.
2. Apply your patch to the database binaries on the Amazon EC2 instance.
3. Resume RDS Custom automation.

A disadvantage of the preceding technique is that you must apply the patch manually to every
instance that you want to upgrade. In contrast, when you create a new CEV, you can create or upgrade
multiple DB instances using the same CEV.
• When you upgrade your primary DB instance, RDS Custom for Oracle upgrades your read replicas
automatically. You don't have to upgrade read replicas manually.
• When you upgrade your RDS Custom for Oracle DB instance to a new CEV, RDS Custom performs out-
of-place patching that replaces the entire database volume with a new volume that uses your target
database version. Thus, we strongly recommend that you don't use the bin volume for installations or
for storing permanent data or files.
• When you upgrade a container database (CDB), RDS Custom for Oracle checks that all PDBs are open
or could be opened. If these conditions aren't met, RDS Custom stops the check and returns the
database to its original state without attempting the upgrade. If the conditions are met, RDS Custom
patches the CDB root first, and then patches all other PDBs (including PDB$SEED) in parallel.

After the patching process completes, RDS Custom attempts to open all PDBs. If any PDBs fail to open,
you receive the following event: The following PDBs failed to open: list-of-PDBs. If RDS
Custom fails to patch the CDB root or any PDBs, the instance is put into the PATCH_DB_FAILED state.
• You might want to perform a major version upgrade and a conversion of non-CDB to CDB at the same
time. In this case, we recommend that you proceed as follows:
1. Create a new RDS Custom DB instance that uses the Oracle Multitenant architecture.
2. Plug in a non-CDB into your CDB root, creating it as a PDB. Make sure that the non-CDB is the same
major version as your CDB.
3. Convert your PDB by running the noncdb_to_pdb.sql Oracle script.
4. Validate your CDB instance.
5. Upgrade your CDB instance.

Viewing valid upgrade targets for RDS Custom for Oracle DB instances
You can see existing CEVs on the Custom engine versions page in the AWS Management Console.

You can also use the describe-db-engine-versions AWS CLI command to find valid upgrades for your
DB instances, as shown in the following example. This example assumes that a DB instance was created
using the version 19.my_cev1, and that the upgrade versions 19.my_cev2 and 19.my_cev3 exist.

aws rds describe-db-engine-versions --engine custom-oracle-ee --engine-version 19.my_cev1

The output resembles the following.

{
    "DBEngineVersions": [
        {
            "Engine": "custom-oracle-ee",
            "EngineVersion": "19.my_cev1",
            ...
            "ValidUpgradeTarget": [
                {
                    "Engine": "custom-oracle-ee",
                    "EngineVersion": "19.my_cev2",
                    "Description": "19.my_cev2 description",
                    "AutoUpgrade": false,
                    "IsMajorVersionUpgrade": false
                },
                {
                    "Engine": "custom-oracle-ee",
                    "EngineVersion": "19.my_cev3",
                    "Description": "19.my_cev3 description",
                    "AutoUpgrade": false,
                    "IsMajorVersionUpgrade": false
                }
            ]
            ...


Upgrading an RDS Custom DB instance


To upgrade your RDS Custom DB instance, you modify it to use a new CEV.

Read replicas managed by RDS Custom are automatically upgraded after the primary DB instance is
upgraded.

Console

To upgrade an RDS Custom DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS Custom DB instance that you
want to modify.
3. Choose Modify. The Modify DB instance page appears.
4. For DB engine version, choose the CEV to upgrade to, such as 19.my_cev3.
5. Choose Continue to check the summary of modifications.

Choose Apply immediately to apply the changes immediately.


6. If your changes are correct, choose Modify DB instance. Or choose Back to edit your changes or
Cancel to cancel your changes.

AWS CLI
To upgrade an RDS Custom DB instance, use the modify-db-instance AWS CLI command with the
following parameters:

• --db-instance-identifier – The DB instance to be upgraded


• --engine-version – The new CEV
• --no-apply-immediately | --apply-immediately – Whether to perform the upgrade
immediately or wait until the scheduled maintenance window

The following example upgrades my-custom-instance to version 19.my_cev3.

Example
For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier my-custom-instance \
    --engine-version 19.my_cev3 \
    --apply-immediately

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier my-custom-instance ^
    --engine-version 19.my_cev3 ^
    --apply-immediately

Viewing pending upgrades for RDS Custom DB instances


You can see pending upgrades for your Amazon RDS Custom DB instances by using the describe-db-
instances or describe-pending-maintenance-actions AWS CLI command.


However, this approach doesn't work if you used the --apply-immediately option or if the upgrade is
in progress.

The following describe-db-instances command shows pending upgrades for my-custom-instance.

aws rds describe-db-instances --db-instance-identifier my-custom-instance

The output resembles the following.

{
    "DBInstances": [
        {
            "DBInstanceIdentifier": "my-custom-instance",
            "EngineVersion": "19.my_cev1",
            ...
            "PendingModifiedValues": {
                "EngineVersion": "19.my_cev3"
                ...
            }
        }
    ]
}

The following shows use of the describe-pending-maintenance-actions command.

aws rds describe-pending-maintenance-actions

The output resembles the following.

{
    "PendingMaintenanceActions": [
        {
            "ResourceIdentifier": "arn:aws:rds:us-west-2:123456789012:instance:my-custom-instance",
            "PendingMaintenanceActionDetails": [
                {
                    "Action": "db-upgrade",
                    "Description": "Upgrade to 19.my_cev3"
                }
            ]
        }
    ]
}

Troubleshooting an upgrade failure for an RDS Custom for Oracle DB instance
If an RDS Custom DB instance upgrade fails, an RDS event is generated and the DB instance status
becomes upgrade-failed.

You can see this status by using the describe-db-instances AWS CLI command, as shown in the following
example.

aws rds describe-db-instances --db-instance-identifier my-custom-instance

The output resembles the following.


{
    "DBInstances": [
        {
            "DBInstanceIdentifier": "my-custom-instance",
            "EngineVersion": "19.my_cev1",
            ...
            "PendingModifiedValues": {
                "EngineVersion": "19.my_cev3"
                ...
            },
            "DBInstanceStatus": "upgrade-failed"
        }
    ]
}

After an upgrade failure, all database actions are blocked except for modifying the DB instance to
perform the following tasks:

• Retrying the same upgrade


• Pausing and resuming RDS Custom automation
• Point-in-time recovery (PITR)
• Deleting the DB instance

Note
If automation has been paused for the RDS Custom DB instance, you can't retry the upgrade
until you resume automation.
The same actions apply to an upgrade failure for an RDS-managed read replica as for the
primary.

For more information, see Troubleshooting upgrades for RDS Custom for Oracle (p. 1085).


Troubleshooting DB issues for Amazon RDS Custom for Oracle
The shared responsibility model of RDS Custom provides OS shell–level access and database
administrator access. RDS Custom runs resources in your account, unlike Amazon RDS, which runs
resources in a system account. With greater access comes greater responsibility. In the following sections,
you can learn how to troubleshoot issues with Amazon RDS Custom DB instances.
Note
This section explains how to troubleshoot RDS Custom for Oracle. For troubleshooting RDS
Custom for SQL Server, see Troubleshooting DB issues for Amazon RDS Custom for SQL
Server (p. 1169).

Topics
• Viewing RDS Custom events (p. 1078)
• Subscribing to RDS Custom event notification (p. 1078)
• Troubleshooting custom engine version creation for RDS Custom for Oracle (p. 1079)
• Fixing unsupported configurations in RDS Custom for Oracle (p. 1080)
• Troubleshooting upgrades for RDS Custom for Oracle (p. 1085)
• Troubleshooting replica promotion for RDS Custom for Oracle (p. 1086)

Viewing RDS Custom events


The procedure for viewing events is the same for RDS Custom and Amazon RDS DB instances. For more
information, see Viewing Amazon RDS events (p. 852).

To view RDS Custom event notification using the AWS CLI, use the describe-events command. RDS
Custom introduces several new events. The event categories are the same as for Amazon RDS. For the list
of events, see Amazon RDS event categories and event messages (p. 874).

The following example retrieves details for the events that have occurred for the specified RDS Custom
DB instance.

aws rds describe-events \
    --source-identifier my-custom-instance \
    --source-type db-instance

Subscribing to RDS Custom event notification


The procedure for subscribing to events is the same for RDS Custom and Amazon RDS DB instances. For
more information, see Subscribing to Amazon RDS event notification (p. 860).

To subscribe to RDS Custom event notification using the CLI, use the create-event-subscription
command. Include the following required parameters:

• --subscription-name
• --sns-topic-arn

The following example creates a subscription for backup and recovery events for an RDS Custom DB
instance in the current AWS account. Notifications are sent to an Amazon Simple Notification Service
(Amazon SNS) topic, specified by --sns-topic-arn.

aws rds create-event-subscription \
    --subscription-name my-instance-events \
    --source-type db-instance \
    --event-categories '["backup","recovery"]' \
    --sns-topic-arn arn:aws:sns:us-east-1:123456789012:interesting-events

Troubleshooting custom engine version creation for RDS Custom for Oracle
When CEV creation fails, RDS Custom issues RDS-EVENT-0198 with the message Creation failed
for custom engine version major-engine-version.cev_name, and includes details about the
failure. For example, the event prints missing files.

CEV creation might fail because of the following issues:

• The Amazon S3 bucket containing your installation files isn't in the same AWS Region as your CEV.
• When you request CEV creation in an AWS Region for the first time, RDS Custom creates an S3 bucket
for storing RDS Custom resources (such as CEV artifacts, AWS CloudTrail logs, and transaction logs).

CEV creation fails if RDS Custom can't create the S3 bucket. Either the caller doesn't have S3
permissions as described in Step 4: Grant required permissions to your IAM user or role (p. 1012), or
the number of S3 buckets has reached the limit.
• The caller doesn't have permissions to get files from your S3 bucket that contains the installation
media files. These permissions are described in Step 7: Add necessary IAM permissions (p. 1026).
• Your IAM policy has an aws:SourceIp condition. Make sure to follow the recommendations in AWS
Denies access to AWS based on the source IP in the AWS Identity and Access Management User Guide.
Also make sure that the caller has the S3 permissions described in Step 4: Grant required permissions
to your IAM user or role (p. 1012).
• Installation media files listed in the CEV manifest aren't in your S3 bucket.
• The SHA-256 checksums of the installation files are unknown to RDS Custom.

Confirm that the SHA-256 checksums of the provided files match the SHA-256 checksum on the
Oracle website. If the checksums match, contact AWS Support and provide the failed CEV name, file
name, and checksum.
• The OPatch version is incompatible with your patch files. You might get the following message:
OPatch is lower than minimum required version. Check that the version meets
the requirements for all patches, and try again. To apply an Oracle patch, you must use
a compatible version of the OPatch utility. You can find the required version of the Opatch utility in
the readme file for the patch. Download the most recent OPatch utility from My Oracle Support, and
try creating your CEV again.
• The patches specified in the CEV manifest are in the wrong order.

You can view RDS events either on the RDS console (in the navigation pane, choose Events) or by
using the describe-events AWS CLI command. The default duration is 60 minutes. If no events are
returned, specify a longer duration, as shown in the following example.

aws rds describe-events --duration 360

Currently, the MediaImport service that imports files from Amazon S3 to create CEVs isn't integrated
with AWS CloudTrail. Therefore, if you turn on data logging for Amazon RDS in CloudTrail, calls to the
MediaImport service such as the CreateCustomDbEngineVersion event aren't logged.

However, you might see calls from the API gateway that accesses your Amazon S3 bucket. These calls
come from the MediaImport service for the CreateCustomDbEngineVersion event.


Fixing unsupported configurations in RDS Custom for Oracle


In the shared responsibility model, it's your responsibility to fix configuration issues that put your RDS
Custom for Oracle DB instance into the unsupported-configuration state. If the issue is with the
AWS infrastructure, you can use the console or the AWS CLI to fix it. If the issue is with the operating
system or the database configuration, you can log in to the host to fix it.
Note
This section explains how to fix unsupported configurations in RDS Custom for Oracle. For
information about RDS Custom for SQL Server, see Fixing unsupported configurations in RDS
Custom for SQL Server (p. 1172).

In the following list, you can find descriptions of the notifications and events that the support
perimeter sends and how to fix them. These notifications and the support perimeter are subject to
change. For background on the support perimeter, see RDS Custom support perimeter (p. 985). For event
descriptions, see Amazon RDS event categories and event messages (p. 874).

Database

Database health

RDS event message: You need to manually recover the database on EC2 instance [i-xxxxxxxxxxxxxxxxx]. The
DB instance restarted.

Description: The support perimeter monitors the DB instance state. It also monitors how many restarts
occurred during the previous hour and day. You're notified when the instance is in a state where it
still exists, but you can't interact with it.

Action: Log in to your host and examine the database state.

ps -eo pid,state,command | grep smon

Restart your RDS Custom for Oracle DB instance if necessary to get it running again. Sometimes it's
necessary to reboot the host. After the restart, the RDS Custom agent detects that your DB instance is
no longer in an unresponsive state. It then notifies the support perimeter to reevaluate your DB
instance state.

Oracle Data Guard role

RDS event message: The database role [LOGICAL STANDBY] isn't supported. Validate the Oracle Data Guard
configuration for the database on Amazon EC2 instance [i-xxxxxxxxxxxxxxxxx].

Description: The support perimeter monitors the current database role every 15 seconds and sends a
CloudWatch notification if the database role has changed. The Oracle Data Guard DATABASE_ROLE parameter
must be either PRIMARY or PHYSICAL STANDBY.

Action: Restore your Oracle Data Guard database role to a supported value. RDS Custom only supports the
PRIMARY and PHYSICAL STANDBY roles. You can use the following statement to check the role:

SELECT DATABASE_ROLE FROM V$DATABASE;

If your RDS Custom for Oracle DB instance is standalone, you can use either of the following statements
to change it back to the PRIMARY role:

ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

ALTER DATABASE ACTIVATE STANDBY DATABASE;

If your DB instance is a replica, you can use the following statement to change it back to the PHYSICAL
STANDBY role:

ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

After the support perimeter determines that the database role is supported, your RDS Custom for Oracle
DB instance becomes available within 15 seconds.

Database archive lag target

RDS event message: The monitored Archive Lag Target database parameter on Amazon EC2 instance
[i-xxxxxxxxxxxxxxxxx] has changed from [300] to [0]. The RDS Custom instance is using an unsupported
configuration because of the following [1] issue(s): (1) The archive lag target database parameter on
Amazon EC2 instance [i-xxxxxxxxxxxxxxxxx] is out of desired range {"lowerbound":60,"upperbound":7200}.

Description: The support perimeter monitors the ARCHIVE_LAG_TARGET database parameter to verify that
the DB instance's latest restorable time is within reasonable bounds.

Action: Log in to your host, connect to your RDS Custom for Oracle DB instance, and change the
ARCHIVE_LAG_TARGET parameter to a value from 60–7200. For example, use the following SQL command.

ALTER SYSTEM SET ARCHIVE_LAG_TARGET=300 SCOPE=BOTH;

Your DB instance becomes available within 30 minutes.

Database log mode

RDS event message: The monitored log mode of the database on Amazon EC2 instance [i-xxxxxxxxxxxxxxxxx]
has changed from [ARCHIVELOG] to [NOARCHIVELOG].

Description: The DB instance log mode must be set to ARCHIVELOG.

Action: Log in to your host and shut down your RDS Custom for Oracle DB instance. Use the following SQL
statement to initiate a consistent shutdown.

SHUTDOWN IMMEDIATE;

The RDS Custom agent restarts your DB instance and sets the log mode to ARCHIVELOG. Your DB instance
becomes available within 30 minutes.

Operating system

RDS Custom agent status

RDS event message: The monitored state of the RDS Custom agent on EC2 instance [i-xxxxxxxxxxxxxxxxx]
has changed from RUNNING to STOPPED.

Description: The RDS Custom agent must always be running. The agent publishes the IamAlive metric to
Amazon CloudWatch every 30 seconds. An alarm is triggered if the metric hasn't been published for 30
seconds. The support perimeter also monitors the RDS Custom agent process state on the host every 30
minutes. On RDS Custom for Oracle, the DB instance goes outside the support perimeter if the RDS Custom
agent stops.

Action: Log in to your host and make sure that the RDS Custom agent is running.

You can use the following command to find the status of the agent.

service rdscustomagent status

You can use the following command to start the agent.

service rdscustomagent start

When the RDS Custom agent is running again, the IamAlive metric is published to Amazon CloudWatch, and
the alarm switches to the OK state. This switch notifies the support perimeter that the agent is
running.

AWS Systems Manager agent (SSM agent) status

RDS event message: The AWS Systems Manager agent on EC2 instance [i-xxxxxxxxxxxxxxxxx] is currently
unreachable. Make sure you have correctly configured the network, agent, and IAM permissions.

Description: The SSM agent must always be running. The RDS Custom agent is responsible for making sure
that the Systems Manager agent is running. If the SSM agent was down and restarted, the RDS Custom
agent publishes a metric to CloudWatch. The RDS Custom agent has an alarm on the metric set to trigger
when there has been a restart in each of the previous three minutes. The support perimeter also
monitors the SSM agent process state on the host every 30 minutes.

Action: For more information, see Troubleshooting SSM Agent.

sudo configurations

RDS event message: The sudo configurations on EC2 instance [i-xxxxxxxxxxxxxxxxx] have changed.

Description: The support perimeter monitors that certain OS users are allowed to run certain commands
on the box. It monitors sudo configurations against the supported state.

Action: If the overwrite is unsuccessful, you can log in to your host and investigate why recent
changes to the sudo configurations aren't supported. You can use the following command.

visudo -c -f /etc/sudoers.d/individual_sudo_files
When the sudo
configurations aren't
After the support perimeter
supported, the
determines that the sudo
RDS Custom tries
configurations are supported, the your
to overwrite them
RDS Custom for Oracle DB instance
back to the previous
becomes available within 30 minutes.
supported state. If
that is successful, the
following notification
is sent:

RDS Custom
successfully overwrote
your configuration.

AWS resources

Amazon EC2 instance The state of the The support perimeter If your EC2 instance is stopped, start
state EC2 instance monitors EC2 it and remount the binary and data
[i- instance state-change volumes.
xxxxxxxxxxxxxxxxx] notifications. The EC2
has changed from instance must always If your EC2 instance is terminated,
[RUNNING] to be running. delete your RDS Custom for Oracle DB
[STOPPING]. instance.

The Amazon
EC2 instance
[i-
xxxxxxxxxxxxxxxxx]
has been terminated
and can't be found.
Delete the database
instance to clean up
resources.

The Amazon
EC2 instance
[i-
xxxxxxxxxxxxxxxxx]
has been stopped.
Start the instance,
and restore the host
configuration. For
more information, see
the troubleshooting
documentation.

1083
Amazon Relational Database Service User Guide
Troubleshooting RDS Custom for Oracle

Configuration RDS event message Description Action

Amazon EC2 instance The attributes of The support perimeter Change the EC2 instance type back to
attributes Amazon EC2 instance monitors the instance the original type using the EC2 console
[i- type of the EC2 or CLI.
xxxxxxxxxxxxxxxxx] instance where the
have changed. RDS Custom DB To change the instance type because
instance is running. of scaling requirements, begin point-
The EC2 instance type in-time recovery and specify the new
must stay the same instance type and class. This action
as when you set it up results in a new RDS Custom DB
during RDS Custom DB instance with a new host and Domain
instance creation. Name System (DNS) name.

Amazon Elastic Block The following RDS Custom creates If you detached any initial EBS
Store (Amazon EBS) Amazon EBS volumes two types of EBS volumes, contact AWS Support.
volumes are attached to volume, besides the
Amazon EC2 instance root volume created If you modified the storage type,
[i- from the Amazon Provisioned IOPS, or storage
xxxxxxxxxxxxxxxxx]: Machine Image (AMI), throughput of an EBS volume, revert
[[vol- and associates them the modification to the original value.
01234abcd56789ef0, with the EC2 instance.
vol- If you modified the storage size of an
0def6789abcd01234]]. The binary volume is EBS volume, contact AWS Support.
where the database
The original Amazon software binaries are (RDS Custom for Oracle only) If you
EBS volumes located. The data attached any additional EBS volumes,
attached to Amazon volumes are where do either of the following:
EC2 instance database files are
• Detach the additional EBS volumes
[i- located. The storage
from the RDS Custom DB instance.
xxxxxxxxxxxxxxxxx] configurations that
have been detached you set when creating • Contact AWS Support.
or modified. You can’t the DB instance are
attach or modify the used to configure the
initial EBS volumes data volumes.
attached to an RDS
Custom instance. The support perimeter
monitors the
following:

• The initial EBS


volumes created
with the DB instance
are still associated.
• The initial EBS
volumes still
have the same
configurations
as initially set:
storage type,
size, Provisioned
IOPS, and storage
throughput.
• No additional
EBS volumes are
attached to the DB
instance.

1084
Amazon Relational Database Service User Guide
Troubleshooting RDS Custom for Oracle

Configuration RDS event message Description Action

EBS-optimized state The EBS-optimized Amazon EC2 instances To turn on the EBS-optimized
attribute of Amazon should be EBS attribute:
EC2 instance optimized.
[i- 1. Stop the EC2 instance.
xxxxxxxxxxxxxxxxx] If the EBS- 2. Set the EBS-optimized attribute
has changed optimized attribute to enabled.
from [enabled] to is turned off
3. Start the EC2 instance.
[disabled]. (disabled), the
support perimeter 4. Remount the binary and data
doesn't put the DB volumes.
instance into the
unsupported-
configuration
state.
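The notifications described above are delivered as RDS events. If you want to review them from the AWS CLI, the following command is a minimal sketch; my-custom-oracle-instance is a placeholder for your DB instance identifier, and the duration is given in minutes.

aws rds describe-events \
    --source-type db-instance \
    --source-identifier my-custom-oracle-instance \
    --duration 1440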

Troubleshooting upgrades for RDS Custom for Oracle


Your upgrade of an RDS Custom for Oracle instance might fail. Following, you can find techniques that
you can use during upgrades of RDS Custom for Oracle DB instances:

• Examine the upgrade output log files in the /tmp directory on your DB instance. The names of the logs
depend on your DB engine version. For example, you might see logs that contain the strings catupgrd
or catup.
• Examine the alert.log file located in the /rdsdbdata/log/trace directory.
• Run the following grep command in the root directory to track the upgrade OS process. This
command shows where the log files are being written and helps you determine the state of the upgrade process.

ps -aux | grep upg

The following shows sample output.

root 18884 0.0 0.0 235428 8172 ? S< 17:03 0:00 /usr/bin/sudo -u rdsdb /
rdsdbbin/scripts/oracle-control ORCL op_apply_upgrade_sh RDS-UPGRADE/2.upgrade.sh
rdsdb 18886 0.0 0.0 153968 12164 ? S< 17:03 0:00 /usr/bin/perl -T -w /
rdsdbbin/scripts/oracle-control ORCL op_apply_upgrade_sh RDS-UPGRADE/2.upgrade.sh
rdsdb 18887 0.0 0.0 113196 3032 ? S< 17:03 0:00 /bin/sh /rdsdbbin/
oracle/rdbms/admin/RDS-UPGRADE/2.upgrade.sh
rdsdb 18900 0.0 0.0 113196 1812 ? S< 17:03 0:00 /bin/sh /rdsdbbin/
oracle/rdbms/admin/RDS-UPGRADE/2.upgrade.sh
rdsdb 18901 0.1 0.0 167652 20620 ? S< 17:03 0:07 /rdsdbbin/oracle/perl/
bin/perl catctl.pl -n 4 -d /rdsdbbin/oracle/rdbms/admin -l /tmp catupgrd.sql
root 29944 0.0 0.0 112724 2316 pts/0 S+ 18:43 0:00 grep --color=auto upg

• Run the following SQL query to verify the current state of the database components and to find the database
version and the options installed on the DB instance.

SET LINESIZE 180


COLUMN COMP_ID FORMAT A15
COLUMN COMP_NAME FORMAT A40 TRUNC
COLUMN STATUS FORMAT A15 TRUNC
SELECT COMP_ID, COMP_NAME, VERSION, STATUS FROM DBA_REGISTRY ORDER BY 1;

The output resembles the following.


COMP_NAME STATUS PROCEDURE


---------------------------------------- --------------------
--------------------------------------------------
Oracle Database Catalog Views VALID
DBMS_REGISTRY_SYS.VALIDATE_CATALOG
Oracle Database Packages and Types VALID
DBMS_REGISTRY_SYS.VALIDATE_CATPROC
Oracle Text VALID VALIDATE_CONTEXT
Oracle XML Database VALID DBMS_REGXDB.VALIDATEXDB

4 rows selected.

• Run the following SQL query to check for invalid objects that might interfere with the upgrade
process.

SET PAGES 1000 LINES 2000


COL OBJECT FOR A40
SELECT SUBSTR(OWNER,1,12) OWNER,
SUBSTR(OBJECT_NAME,1,30) OBJECT,
SUBSTR(OBJECT_TYPE,1,30) TYPE, STATUS,
CREATED
FROM DBA_OBJECTS
WHERE STATUS <>'VALID'
AND OWNER IN ('SYS','SYSTEM','RDSADMIN','XDB');

Troubleshooting replica promotion for RDS Custom for Oracle


You can promote managed Oracle replicas in RDS Custom for Oracle using the console, the promote-read-replica AWS CLI command, or the PromoteReadReplica API. If you delete your primary DB instance, and all replicas are healthy, RDS Custom for Oracle promotes your managed replicas to standalone instances automatically. If a replica has paused automation or is outside the support perimeter, you must fix the replica before RDS Custom can promote it automatically. For more information, see Replica promotion limitations for RDS Custom for Oracle (p. 1063).
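As a minimal sketch of the CLI promotion, you can run a command like the following; my-custom-oracle-replica is a placeholder for your replica's DB instance identifier.

aws rds promote-read-replica \
    --db-instance-identifier my-custom-oracle-replica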

The replica promotion workflow might become stuck in the following situation:

• The primary DB instance is in the state STORAGE_FULL.
• The primary DB can't archive all of its online redo logs.
• A gap exists between the archived redo log files on your Oracle replica and the primary database.

To respond to the stuck workflow, complete the following steps:

1. Synchronize the redo log gap on your Oracle replica DB instance.


2. Force the promotion of your read replica to the latest applied redo log. Run the following commands
in SQL*Plus:

ALTER DATABASE ACTIVATE STANDBY DATABASE;


SHUTDOWN IMMEDIATE
STARTUP

3. Contact AWS Support and request that your DB instance be moved to the available status.


Working with RDS Custom for SQL Server


Following, you can find instructions for creating, managing, and maintaining your RDS Custom for SQL
Server DB instances.

Topics
• RDS Custom for SQL Server workflow (p. 1087)
• Requirements and limitations for Amazon RDS Custom for SQL Server (p. 1089)
• Setting up your environment for Amazon RDS Custom for SQL Server (p. 1099)
• Bring Your Own Media with RDS Custom for SQL Server (p. 1113)
• Working with custom engine versions for RDS Custom for SQL Server (p. 1115)
• Creating and connecting to a DB instance for Amazon RDS Custom for SQL Server (p. 1130)
• Managing an Amazon RDS Custom for SQL Server DB instance (p. 1138)
• Managing a Multi-AZ deployment for RDS Custom for SQL Server (p. 1147)
• Backing up and restoring an Amazon RDS Custom for SQL Server DB instance (p. 1157)
• Migrating an on-premises database to Amazon RDS Custom for SQL Server (p. 1165)
• Upgrading a DB instance for Amazon RDS Custom for SQL Server (p. 1168)
• Troubleshooting DB issues for Amazon RDS Custom for SQL Server (p. 1169)

RDS Custom for SQL Server workflow


The following diagram shows the typical workflow for RDS Custom for SQL Server.

The steps are as follows:

1. Create an RDS Custom for SQL Server DB instance from an engine version offered by RDS Custom.

For more information, see Creating an RDS Custom for SQL Server DB instance (p. 1130).


2. Connect your application to the RDS Custom DB instance endpoint.

For more information, see Connecting to your RDS Custom DB instance using AWS Systems
Manager (p. 1133) and Connecting to your RDS Custom DB instance using RDP (p. 1135).
3. (Optional) Access the host to customize your software.
4. Monitor notifications and messages generated by RDS Custom automation.

Creating a DB instance for RDS Custom


You create your RDS Custom DB instance using the create-db-instance command. The procedure
is similar to creating an Amazon RDS instance. However, some of the parameters are different. For
more information, see Creating and connecting to a DB instance for Amazon RDS Custom for SQL
Server (p. 1130).
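The following AWS CLI call is a minimal sketch of such a request, not a definitive example. All of the values shown are placeholders or assumptions: the engine version, KMS key ID, DB subnet group, security group ID, and the AWSRDSCustomSQLServerInstanceProfile instance profile refer to resources that you must already have created, and you can list the engine versions actually available to you with describe-db-engine-versions.

aws rds create-db-instance \
    --db-instance-identifier my-custom-sqlserver-instance \
    --engine custom-sqlserver-ee \
    --engine-version 15.00.4073.23.v1 \
    --db-instance-class db.m5.xlarge \
    --allocated-storage 100 \
    --storage-type gp3 \
    --master-username admin \
    --master-user-password mypassword \
    --kms-key-id my-kms-key-id \
    --db-subnet-group-name my-custom-subnet-group \
    --vpc-security-group-ids my-custom-security-group-id \
    --custom-iam-instance-profile AWSRDSCustomSQLServerInstanceProfile \
    --no-multi-az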

Database connection
Like an Amazon RDS DB instance, your RDS Custom for SQL Server DB instance resides in a VPC. Your
application connects to the RDS Custom instance using a client such as SQL Server Management Studio
(SSMS), just as in RDS for SQL Server.

RDS Custom customization


You can access the RDS Custom host to install or customize software. To avoid conflicts between your
changes and the RDS Custom automation, you can pause the automation for a specified period. During
this period, RDS Custom doesn't perform monitoring or instance recovery. At the end of the period,
RDS Custom resumes full automation. For more information, see Pausing and resuming RDS Custom
automation (p. 1138).


Requirements and limitations for Amazon RDS Custom for SQL Server
Following, you can find a summary of the Amazon RDS Custom for SQL Server requirements and
limitations for quick reference. Requirements and limitations also appear in the relevant sections.

Topics
• Region and version availability (p. 1089)
• General requirements for RDS Custom for SQL Server (p. 1089)
• DB instance class support for RDS Custom for SQL Server (p. 1089)
• Limitations for RDS Custom for SQL Server (p. 1090)
• Local time zone for RDS Custom for SQL Server DB instances (p. 1090)

Region and version availability


Feature availability and support varies across specific versions of each database engine, and across AWS
Regions. For more information on version and Region availability of Amazon RDS with Amazon RDS
Custom for SQL Server, see RDS Custom for SQL Server (p. 153).

General requirements for RDS Custom for SQL Server


Make sure to follow these requirements for Amazon RDS Custom for SQL Server:

• Use the instance classes shown in DB instance class support for RDS Custom for SQL Server (p. 1089).
The only storage types supported are solid state drives (SSD) of types gp2, gp3, and io1. The maximum
storage limit is 16 TiB.
• Make sure that you have a symmetric encryption AWS KMS key to create an RDS Custom DB instance.
For more information, see Make sure that you have a symmetric encryption AWS KMS key (p. 1104).
• Make sure that you create an AWS Identity and Access Management (IAM) role and instance profile. For
more information, see Creating your IAM role and instance profile manually (p. 1105).
• Make sure to supply a networking configuration that RDS Custom can use to access other
AWS services. For specific requirements, see Configure networking, instance profile, and
encryption (p. 1101).
• The combined number of RDS Custom and Amazon RDS DB instances can't exceed your quota limit.
For example, if your quota is 40 DB instances, you can have 20 RDS Custom for SQL Server DB
instances and 20 Amazon RDS DB instances.

DB instance class support for RDS Custom for SQL Server


RDS Custom for SQL Server supports the following DB instance classes for each SQL Server edition.

• Enterprise Edition: db.r5.xlarge–db.r5.24xlarge and db.m5.xlarge–db.m5.24xlarge
• Standard Edition: db.r5.large–db.r5.24xlarge and db.m5.large–db.m5.24xlarge
• Web Edition: db.r5.large–db.r5.4xlarge and db.m5.large–db.m5.4xlarge

Limitations for RDS Custom for SQL Server


The following limitations apply to RDS Custom for SQL Server:

• You can't create read replicas in Amazon RDS for RDS Custom for SQL Server DB instances. However,
you can configure high availability automatically with a Multi-AZ deployment. For more information,
see Managing a Multi-AZ deployment for RDS Custom for SQL Server (p. 1147).
• You can't modify the default server-level collation of an existing RDS Custom for SQL Server DB
instance. The default server collation is SQL_Latin1_General_CP1_CI_AS.
• Transparent Data Encryption (TDE) for database encryption isn't supported for RDS Custom for SQL
Server. However, you can use KMS for storage-level encryption. For more information on using KMS
with RDS Custom for SQL Server, see Make sure that you have a symmetric encryption AWS KMS
key (p. 1104).
• For an RDS Custom for SQL Server DB instance that wasn't created with a custom engine version
(CEV), changes to the Microsoft Windows operating system or C: drive aren't guaranteed to persist.
For example, you will lose these changes when you scale compute or initiate a snapshot restore
operation. If the RDS Custom for SQL Server DB instance was created with a CEV, then those changes
are persisted.
• Not all options are supported. For example, when you create an RDS Custom for SQL Server DB
instance, you can't do the following:
• Change the number of CPU cores and threads per core on the DB instance class.
• Turn on storage autoscaling.
• Configure Kerberos authentication using the AWS Management Console. However, you can configure
Windows Authentication manually and use Kerberos.
• Specify your own DB parameter group, option group, or character set.
• Turn on Performance Insights.
• Turn on automatic minor version upgrade.
• The maximum DB instance storage is 16 TiB.

Local time zone for RDS Custom for SQL Server DB instances
The time zone of an RDS Custom for SQL Server DB instance is set by default. The current default is
Coordinated Universal Time (UTC). You can set the time zone of your DB instance to a local time zone
instead, to match the time zone of your applications.

You set the time zone when you first create your DB instance. You can create your DB instance by using
the AWS Management Console, the Amazon RDS API CreateDBInstance action, or the AWS CLI create-db-
instance command.
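If you use the AWS CLI, you set the time zone by adding the --timezone option to the create-db-instance command. The following line is only the relevant fragment of a full command, and the value shown is an example; it must exactly match one of the supported time zone names listed later in this section.

    --timezone "Pacific Standard Time"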

If your DB instance is part of a Multi-AZ deployment, then when you fail over, your time zone remains the
local time zone that you set.

When you request a point-in-time restore, you specify the time to restore to. The time is shown in your
local time zone. For more information, see Restoring a DB instance to a specified time (p. 660).

The following are limitations to setting the local time zone on your DB instance:


• You can configure the time zone for a DB instance during instance creation, but you can't modify the
time zone of an existing RDS Custom for SQL Server DB instance.
• If the time zone is modified for an existing RDS Custom for SQL Server DB instance, RDS Custom
changes the DB instance status to unsupported-configuration, and sends event notifications.
• You can't restore a snapshot from a DB instance in one time zone to a DB instance in a different time
zone.
• We strongly recommend that you don't restore a backup file from one time zone to a different time
zone. If you restore a backup file from one time zone to a different time zone, you must audit your
queries and applications for the effects of the time zone change. For more information, see Importing
and exporting SQL Server databases using native backup and restore (p. 1419).

Supported time zones


You can set your local time zone to one of the following values.

Time zones supported for RDS Custom for SQL Server

Each entry shows the time zone name, followed by its standard time offset, description, and notes.

• Afghanistan Standard Time (UTC+04:30), Kabul. This time zone doesn't observe daylight saving time.
• Alaskan Standard Time (UTC–09:00), Alaska.
• Aleutian Standard Time (UTC–10:00), Aleutian Islands.
• Altai Standard Time (UTC+07:00), Barnaul, Gorno-Altaysk.
• Arab Standard Time (UTC+03:00), Kuwait, Riyadh. This time zone doesn't observe daylight saving time.
• Arabian Standard Time (UTC+04:00), Abu Dhabi, Muscat.
• Arabic Standard Time (UTC+03:00), Baghdad. This time zone doesn't observe daylight saving time.
• Argentina Standard Time (UTC–03:00), City of Buenos Aires. This time zone doesn't observe daylight saving time.
• Astrakhan Standard Time (UTC+04:00), Astrakhan, Ulyanovsk.
• Atlantic Standard Time (UTC–04:00), Atlantic Time (Canada).
• AUS Central Standard Time (UTC+09:30), Darwin. This time zone doesn't observe daylight saving time.
• Aus Central W. Standard Time (UTC+08:45), Eucla.
• AUS Eastern Standard Time (UTC+10:00), Canberra, Melbourne, Sydney.
• Azerbaijan Standard Time (UTC+04:00), Baku.
• Azores Standard Time (UTC–01:00), Azores.
• Bahia Standard Time (UTC–03:00), Salvador.
• Bangladesh Standard Time (UTC+06:00), Dhaka. This time zone doesn't observe daylight saving time.
• Belarus Standard Time (UTC+03:00), Minsk. This time zone doesn't observe daylight saving time.
• Bougainville Standard Time (UTC+11:00), Bougainville Island.
• Canada Central Standard Time (UTC–06:00), Saskatchewan. This time zone doesn't observe daylight saving time.
• Cape Verde Standard Time (UTC–01:00), Cabo Verde Is. This time zone doesn't observe daylight saving time.
• Caucasus Standard Time (UTC+04:00), Yerevan.
• Cen. Australia Standard Time (UTC+09:30), Adelaide.
• Central America Standard Time (UTC–06:00), Central America. This time zone doesn't observe daylight saving time.
• Central Asia Standard Time (UTC+06:00), Astana. This time zone doesn't observe daylight saving time.
• Central Brazilian Standard Time (UTC–04:00), Cuiaba.
• Central Europe Standard Time (UTC+01:00), Belgrade, Bratislava, Budapest, Ljubljana, Prague.
• Central European Standard Time (UTC+01:00), Sarajevo, Skopje, Warsaw, Zagreb.
• Central Pacific Standard Time (UTC+11:00), Solomon Islands, New Caledonia. This time zone doesn't observe daylight saving time.
• Central Standard Time (UTC–06:00), Central Time (US and Canada).
• Central Standard Time (Mexico) (UTC–06:00), Guadalajara, Mexico City, Monterrey.
• Chatham Islands Standard Time (UTC+12:45), Chatham Islands.
• China Standard Time (UTC+08:00), Beijing, Chongqing, Hong Kong, Urumqi. This time zone doesn't observe daylight saving time.
• Cuba Standard Time (UTC–05:00), Havana.
• Dateline Standard Time (UTC–12:00), International Date Line West. This time zone doesn't observe daylight saving time.
• E. Africa Standard Time (UTC+03:00), Nairobi. This time zone doesn't observe daylight saving time.
• E. Australia Standard Time (UTC+10:00), Brisbane. This time zone doesn't observe daylight saving time.
• E. Europe Standard Time (UTC+02:00), Chisinau.
• E. South America Standard Time (UTC–03:00), Brasilia.
• Easter Island Standard Time (UTC–06:00), Easter Island.
• Eastern Standard Time (UTC–05:00), Eastern Time (US and Canada).
• Eastern Standard Time (Mexico) (UTC–05:00), Chetumal.
• Egypt Standard Time (UTC+02:00), Cairo.
• Ekaterinburg Standard Time (UTC+05:00), Ekaterinburg.
• Fiji Standard Time (UTC+12:00), Fiji.
• FLE Standard Time (UTC+02:00), Helsinki, Kyiv, Riga, Sofia, Tallinn, Vilnius.
• Georgian Standard Time (UTC+04:00), Tbilisi. This time zone doesn't observe daylight saving time.
• GMT Standard Time (UTC), Dublin, Edinburgh, Lisbon, London. This time zone isn't the same as Greenwich Mean Time. This time zone does observe daylight saving time.
• Greenland Standard Time (UTC–03:00), Greenland.
• Greenwich Standard Time (UTC), Monrovia, Reykjavik. This time zone doesn't observe daylight saving time.
• GTB Standard Time (UTC+02:00), Athens, Bucharest.
• Haiti Standard Time (UTC–05:00), Haiti.
• Hawaiian Standard Time (UTC–10:00), Hawaii.
• India Standard Time (UTC+05:30), Chennai, Kolkata, Mumbai, New Delhi. This time zone doesn't observe daylight saving time.
• Iran Standard Time (UTC+03:30), Tehran.
• Israel Standard Time (UTC+02:00), Jerusalem.
• Jordan Standard Time (UTC+02:00), Amman.
• Kaliningrad Standard Time (UTC+02:00), Kaliningrad.
• Kamchatka Standard Time (UTC+12:00), Petropavlovsk-Kamchatsky – Old.
• Korea Standard Time (UTC+09:00), Seoul. This time zone doesn't observe daylight saving time.
• Libya Standard Time (UTC+02:00), Tripoli.
• Line Islands Standard Time (UTC+14:00), Kiritimati Island.
• Lord Howe Standard Time (UTC+10:30), Lord Howe Island.
• Magadan Standard Time (UTC+11:00), Magadan. This time zone doesn't observe daylight saving time.
• Magallanes Standard Time (UTC–03:00), Punta Arenas.
• Marquesas Standard Time (UTC–09:30), Marquesas Islands.
• Mauritius Standard Time (UTC+04:00), Port Louis. This time zone doesn't observe daylight saving time.
• Middle East Standard Time (UTC+02:00), Beirut.
• Montevideo Standard Time (UTC–03:00), Montevideo.
• Morocco Standard Time (UTC+01:00), Casablanca.
• Mountain Standard Time (UTC–07:00), Mountain Time (US and Canada).
• Mountain Standard Time (Mexico) (UTC–07:00), Chihuahua, La Paz, Mazatlan.
• Myanmar Standard Time (UTC+06:30), Yangon (Rangoon). This time zone doesn't observe daylight saving time.
• N. Central Asia Standard Time (UTC+07:00), Novosibirsk.
• Namibia Standard Time (UTC+02:00), Windhoek.
• Nepal Standard Time (UTC+05:45), Kathmandu. This time zone doesn't observe daylight saving time.
• New Zealand Standard Time (UTC+12:00), Auckland, Wellington.
• Newfoundland Standard Time (UTC–03:30), Newfoundland.
• Norfolk Standard Time (UTC+11:00), Norfolk Island.
• North Asia East Standard Time (UTC+08:00), Irkutsk.
• North Asia Standard Time (UTC+07:00), Krasnoyarsk.
• North Korea Standard Time (UTC+09:00), Pyongyang.
• Omsk Standard Time (UTC+06:00), Omsk.
• Pacific SA Standard Time (UTC–03:00), Santiago.
• Pacific Standard Time (UTC–08:00), Pacific Time (US and Canada).
• Pacific Standard Time (Mexico) (UTC–08:00), Baja California.
• Pakistan Standard Time (UTC+05:00), Islamabad, Karachi. This time zone doesn't observe daylight saving time.
• Paraguay Standard Time (UTC–04:00), Asuncion.
• Romance Standard Time (UTC+01:00), Brussels, Copenhagen, Madrid, Paris.
• Russia Time Zone 10 (UTC+11:00), Chokurdakh.
• Russia Time Zone 11 (UTC+12:00), Anadyr, Petropavlovsk-Kamchatsky.
• Russia Time Zone 3 (UTC+04:00), Izhevsk, Samara.
• Russian Standard Time (UTC+03:00), Moscow, St. Petersburg, Volgograd. This time zone doesn't observe daylight saving time.
• SA Eastern Standard Time (UTC–03:00), Cayenne, Fortaleza. This time zone doesn't observe daylight saving time.
• SA Pacific Standard Time (UTC–05:00), Bogota, Lima, Quito, Rio Branco. This time zone doesn't observe daylight saving time.
• SA Western Standard Time (UTC–04:00), Georgetown, La Paz, Manaus, San Juan. This time zone doesn't observe daylight saving time.
• Saint Pierre Standard Time (UTC–03:00), Saint Pierre and Miquelon.
• Sakhalin Standard Time (UTC+11:00), Sakhalin.
• Samoa Standard Time (UTC+13:00), Samoa.
• Sao Tome Standard Time (UTC+01:00), Sao Tome.
• Saratov Standard Time (UTC+04:00), Saratov.
• SE Asia Standard Time (UTC+07:00), Bangkok, Hanoi, Jakarta. This time zone doesn't observe daylight saving time.
• Singapore Standard Time (UTC+08:00), Kuala Lumpur, Singapore. This time zone doesn't observe daylight saving time.
• South Africa Standard Time (UTC+02:00), Harare, Pretoria. This time zone doesn't observe daylight saving time.
• Sri Lanka Standard Time (UTC+05:30), Sri Jayawardenepura. This time zone doesn't observe daylight saving time.
• Sudan Standard Time (UTC+02:00), Khartoum.
• Syria Standard Time (UTC+02:00), Damascus.
• Taipei Standard Time (UTC+08:00), Taipei. This time zone doesn't observe daylight saving time.
• Tasmania Standard Time (UTC+10:00), Hobart.
• Tocantins Standard Time (UTC–03:00), Araguaina.
• Tokyo Standard Time (UTC+09:00), Osaka, Sapporo, Tokyo. This time zone doesn't observe daylight saving time.
• Tomsk Standard Time (UTC+07:00), Tomsk.
• Tonga Standard Time (UTC+13:00), Nuku'alofa. This time zone doesn't observe daylight saving time.
• Transbaikal Standard Time (UTC+09:00), Chita.
• Turkey Standard Time (UTC+03:00), Istanbul.
• Turks And Caicos Standard Time (UTC–05:00), Turks and Caicos.
• Ulaanbaatar Standard Time (UTC+08:00), Ulaanbaatar. This time zone doesn't observe daylight saving time.
• US Eastern Standard Time (UTC–05:00), Indiana (East).
• US Mountain Standard Time (UTC–07:00), Arizona. This time zone doesn't observe daylight saving time.
• UTC (UTC), Coordinated Universal Time. This time zone doesn't observe daylight saving time.
• UTC–02 (UTC–02:00), Coordinated Universal Time–02. This time zone doesn't observe daylight saving time.
• UTC–08 (UTC–08:00), Coordinated Universal Time–08.
• UTC–09 (UTC–09:00), Coordinated Universal Time–09.
• UTC–11 (UTC–11:00), Coordinated Universal Time–11. This time zone doesn't observe daylight saving time.
• UTC+12 (UTC+12:00), Coordinated Universal Time+12. This time zone doesn't observe daylight saving time.
• UTC+13 (UTC+13:00), Coordinated Universal Time+13.
• Venezuela Standard Time (UTC–04:00), Caracas. This time zone doesn't observe daylight saving time.
• Vladivostok Standard Time (UTC+10:00), Vladivostok.
• Volgograd Standard Time (UTC+04:00), Volgograd.
• W. Australia Standard Time (UTC+08:00), Perth. This time zone doesn't observe daylight saving time.
• W. Central Africa Standard Time (UTC+01:00), West Central Africa. This time zone doesn't observe daylight saving time.
• W. Europe Standard Time (UTC+01:00), Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna.
• W. Mongolia Standard Time (UTC+07:00), Hovd.
• West Asia Standard Time (UTC+05:00), Ashgabat, Tashkent. This time zone doesn't observe daylight saving time.
• West Bank Standard Time (UTC+02:00), Gaza, Hebron.
• West Pacific Standard Time (UTC+10:00), Guam, Port Moresby. This time zone doesn't observe daylight saving time.
• Yakutsk Standard Time (UTC+09:00), Yakutsk.


Setting up your environment for Amazon RDS Custom for SQL Server

Before you create and manage an Amazon RDS Custom for SQL Server DB instance, make
sure to perform the following tasks.

Contents
• Prerequisites for setting up RDS Custom for SQL Server (p. 1099)
• Download and install the AWS CLI (p. 1100)
• Grant required permissions to your IAM principal (p. 1100)
• Configure networking, instance profile, and encryption (p. 1101)
• Configuring with AWS CloudFormation (p. 1101)
• Resources created by CloudFormation (p. 1102)
• Downloading the template file (p. 1102)
• Configuring resources using CloudFormation (p. 1102)
• Configuring manually (p. 1104)
• Make sure that you have a symmetric encryption AWS KMS key (p. 1104)
• Creating your IAM role and instance profile manually (p. 1105)
• Create the AWSRDSCustomSQLServerInstanceRole IAM role (p. 1105)
• Add an access policy to AWSRDSCustomSQLServerInstanceRole (p. 1105)
• Create your RDS Custom for SQL Server instance profile (p. 1109)
• Add AWSRDSCustomSQLServerInstanceRole to your RDS Custom for SQL
Server instance profile (p. 1109)
• Configuring your VPC manually (p. 1109)
• Configure your VPC security group (p. 1110)
• Configure endpoints for dependent AWS services (p. 1110)
• Configure the instance metadata service (p. 1112)

Prerequisites for setting up RDS Custom for SQL Server


Before creating an RDS Custom for SQL Server DB instance, make sure that your environment meets the
requirements described in this topic. As part of this setup process, make sure to configure the following
prerequisites:

• Configure the specified AWS Identity and Access Management (IAM) users and roles.

These are either used to create an RDS Custom DB instance or passed as a parameter in a creation
request.
• Confirm there aren't any service control policies (SCPs) restricting account level permissions.

If the account that you're using is part of an AWS Organization, it might have service control policies
(SCPs) restricting account level permissions. Make sure that the SCPs don't restrict the permissions on
users and roles that you create using the following procedures.

For more information about SCPs, see Service control policies (SCPs) in the AWS Organizations User
Guide. Use the describe-organization AWS CLI command to check whether your account is part of an
AWS Organization.
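For example, the following command returns the organization details if your account belongs to an AWS Organization, and returns an error otherwise.

aws organizations describe-organization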

For more information about AWS Organizations, see What is AWS Organizations in the AWS
Organizations User Guide.


Note
For a step-by-step tutorial on how to set up prerequisites and launch Amazon RDS Custom for
SQL Server, see the blog post Get started with Amazon RDS Custom for SQL Server using an
AWS CloudFormation template (Network setup).

For each task, you can find descriptions following for the requirements and limitations specific to that
task. For example, when you create your RDS Custom for SQL Server DB instance, use one of the SQL
Server DB instance classes listed in DB instance class support for RDS Custom for SQL Server (p. 1089).

For general requirements that apply to RDS Custom for SQL Server, see General requirements for RDS
Custom for SQL Server (p. 1089).

Download and install the AWS CLI


AWS provides you with a command-line interface to use RDS Custom features. You can use either version
1 or version 2 of the AWS CLI.

For information about downloading and installing the AWS CLI, see Installing or updating the latest
version of the AWS CLI.
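After installation, you can confirm that the AWS CLI is available and check which version you have.

aws --version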

Skip this step if either of the following is true:

• You plan to access RDS Custom only from the AWS Management Console.
• You have already downloaded the AWS CLI for Amazon RDS or a different RDS Custom DB engine.

Grant required permissions to your IAM principal


You use an IAM role or IAM user (referred to as the IAM principal) for creating an RDS Custom for SQL
Server DB instance using the console or CLI. This IAM principal must have either of the following policies
for successful DB instance creation:

• The AdministratorAccess policy


• The AmazonRDSFullAccess policy with the following additional permissions:

iam:SimulatePrincipalPolicy
cloudtrail:CreateTrail
cloudtrail:StartLogging
s3:CreateBucket
s3:PutBucketPolicy
s3:PutBucketObjectLockConfiguration
s3:PutBucketVersioning
kms:CreateGrant
kms:DescribeKey

For more information about the kms:CreateGrant permission, see AWS KMS key
management (p. 2589).

The following sample JSON policy grants the required permissions.

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ValidateIamRole",
"Effect": "Allow",
"Action": "iam:SimulatePrincipalPolicy",
"Resource": "*"


},
{
"Sid": "CreateCloudTrail",
"Effect": "Allow",
"Action": [
"cloudtrail:CreateTrail",
"cloudtrail:StartLogging"
],
"Resource": "arn:aws:cloudtrail:*:*:trail/do-not-delete-rds-custom-*"
},
{
"Sid": "CreateS3Bucket",
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:PutBucketPolicy",
"s3:PutBucketObjectLockConfiguration",
"s3:PutBucketVersioning"
],
"Resource": "arn:aws:s3:::do-not-delete-rds-custom-*"
},
{
"Sid": "CreateKmsGrant",
"Effect": "Allow",
"Action": [
"kms:CreateGrant",
"kms:DescribeKey"
],
"Resource": "*"
}
]
}

Also, the IAM principal requires the iam:PassRole permission on the IAM role that must be attached
to the instance profile passed in the custom-iam-instance-profile parameter in the request
to create the RDS Custom DB instance. The instance profile and its attached role are created later in
Configure networking, instance profile, and encryption (p. 1101).

Make sure that the previously listed permissions aren't restricted by service control policies (SCPs),
permission boundaries, or session policies associated with the IAM principal.

Configure networking, instance profile, and encryption


You can configure your IAM instance profile role, virtual private cloud (VPC), and AWS KMS symmetric
encryption key by using either of the following processes:

• Configuring with AWS CloudFormation (p. 1101) (recommended)


• Configuring manually (p. 1104)

If your account is part of an AWS Organization, make sure that the permissions required by the instance
profile role aren’t restricted by service control policies (SCPs).

The following networking configurations are designed to work best with DB instances that aren't publicly
accessible. That is, you can’t connect directly to the DB instance from outside the VPC.

Configuring with AWS CloudFormation


To simplify setup, you can use an AWS CloudFormation template file to create a CloudFormation stack.
To learn how to create stacks, see Creating a stack on the AWS CloudFormation console in the AWS
CloudFormation User Guide.


For a tutorial on how to launch Amazon RDS Custom for SQL Server using an AWS CloudFormation
template, see Get started with Amazon RDS Custom for SQL Server using an AWS CloudFormation
template in the AWS Database Blog .

Topics
• Resources created by CloudFormation (p. 1102)
• Downloading the template file (p. 1102)
• Configuring resources using CloudFormation (p. 1102)

Resources created by CloudFormation

Successfully creating the CloudFormation stack creates the following resources in your AWS account:

• Symmetric encryption KMS key for encryption of data managed by RDS Custom.
• Instance profile and associated IAM role for attaching to RDS Custom instances.
• VPC with the CIDR range specified as the CloudFormation parameter. The default value is
10.0.0.0/16.
• Two private subnets with the CIDR range specified in the parameters, and two different Availability
Zones in the AWS Region. The default values for the subnet CIDRs are 10.0.128.0/20 and
10.0.144.0/20.
• DHCP option set for the VPC with domain name resolution to an Amazon Domain Name System (DNS)
server.
• Route table to associate with two private subnets and no access to the internet.
• Network access control list (ACL) to associate with two private subnets and access restricted to HTTPS.
• VPC security group to be associated with the RDS Custom instance. Access is restricted for outbound
HTTPS to AWS service endpoints that are required by RDS Custom.
• VPC security group to be associated with VPC endpoints that are created for AWS service endpoints
that are required by RDS Custom.
• DB subnet group in which RDS Custom instances are created.
• VPC endpoints for each of the AWS service endpoints that are required by RDS Custom.

Use the following procedures to create the CloudFormation stack for RDS Custom for SQL Server.

Downloading the template file

To download the template file

1. Open the context (right-click) menu for the link custom-sqlserver-onboard.zip and choose Save
Link As.
2. Save and extract the file to your computer.

Configuring resources using CloudFormation

To configure resources using CloudFormation

1. Open the CloudFormation console at https://console.aws.amazon.com/cloudformation.


2. To start the Create Stack wizard, choose Create Stack.

The Create stack page appears.


3. For Prerequisite - Prepare template, choose Template is ready.
4. For Specify template, do the following:


a. For Template source, choose Upload a template file.


b. For Choose file, navigate to and then choose the correct file.
5. Choose Next.

The Specify stack details page appears.


6. For Stack name, enter rds-custom-sqlserver.
7. For Parameters, do the following:

a. To keep the default options, choose Next.


b. To change options, choose the appropriate CIDR block range for the VPC and two of its subnets,
and then choose Next.

Read the description of each parameter carefully before changing parameters.


8. On the Configure stack options page, choose Next.
9. On the Review rds-custom-sqlserver page, do the following:

a. For Capabilities, select the I acknowledge that AWS CloudFormation might create IAM
resources with custom names check box.
b. Choose Create stack.
10. (Optional): You can update the SQS permissions in the instance profile role.

If you want to deploy only a Single-AZ DB instance, you can edit the CloudFormation template
file to remove SQS permissions. SQS permissions are only required for a Multi-AZ deployment and
allow RDS Custom for SQL Server to call Amazon SQS to perform specific actions. Because they are
not required for a Single-AZ deployment, you may opt to remove these permissions to follow the
principle of least privilege.

If you want to configure a Multi-AZ deployment, you don't need to remove the SQS permissions.
Note
If you remove the SQS permissions and later choose to modify to a Multi-AZ deployment,
the Multi-AZ creation will fail. You would need to re-add the SQS permissions before
modifying to a Multi-AZ deployment.

To make this optional change to the CloudFormation template, open the CloudFormation console
at https://console.aws.amazon.com/cloudformation, and edit the template file by removing the
following lines:

{
"Sid": "SendMessageToSQSQueue",
"Effect": "Allow",
"Action": [
"SQS:SendMessage",
"SQS:ReceiveMessage",
"SQS:DeleteMessage",
"SQS:GetQueueUrl"

],
"Resource": [
{
"Fn::Sub": "arn:${AWS::Partition}:sqs:${AWS::Region}:${AWS::AccountId}:do-
not-delete-rds-custom-*"
}
],
"Condition": {
"StringLike": {
"aws:ResourceTag/AWSRDSCustom": "custom-sqlserver"


}
}
}

CloudFormation creates the resources that RDS Custom for SQL Server requires. If the stack creation
fails, read through the Events tab to see which resource creation failed and its status reason.

The Outputs tab for this CloudFormation stack in the console should have information about all
resources to be passed as parameters for creating an RDS Custom for SQL Server DB instance. Make
sure to use the VPC security group and DB subnet group created by CloudFormation for RDS Custom DB
instances. By default, RDS tries to attach the default VPC security group, which might not have the access
that you need.
Note
When you delete a CloudFormation stack, all of the resources created by the stack are deleted
except the KMS key. The KMS key goes into a pending-deletion state and is deleted after
30 days. To keep the KMS key, perform a CancelKeyDeletion operation during the 30-day grace
period.
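For example, the following AWS CLI command cancels a pending key deletion; the key ID shown is a placeholder.

aws kms cancel-key-deletion \
    --key-id 1234abcd-12ab-34cd-56ef-1234567890ab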

If you used CloudFormation to create resources, you can skip Configuring manually (p. 1104).

Configuring manually
If you choose to configure resources manually, perform the following tasks.
Note
To simplify setup, you can use the AWS CloudFormation template file to create a
CloudFormation stack rather than a manual configuration. For more information, see
Configuring with AWS CloudFormation (p. 1101).

Topics
• Make sure that you have a symmetric encryption AWS KMS key (p. 1104)
• Creating your IAM role and instance profile manually (p. 1105)
• Configuring your VPC manually (p. 1109)

Make sure that you have a symmetric encryption AWS KMS key

A symmetric encryption AWS KMS key is required for RDS Custom. When you create an RDS Custom for
SQL Server DB instance, make sure to supply the KMS key identifier. For more information, see Creating
and connecting to a DB instance for Amazon RDS Custom for SQL Server (p. 1130).

You have the following options:

• If you have an existing customer managed KMS key in your AWS account, you can use it with RDS
Custom. No further action is necessary.
• If you already created a customer managed symmetric encryption KMS key for a different RDS Custom
engine, you can reuse the same KMS key. No further action is necessary.
• If you don't have an existing customer managed symmetric encryption KMS key in your account, create
a KMS key by following the instructions in Creating keys in the AWS Key Management Service Developer
Guide.
• If you're creating a CEV or RDS Custom DB instance, and your KMS key is in a different AWS account,
make sure to use the AWS CLI. You can't use the AWS console with cross-account KMS keys.

Important
RDS Custom doesn't support AWS managed KMS keys.
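If you choose to create a new customer managed symmetric encryption key from the AWS CLI rather than the console, the following command is a minimal sketch. Symmetric encryption keys are the default key type, so only a description is supplied here.

aws kms create-key \
    --description "KMS key for RDS Custom for SQL Server"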


Make sure that your symmetric encryption key grants access to the kms:Decrypt and
kms:GenerateDataKey operations to the AWS Identity and Access Management (IAM) role in your IAM
instance profile. If you have a new symmetric encryption key in your account, no changes are required.
Otherwise, make sure that your symmetric encryption key's policy grants access to these operations.

For more information, see Step 3: Configure IAM and your Amazon VPC (p. 1003).

Creating your IAM role and instance profile manually


To use RDS Custom for SQL Server, create an IAM instance profile and IAM role as described following.

To create the IAM instance profile and IAM roles for RDS Custom for SQL Server

1. Create the IAM role named AWSRDSCustomSQLServerInstanceRole with a trust policy that lets
Amazon EC2 assume this role.
2. Add an access policy to AWSRDSCustomSQLServerInstanceRole.
3. Create an IAM instance profile for RDS Custom for SQL Server that is named
AWSRDSCustomSQLServerInstanceProfile.
4. Add AWSRDSCustomSQLServerInstanceRole to the instance profile.

Create the AWSRDSCustomSQLServerInstanceRole IAM role


The following example creates the AWSRDSCustomSQLServerInstanceRole role. The trust policy lets
Amazon EC2 assume the role.

aws iam create-role \


--role-name AWSRDSCustomSQLServerInstanceRole \
--assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
}
}
]
}'

Add an access policy to AWSRDSCustomSQLServerInstanceRole


When you embed an inline policy in a role, the inline policy is used as part of the role's access
(permissions) policy. You create the AWSRDSCustomSQLServerIamRolePolicy policy, which lets
Amazon EC2 get and receive messages and perform various actions.

Make sure that the permissions in the access policy aren't restricted by SCPs or permission boundaries
associated with the instance profile role.

The following example creates the access policy named AWSRDSCustomSQLServerIamRolePolicy,


and adds it to the AWSRDSCustomSQLServerInstanceRole role. This example assumes that
the '$REGION', $ACCOUNT_ID, and '$CUSTOMER_KMS_KEY_ID' variables have been set.
'$CUSTOMER_KMS_KEY_ID' is the ID, not the Amazon Resource Name (ARN), of the KMS key that you
defined in Make sure that you have a symmetric encryption AWS KMS key (p. 1104).

aws iam put-role-policy \


--role-name AWSRDSCustomSQLServerInstanceRole \
--policy-name AWSRDSCustomSQLServerIamRolePolicy \
--policy-document '{


"Version": "2012-10-17",
"Statement": [
{
"Sid": "ssmAgent1",
"Effect": "Allow",
"Action": [
"ssm:GetDeployablePatchSnapshotForInstance",
"ssm:ListAssociations",
"ssm:PutInventory",
"ssm:PutConfigurePackageResult",
"ssm:UpdateInstanceInformation",
"ssm:GetManifest"
],
"Resource": "*"
},
{
"Sid": "ssmAgent2",
"Effect": "Allow",
"Action": [
"ssm:ListInstanceAssociations",
"ssm:PutComplianceItems",
"ssm:UpdateAssociationStatus",
"ssm:DescribeAssociation",
"ssm:UpdateInstanceAssociationStatus"
],
"Resource": "arn:aws:ec2:'$REGION':'$ACCOUNT_ID':instance/*",
"Condition": {
"StringLike": {
"aws:ResourceTag/AWSRDSCustom": "custom-sqlserver"
}
}
},
{
"Sid": "ssmAgent3",
"Effect": "Allow",
"Action": [
"ssm:UpdateAssociationStatus",
"ssm:DescribeAssociation",
"ssm:GetDocument",
"ssm:DescribeDocument"
],
"Resource": "arn:aws:ssm:*:*:document/*"
},
{
"Sid": "ssmAgent4",
"Effect": "Allow",
"Action": [
"ssmmessages:CreateControlChannel",
"ssmmessages:CreateDataChannel",
"ssmmessages:OpenControlChannel",
"ssmmessages:OpenDataChannel"
],
"Resource": "*"
},
{
"Sid": "ssmAgent5",
"Effect": "Allow",
"Action": [
"ec2messages:AcknowledgeMessage",
"ec2messages:DeleteMessage",
"ec2messages:FailMessage",
"ec2messages:GetEndpoint",
"ec2messages:GetMessages",
"ec2messages:SendReply"
],
"Resource": "*"


},
{
"Sid": "ssmAgent6",
"Effect": "Allow",
"Action": [
"ssm:GetParameters",
"ssm:GetParameter"
],
"Resource": "arn:aws:ssm:*:*:parameter/*"
},
{
"Sid": "ssmAgent7",
"Effect": "Allow",
"Action": [
"ssm:UpdateInstanceAssociationStatus",
"ssm:DescribeAssociation"
],
"Resource": "arn:aws:ssm:*:*:association/*"
},
{
"Sid": "eccSnapshot1",
"Effect": "Allow",
"Action": "ec2:CreateSnapshot",
"Resource": [
"arn:aws:ec2:'$REGION':'$ACCOUNT_ID':volume/*"
],
"Condition": {
"StringLike": {
"aws:ResourceTag/AWSRDSCustom": "custom-sqlserver"
}
}
},
{
"Sid": "eccSnapshot2",
"Effect": "Allow",
"Action": "ec2:CreateSnapshot",
"Resource": [
"arn:aws:ec2:'$REGION'::snapshot/*"
],
"Condition": {
"StringLike": {
"aws:RequestTag/AWSRDSCustom": "custom-sqlserver"
}
}
},
{
"Sid": "eccCreateTag",
"Effect": "Allow",
"Action": "ec2:CreateTags",
"Resource": "*",
"Condition": {
"StringLike": {
"aws:RequestTag/AWSRDSCustom": "custom-sqlserver",
"ec2:CreateAction": [
"CreateSnapshot"
]
}
}
},
{
"Sid": "s3BucketAccess",
"Effect": "Allow",
"Action": [
"s3:putObject",
"s3:getObject",
"s3:getObjectVersion",


"s3:AbortMultipartUpload"
],
"Resource": [
"arn:aws:s3:::do-not-delete-rds-custom-*/*"
]
},
{
"Sid": "customerKMSEncryption",
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:GenerateDataKey*"
],
"Resource": [
"arn:aws:kms:'$REGION':'$ACCOUNT_ID':key/'$CUSTOMER_KMS_KEY_ID'"
]
},
{
"Sid": "readSecretsFromCP",
"Effect": "Allow",
"Action": [
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret"
],
"Resource": [
"arn:aws:secretsmanager:'$REGION':'$ACCOUNT_ID':secret:do-not-delete-
rds-custom-*"
],
"Condition": {
"StringLike": {
"aws:ResourceTag/AWSRDSCustom": "custom-sqlserver"
}
}
},
{
"Sid": "publishCWMetrics",
"Effect": "Allow",
"Action": "cloudwatch:PutMetricData",
"Resource": "*",
"Condition": {
"StringEquals": {
"cloudwatch:namespace": "rdscustom/rds-custom-sqlserver-agent"
}
}
},
{
"Sid": "putEventsToEventBus",
"Effect": "Allow",
"Action": "events:PutEvents",
"Resource": "arn:aws:events:'$REGION':'$ACCOUNT_ID':event-bus/default"
},
{
"Sid": "cwlOperations1",
"Effect": "Allow",
"Action": [
"logs:PutRetentionPolicy",
"logs:PutLogEvents",
"logs:DescribeLogStreams",
"logs:CreateLogStream",
"logs:CreateLogGroup"
],
"Resource": "arn:aws:logs:'$REGION':'$ACCOUNT_ID':log-group:rds-custom-
instance-*"
},
{
"Condition": {


"StringLike": {
"aws:ResourceTag/AWSRDSCustom": "custom-sqlserver"
}
},
"Action": [
"SQS:SendMessage",
"SQS:ReceiveMessage",
"SQS:DeleteMessage",
"SQS:GetQueueUrl"
],
"Resource": [
"arn:aws:sqs:'$REGION':'$ACCOUNT_ID':do-not-delete-rds-custom-*"
],
"Effect": "Allow",
"Sid": "SendMessageToSQSQueue"
}
]
}'

Create your RDS Custom for SQL Server instance profile


Create your instance profile as follows, naming it AWSRDSCustomSQLServerInstanceProfile.

aws iam create-instance-profile \


--instance-profile-name AWSRDSCustomSQLServerInstanceProfile

Add AWSRDSCustomSQLServerInstanceRole to your RDS Custom for SQL Server instance profile
Add the AWSRDSCustomSQLServerInstanceRole role to the
AWSRDSCustomSQLServerInstanceProfile instance profile.

aws iam add-role-to-instance-profile \


--instance-profile-name AWSRDSCustomSQLServerInstanceProfile \
--role-name AWSRDSCustomSQLServerInstanceRole

Configuring your VPC manually


Your RDS Custom DB instance is in a virtual private cloud (VPC) based on the Amazon VPC service, just
like an Amazon EC2 instance or Amazon RDS instance. You provide and configure your own VPC. Thus,
you have full control over your instance networking setup.

RDS Custom sends communication from your DB instance to other AWS services. To make sure that RDS
Custom can communicate, it validates network connectivity to the following AWS services:

• Amazon CloudWatch
• Amazon CloudWatch Logs
• Amazon CloudWatch Events
• Amazon EC2
• Amazon EventBridge
• Amazon S3
• AWS Secrets Manager
• AWS Systems Manager

If RDS Custom can't communicate with the necessary services, it publishes the following event:

Database instance in incompatible-network. SSM Agent connection not available. Amazon RDS
can't connect to the dependent AWS services.


To avoid incompatible-network errors, make sure that VPC components involved in communication
between your RDS Custom DB instance and AWS services satisfy the following requirements:

• The DB instance can make outbound connections on port 443 to other AWS services.
• The VPC allows incoming responses to requests originating from your RDS Custom DB instance.
• RDS Custom can correctly resolve the domain names of endpoints for each AWS service.

RDS Custom relies on AWS Systems Manager connectivity for its automation. For information about how
to configure VPC endpoints, see Creating VPC endpoints for Systems Manager. For the list of endpoints
in each Region, see AWS Systems Manager endpoints and quotas in the Amazon Web Services General
Reference.

If you already configured a VPC for a different RDS Custom DB engine, you can reuse that VPC and skip
this process.

Topics
• Configure your VPC security group (p. 1110)
• Configure endpoints for dependent AWS services (p. 1110)
• Configure the instance metadata service (p. 1112)

Configure your VPC security group

A security group acts as a virtual firewall for a VPC instance, controlling both inbound and outbound
traffic. An RDS Custom DB instance has a default security group that protects the instance. Make sure
that your security group permits traffic between RDS Custom and other AWS services.

To configure your security group for RDS Custom

1. Sign in to the AWS Management Console and open the Amazon VPC console at https://
console.aws.amazon.com/vpc.
2. Allow RDS Custom to use the default security group, or create your own security group.

For detailed instructions, see Provide access to your DB instance in your VPC by creating a security
group (p. 177).
3. Make sure that your security group permits outbound connections on port 443. RDS Custom needs
this port to communicate with dependent AWS services.
4. If you have a private VPC and use VPC endpoints, make sure that the security group associated with
the DB instance allows outbound connections on port 443 to VPC endpoints. Also make sure that
the security group associated with the VPC endpoint allows inbound connections on port 443 from
the DB instance.

If incoming connections aren't allowed, the RDS Custom instance can't connect to the AWS Systems
Manager and Amazon EC2 endpoints. For more information, see Create a Virtual Private Cloud
endpoint in the AWS Systems Manager User Guide.

For more information about security groups, see Security groups for your VPC in the Amazon VPC
Developer Guide.
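If your DB instance uses a custom security group that doesn't already allow all outbound traffic, one way to add the outbound HTTPS rule is with the AWS CLI; a sketch with a placeholder security group ID (narrow the destination range if you route traffic only through VPC endpoints):

aws ec2 authorize-security-group-egress \
--group-id sg-0123456789abcdef0 \
--ip-permissions 'IpProtocol=tcp,FromPort=443,ToPort=443,IpRanges=[{CidrIp=0.0.0.0/0}]'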

Configure endpoints for dependent AWS services

Make sure that your VPC allows outbound traffic to the following AWS services with which the DB
instance communicates:

• Amazon CloudWatch


• Amazon CloudWatch Logs


• Amazon CloudWatch Events
• Amazon EC2
• Amazon EventBridge
• Amazon S3
• AWS Secrets Manager
• AWS Systems Manager

We recommend that you add endpoints for every service to your VPC using the following instructions.
However, you can use any solution that lets your VPC communicate with AWS service endpoints. For
example, you can use Network Address Translation (NAT) or AWS Direct Connect.

To configure endpoints for AWS services with which RDS Custom works

1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.


2. On the navigation bar, use the Region selector to choose the AWS Region.
3. In the navigation pane, choose Endpoints. In the main pane, choose Create Endpoint.
4. For Service category, choose AWS services.
5. For Service Name, choose the endpoint shown in the table.
6. For VPC, choose your VPC.
7. For Subnets, choose a subnet from each Availability Zone to include.

The VPC endpoint can span multiple Availability Zones. AWS creates an elastic network interface
for the VPC endpoint in each subnet that you choose. Each network interface has a Domain Name
System (DNS) host name and a private IP address.
8. For Security group, choose or create a security group.

You can use security groups to control access to your endpoint, much as you use a firewall. For more
information about security groups, see Security groups for your VPC in the Amazon VPC User Guide.
9. Optionally, you can attach a policy to the VPC endpoint. Endpoint policies can control access to the
AWS service to which you are connecting. The default policy allows all requests to pass through the
endpoint. If you're using a custom policy, make sure that requests from the DB instance are allowed
in the policy.
10. Choose Create endpoint.

The following list shows, for each service, the endpoint format to use and where to find the list of
endpoints that your VPC needs for outbound communications.

• AWS Systems Manager: use the endpoint formats ssm.region.amazonaws.com and
  ssmmessages.region.amazonaws.com. For the list of endpoints in each Region, see AWS Systems
  Manager endpoints and quotas in the Amazon Web Services General Reference.
• AWS Secrets Manager: use the endpoint format secretsmanager.region.amazonaws.com. For the list
  of endpoints in each Region, see AWS Secrets Manager endpoints and quotas in the Amazon Web
  Services General Reference.
• Amazon CloudWatch: for CloudWatch metrics, use monitoring.region.amazonaws.com; for CloudWatch
  Events, use events.region.amazonaws.com; for CloudWatch Logs, use logs.region.amazonaws.com. For
  the lists of endpoints in every Region, see Amazon CloudWatch endpoints and quotas, Amazon
  CloudWatch Logs endpoints and quotas, and Amazon CloudWatch Events endpoints and quotas in the
  Amazon Web Services General Reference.
• Amazon EC2: use the endpoint formats ec2.region.amazonaws.com and
  ec2messages.region.amazonaws.com. For the list of endpoints in each Region, see Amazon Elastic
  Compute Cloud endpoints and quotas in the Amazon Web Services General Reference.
• Amazon S3: use the endpoint format s3.region.amazonaws.com. For the list of endpoints in each
  Region, see Amazon Simple Storage Service endpoints and quotas in the Amazon Web Services General
  Reference. To learn more about gateway endpoints for Amazon S3, see Endpoints for Amazon S3 in the
  Amazon VPC Developer Guide. To learn how to create an access point, see Creating access points in the
  Amazon VPC Developer Guide. To learn how to create a gateway endpoint for Amazon S3, see Gateway
  VPC endpoints.
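As a rough AWS CLI equivalent of the console steps above, you might create one interface endpoint per service. The following sketch creates the ssm endpoint with placeholder VPC, subnet, and security group IDs; replace region with your AWS Region, and repeat the command for ssmmessages, ec2, ec2messages, monitoring, events, logs, and secretsmanager. Use a gateway endpoint for Amazon S3.

aws ec2 create-vpc-endpoint \
--vpc-id vpc-0123456789abcdef0 \
--vpc-endpoint-type Interface \
--service-name com.amazonaws.region.ssm \
--subnet-ids subnet-0123456789abcdef0 \
--security-group-ids sg-0123456789abcdef0 \
--private-dns-enabled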

Configure the instance metadata service

Make sure that your instance can do the following:

• Access the instance metadata service using Instance Metadata Service Version 2 (IMDSv2).
• Allow outbound communications through port 80 (HTTP) to the IMDS link IP address.
• Request instance metadata from http://169.254.169.254, the IMDSv2 link.

For more information, see Use IMDSv2 in the Amazon EC2 User Guide for Linux Instances.
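To verify these settings on the underlying EC2 instance without changing them, you might inspect its metadata options; a sketch with a placeholder instance ID:

aws ec2 describe-instances \
--instance-ids i-0123456789abcdef0 \
--query 'Reservations[0].Instances[0].MetadataOptions'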


Bring Your Own Media with RDS Custom for SQL Server
RDS Custom for SQL Server supports two licensing models: License Included (LI) and Bring Your Own
Media (BYOM).

With BYOM, you can do the following:

1. Provide and install your own Microsoft SQL Server binaries with supported cumulative updates (CU)
on an AWS EC2 Windows AMI.
2. Save the AMI as a golden image, which is a template that you can use to create a custom engine
version (CEV).
3. Create a CEV from your golden image.
4. Create new RDS Custom for SQL Server DB instances by using your CEV.

Amazon RDS then manages your DB instances for you.


Note
If you also have a License Included (LI) RDS Custom for SQL Server DB instance, you can't use
the SQL Server software from this DB instance with BYOM. You must bring your own SQL Server
binaries to BYOM.

Requirements for BYOM for RDS Custom for SQL Server


The same general requirements for custom engine versions with RDS Custom for SQL Server also apply
to BYOM. For more information, see Requirements for RDS Custom for SQL Server CEVs (p. 1119).

When using BYOM, make sure that you meet the following additional requirements:

• Use only SQL Server 2019 Enterprise and Standard edition. These are the only supported editions.
• Grant the SQL Server sysadmin (SA) server role privilege to NT AUTHORITY\SYSTEM.
• Keep the Windows Server OS configured with UTC time.

Amazon EC2 Windows instances are set to the UTC time zone by default. For more information about
viewing and changing the time for a Windows instance, see Set the time for a Windows instance.
• Open TCP port 1433 and UDP port 1434 to allow SSM connections.

Limitations of BYOM for RDS Custom for SQL Server


The same general limitations for RDS Custom for SQL Server also apply to BYOM. For more information,
see Requirements and limitations for Amazon RDS Custom for SQL Server (p. 1089).

With BYOM, the following additional limitations apply:

• Only the default SQL Server instance (MSSQLSERVER) is supported. Named SQL Server instances
aren't supported. RDS Custom for SQL Server detects and monitors only the default SQL Server
instance.
• Only a single installation of SQL Server is supported on each AMI. Multiple installations of different
SQL Server versions aren't supported.
• SQL Server Web edition isn't supported with BYOM.
• Evaluation versions of SQL Server editions aren't supported with BYOM. When you install SQL Server,
don't select the checkbox for using an evaluation version.


• Feature availability and support varies across specific versions of each database engine, and
across AWS Regions. For more information, see Region availability for RDS Custom for SQL Server
CEVs (p. 1118) and Version support for RDS Custom for SQL Server CEVs (p. 1119).

Creating an RDS Custom for SQL Server DB instance with BYOM


To prepare and create an RDS Custom for SQL Server DB instance with BYOM, see Preparing a CEV using
Bring Your Own Media (BYOM) (p. 1117).


Working with custom engine versions for RDS Custom for SQL Server
A custom engine version (CEV) for RDS Custom for SQL Server is an Amazon Machine Image (AMI) that
includes Microsoft SQL Server.

The basic steps of the CEV workflow are as follows:

1. Choose an AWS EC2 Windows AMI to use as a base image for a CEV. You have the option to use pre-
installed Microsoft SQL Server, or bring your own media to install SQL Server yourself.
2. Install other software on the operating system (OS) and customize the configuration of the OS and
SQL Server to meet your enterprise needs.
3. Save the AMI as a golden image.
4. Create a custom engine version (CEV) from your golden image.
5. Create new RDS Custom for SQL Server DB instances by using your CEV.

Amazon RDS then manages these DB instances for you.

A CEV allows you to maintain your preferred baseline configuration of the OS and database. Using
a CEV ensures that the host configuration, such as any third-party agent installation or other OS
customizations, is persisted on RDS Custom for SQL Server DB instances. With a CEV, you can quickly
deploy fleets of RDS Custom for SQL Server DB instances with the same configuration.

Topics
• Preparing to create a CEV for RDS Custom for SQL Server (p. 1115)
• Creating a CEV for RDS Custom for SQL Server (p. 1120)
• Modifying a CEV for RDS Custom for SQL Server (p. 1124)
• Viewing CEV details for Amazon RDS Custom for SQL Server (p. 1126)
• Deleting a CEV for RDS Custom for SQL Server (p. 1128)

Preparing to create a CEV for RDS Custom for SQL Server


You can create a CEV using an Amazon Machine Image (AMI) that contains pre-installed, License Included
(LI) Microsoft SQL Server, or with an AMI on which you install your own SQL Server installation media
(BYOM).

Contents
• Preparing a CEV using pre-installed SQL Server (LI) (p. 1115)
• Preparing a CEV using Bring Your Own Media (BYOM) (p. 1117)
• Region availability for RDS Custom for SQL Server CEVs (p. 1118)
• Version support for RDS Custom for SQL Server CEVs (p. 1119)
• Requirements for RDS Custom for SQL Server CEVs (p. 1119)
• Limitations for RDS Custom for SQL Server CEVs (p. 1119)

Preparing a CEV using pre-installed SQL Server (LI)


The following steps to create a CEV using pre-installed Microsoft SQL Server (LI) use an AMI with SQL
Server CU20 Release number 2023.05.10 as an example. When you create a CEV, choose an AMI with


the most recent release number. This ensures that you are using a supported version of Windows Server
and SQL Server with the latest Cumulative Update (CU).

To create a CEV using pre-installed Microsoft SQL Server (LI)

1. Choose the latest available AWS EC2 Windows Amazon Machine Image (AMI) with License Included
(LI) Microsoft Windows Server and SQL Server.

a. Search for CU20 within the Windows AMI version history.


b. Note the Release number. For SQL Server 2019 CU20, the release number is 2023.05.10.

c. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.


d. In the left navigation panel of the Amazon EC2 console choose Images, then AMIs.
e. Choose Public images.
f. Enter 2023.05.10 into the search box. A list of AMIs appears.
g. Enter Windows_Server-2019-English-Full-SQL_2019 into the search box to filter the
results. A list of matching AMIs appears.

h. Choose the AMI with the SQL Server edition that you want to use.
2. Create or launch an EC2 instance from your chosen AMI.
3. Log in to the EC2 instance and install additional software or customize the OS and database
configuration to meet your requirements.
4. Run Sysprep on the EC2 instance. For more information about prepping an AMI using Sysprep, see Create a
standardized Amazon Machine Image (AMI) using Sysprep.


5. Save the AMI that contains your installed SQL Server version, other software, and customizations.
This will be your golden image.
6. Create a new CEV by providing the AMI ID of the image that you created. For detailed steps on
creating a CEV, see Creating a CEV for RDS Custom for SQL Server (p. 1120).
7. Create a new RDS Custom for SQL Server DB instance using the CEV. For detailed steps, see Create
an RDS Custom for SQL Server DB instance from a CEV (p. 1122).
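As an alternative to browsing the console in step 1, you might list matching public AMIs with the AWS CLI. The name filter below is an assumption based on the search strings above; adjust it for the SQL Server edition and release number that you want.

aws ec2 describe-images \
--owners amazon \
--filters "Name=name,Values=Windows_Server-2019-English-Full-SQL_2019*2023.05.10" \
--query 'Images[].[ImageId,Name,CreationDate]' \
--output table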

Preparing a CEV using Bring Your Own Media (BYOM)


The following steps use an AMI with Windows Server 2019 Release number 2023.05.10 as an example.
When creating a CEV, choose an AMI with the most recent release number. This ensures that you are
using the latest supported version of Windows Server.

To create a CEV using BYOM

1. Choose the latest available AWS EC2 Windows Amazon Machine Image (AMI) with Microsoft
Windows Server.

a. View the monthly AMI updates table within the Windows AMI version history.
b. Note the latest available Release number. For example, the release number for Windows Server
2019 might be 2023.05.10. Although the Changes column may show SQL Server CUs
installed, the release number also includes an AMI for Windows Server 2019, without SQL
Server pre-installed. You can use this AMI for BYOM.

c. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.


d. In the left navigation panel of the Amazon EC2 console, choose Images, then AMIs.
e. Choose Public images.
f. Enter Windows_Server-2019-English-Full-Base-2023.05.10 into the search box. A list of
matching AMIs appears.


g. Choose the AMI with the supported Windows Server version that you want to use.
2. Create or launch an EC2 instance from your chosen AMI.
3. Log in to the EC2 instance and copy your SQL Server installation media to it.
4. Install SQL Server. Make sure that you do the following:

a. Review Requirements for BYOM for RDS Custom for SQL Server (p. 1113).
b. Set the instance root directory to the default C:\Program Files\Microsoft SQL Server\.
Don't change this directory.
c. Set the SQL Server Database Engine Account Name to either NT Service\MSSQLSERVER or NT
AUTHORITY\NETWORK SERVICE.
d. Set the SQL Server Startup mode to Manual.
e. Choose SQL Server Authentication mode as Mixed.
f. Leave the current settings for the default Data directories and TempDB locations.
5. Grant the SQL Server sysadmin (SA) server role privilege to NT AUTHORITY\SYSTEM:

USE [master]
GO
EXEC master..sp_addsrvrolemember @loginame = N'NT AUTHORITY\SYSTEM' , @rolename =
N'sysadmin'
GO

6. Install additional software or customize the OS and database configuration to meet your
requirements.
7. Run Sysprep on the EC2 instance. For more information, see Create a standardized Amazon Machine
Image (AMI) using Sysprep.
8. Save the AMI that contains your installed SQL Server version, other software, and customizations.
This will be your golden image.
9. Create a new CEV by providing the AMI ID of the image that you created. For detailed steps, see
Creating a CEV for RDS Custom for SQL Server (p. 1120).
10. Create a new RDS Custom for SQL Server DB instance using the CEV. For detailed steps, see Create
an RDS Custom for SQL Server DB instance from a CEV (p. 1122).

Region availability for RDS Custom for SQL Server CEVs


Custom engine version (CEV) support for RDS Custom for SQL Server is available in the following AWS
Regions:

• US East (Ohio)
• US East (N. Virginia)
• US West (Oregon)
• Asia Pacific (Mumbai)
• Asia Pacific (Seoul)
• Asia Pacific (Singapore)


• Asia Pacific (Sydney)


• Asia Pacific (Tokyo)
• Canada (Central)
• Europe (Frankfurt)
• Europe (Ireland)
• Europe (London)
• Europe (Stockholm)
• South America (São Paulo)

Version support for RDS Custom for SQL Server CEVs


CEV creation for RDS Custom for SQL Server is supported for the following AWS EC2 Windows AMIs:

• For CEVs using pre-installed media, AWS EC2 Windows AMIs with License Included (LI) Microsoft
Windows Server 2019 and SQL Server 2019
• For CEVs using bring your own media (BYOM), AWS EC2 Windows AMIs with Microsoft Windows Server
2019

CEV creation for RDS Custom for SQL Server is supported for the following operating system (OS) and
database editions:

• For CEVs using pre-installed media, SQL Server 2019 with CU17, CU18, or CU20, for Enterprise,
Standard, and Web editions
• For CEVs using bring your own media (BYOM), SQL Server 2019 with CU17, CU18, or CU20, for
Enterprise and Standard editions
• For CEVs using pre-installed media or bring your own media (BYOM), Windows Server 2019 is the only
supported OS

Requirements for RDS Custom for SQL Server CEVs


The following requirements apply to creating a CEV for RDS Custom for SQL Server:

• The AMI used to create a CEV must be based on an OS and database configuration supported by RDS
Custom for SQL Server. For more information on supported configurations, see Requirements and
limitations for Amazon RDS Custom for SQL Server (p. 1089).
• The CEV must have a unique name. You can't create a CEV with the same name as an existing CEV.
• You must name the CEV using a naming pattern of SQL Server major version + minor version +
customized string. The major version + minor version must match the SQL Server version provided with
the AMI. For example, you can name a CEV based on an AMI with SQL Server 2019 CU17 as 15.00.4249.2.my_cevtest.
• You must prepare an AMI using Sysprep. For more information about prepping an AMI using Sysprep,
see Create a standardized Amazon Machine Image (AMI) using Sysprep.
• You are responsible for maintaining the life cycle of the AMI. An RDS Custom for SQL Server DB
instance created from a CEV doesn't store a copy of the AMI. It maintains a pointer to the AMI that you
used to create the CEV. The AMI must exist for an RDS Custom for SQL Server DB instance to remain
operable.

Limitations for RDS Custom for SQL Server CEVs


The following limitations apply to custom engine versions with RDS Custom for SQL Server:

• You can't delete a CEV if there are resources, such as DB instances or DB snapshots, associated with it.


• To create an RDS Custom for SQL Server DB instance, a CEV must have a status of pending-
validation, available, failed, or validating. You can't create an RDS Custom for SQL Server
DB instance using a CEV if the CEV status is incompatible-image-configuration.
• To modify an RDS Custom for SQL Server DB instance to use a new CEV, the CEV must have a status of
available.
• You can't create an AMI or CEV from an existing RDS Custom for SQL Server DB instance.
• You can't modify an existing CEV to use a different AMI. However, you can modify an RDS Custom for
SQL Server DB instance to use a different CEV. For more information, see Modifying an RDS Custom for
SQL Server DB instance (p. 1141).
• Cross-Region copy of CEVs isn't supported.
• Cross-account copy of CEVs isn't supported.
• SQL Server Transparent Data Encryption (TDE) isn't supported.
• You can't restore or recover a CEV after you delete it. However, you can create a new CEV from the
same AMI.
• An RDS Custom for SQL Server DB instance stores your SQL Server database files in the D:\ drive. The
AMI associated with a CEV should store the Microsoft SQL Server system database files in the C:\ drive.
• An RDS Custom for SQL Server DB instance retains your configuration changes made to SQL Server.
Any configuration changes to the OS on a running RDS Custom for SQL Server DB instance created
from a CEV aren't retained. If you need to make a permanent configuration change to the OS and have
it retained as your new baseline configuration, create a new CEV and modify the DB instance to use the
new CEV.
Important
Modifying an RDS Custom for SQL Server DB instance to use a new CEV is an offline
operation. You can perform the modification immediately or schedule it to occur during a
weekly maintenance window.
• When you modify a CEV, Amazon RDS doesn't push those modifications to any associated RDS Custom
for SQL Server DB instances. You must modify each RDS Custom for SQL Server DB instance to use
the new or updated CEV. For more information, see Modifying an RDS Custom for SQL Server DB
instance (p. 1141).
Important
If an AMI used by a CEV is deleted, any modifications that may require host replacement, for
example, scaling compute, will fail. The RDS Custom for SQL Server DB instance will then be
placed outside of the RDS support perimeter. We recommend that you avoid deleting any AMI
that's associated with a CEV.

Creating a CEV for RDS Custom for SQL Server


You can create a custom engine version (CEV) using the AWS Management Console or the AWS CLI. You
can then use the CEV to create an RDS Custom for SQL Server DB instance.

Make sure that the Amazon Machine Image (AMI) is in the same AWS account and Region as your CEV.
Otherwise, the process to create a CEV fails.

For more information, see Creating and connecting to a DB instance for Amazon RDS Custom for SQL
Server (p. 1130).
Important
The steps to create a CEV are the same for AMIs created with pre-installed SQL Server and those
created using bring your own media (BYOM).


Console

To create a CEV

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Custom engine versions.

The Custom engine versions page shows all CEVs that currently exist. If you haven't created any
CEVs, the table is empty.
3. Choose Create custom engine version.
4. For Engine type, choose Microsoft SQL Server.
5. For Edition, choose SQL Server Enterprise, Standard, or Web Edition.
6. For Major version, choose the major engine version that's installed on your AMI.
7. In Version details, enter a valid name in Custom engine version name.

The name format is major-engine-version.minor-engine-version.customized_string.


You can use 1–50 alphanumeric characters, underscores, dashes, and periods. For example, you
might enter the name 15.00.4249.2.my_cevtest.

Optionally, enter a description for your CEV.


8. For Installation Media, browse to or enter the AMI ID that you'd like to create the CEV from.
9. In the Tags section, add any tags to identify the CEV.
10. Choose Create custom engine version.

The Custom engine versions page appears. Your CEV is shown with the status pending-validation.

AWS CLI
To create a CEV by using the AWS CLI, run the create-custom-db-engine-version command.

The following options are required:

• --engine
• --engine-version
• --image-id

You can also specify the following options:

• --kms-key-id
• --description
• --region
• --tags

The following example creates a CEV named 15.00.4249.2.my_cevtest. Make sure that the name of
your CEV begins with the major engine version number.

Example
For Linux, macOS, or Unix:

aws rds create-custom-db-engine-version \
--engine custom-sqlserver-ee \
--engine-version 15.00.4249.2.my_cevtest \
--image-id ami-0r93cx31t5r596482 \
--kms-key-id my-kms-key \
--description "Custom SQL Server EE 15.00.4249.2 cev test"

The following partial output shows the engine, parameter groups, and other information.

"DBEngineVersions": [
{
"Engine": "custom-sqlserver-ee",
"MajorEngineVersion": "15.00",
"EngineVersion": "15.00.4249.2.my_cevtest",
"DBEngineDescription": "Microsoft SQL Server Enterprise Edition for RDS Custom for SQL
Server",
"DBEngineVersionArn": "arn:aws:rds:us-east-1:<my-account-id>:cev:custom-sqlserver-
ee/15.00.4249.2.my_cevtest/a1234a1-123c-12rd-bre1-1234567890",
"DBEngineVersionDescription": "Custom SQL Server EE 15.00.4249.2 cev test",
"KMSKeyId": "arn:aws:kms:us-east-1:<your-account-id>:key/<my-kms-key-id>",

"Image": [
"ImageId": "ami-0r93cx31t5r596482",
"Status": "pending-validation"
],
"CreateTime": "2022-11-20T19:30:01.831000+00:00",
"SupportsLogExportsToCloudwatchLogs": false,
"SupportsReadReplica": false,
"Status": "pending-validation",
"SupportsParallelQuery": false,
"SupportsGlobalDatabases": false,
"TagList": []
}
]

If the process to create a CEV fails, RDS Custom for SQL Server issues RDS-EVENT-0198 with the
message Creation failed for custom engine version major-engine-version.cev_name.
The message includes details about the failure, for example, the event prints missing files. To find
troubleshooting ideas for CEV creation issues, see Troubleshooting CEV errors for RDS Custom for SQL
Server (p. 1170).

Create an RDS Custom for SQL Server DB instance from a CEV


After you successfully create a CEV, the CEV status shows pending-validation. You can now create a
new RDS Custom for SQL Server DB instance using the CEV. To create a new RDS Custom for SQL Server
DB instance from a CEV, see Creating an RDS Custom for SQL Server DB instance (p. 1130).

Lifecycle of a CEV
The CEV lifecycle includes the following statuses.

• pending-validation: A CEV was created and is pending the validation of the associated AMI. A CEV will
  remain in pending-validation until an RDS Custom for SQL Server DB instance is created from it.
  Troubleshooting: If there are no existing tasks, create a new RDS Custom for SQL Server DB instance
  from the CEV. When creating the RDS Custom for SQL Server DB instance, the system attempts to
  validate the associated AMI for a CEV.
• validating: A creation task for the RDS Custom for SQL Server DB instance based on a new CEV is in
  progress. When creating the RDS Custom for SQL Server DB instance, the system attempts to validate
  the associated AMI of a CEV.
  Troubleshooting: Wait for the creation task of the existing RDS Custom for SQL Server DB instance to
  complete. You can use the RDS Events console to review detailed event messages for troubleshooting.
• available: The CEV was successfully validated. A CEV enters the available status once an RDS Custom
  for SQL Server DB instance has been successfully created from it.
  Troubleshooting: The CEV doesn't require any additional validation. It can be used to create additional
  RDS Custom for SQL Server DB instances or modify existing ones.
• inactive: The CEV has been modified to an inactive state.
  Troubleshooting: You can't create or upgrade an RDS Custom DB instance with this CEV. Also, you can't
  restore a DB snapshot to create a new RDS Custom DB instance with this CEV. For information about
  how to change the state to ACTIVE, see Modifying a CEV for RDS Custom for SQL Server (p. 1124).
• failed: The create DB instance step failed for this CEV before it could validate the AMI. Alternatively,
  the underlying AMI used by the CEV isn't in an available state.
  Troubleshooting: Troubleshoot the root cause for why the system couldn't create the DB instance. View
  the detailed error message and try to create a new DB instance again. Ensure that the underlying AMI
  used by the CEV is in an available state.
• incompatible-image-configuration: There was an error validating the AMI.
  Troubleshooting: View the technical details of the error. You can't attempt to validate the AMI with
  this CEV again. Review the following recommendations:
  • Ensure your CEV is named using the required naming pattern of SQL Server major version + minor
    version + customized string.
  • Ensure the SQL Server version in the CEV name matches the version provided with the AMI.
  • Ensure the OS build version meets the minimum required build version.
  • Ensure the OS major version meets the minimum required major version.
  Create a new CEV using the correct information. If needed, create a new EC2 instance using a
  supported AMI and run the Sysprep process on it.
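To check where a particular CEV is in this lifecycle from the AWS CLI, you might query its Status field; a sketch that reuses the example CEV name from this section:

aws rds describe-db-engine-versions \
--engine custom-sqlserver-ee \
--engine-version 15.00.4249.2.my_cevtest \
--include-all \
--query 'DBEngineVersions[0].Status' \
--output text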

Modifying a CEV for RDS Custom for SQL Server


You can modify a CEV using the AWS Management Console or the AWS CLI. You can modify the CEV
description or its availability status. Your CEV has one of the following status values:

• available – You can use this CEV to create a new RDS Custom DB instance or upgrade a DB instance.
This is the default status for a newly created CEV.
• inactive – You can't create or upgrade an RDS Custom DB instance with this CEV. You can't restore a
DB snapshot to create a new RDS Custom DB instance with this CEV.

You can change the CEV status from available to inactive or from inactive to available. You
might change the status to INACTIVE to prevent the accidental use of a CEV or to make a discontinued
CEV eligible for use again.

Console

To modify a CEV

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Custom engine versions.
3. Choose a CEV whose description or status you want to modify.
4. For Actions, choose Modify.
5. Make any of the following changes:

• For CEV status settings, choose a new availability status.


• For Version description, enter a new description.
6. Choose Modify CEV.


If the CEV is in use, the console displays You can't modify the CEV status. Fix the problems, then try
again.

The Custom engine versions page appears.

AWS CLI

To modify a CEV by using the AWS CLI, run the modify-custom-db-engine-version command. You can
find CEVs to modify by running the describe-db-engine-versions command.

The following options are required:

• --engine
• --engine-version cev, where cev is the name of the custom engine version that you want to
modify
• --status status, where status is the availability status that you want to assign to the CEV

The following example changes a CEV named 15.00.4249.2.my_cevtest from its current status to
inactive.

Example

For Linux, macOS, or Unix:

aws rds modify-custom-db-engine-version \
--engine custom-sqlserver-ee \
--engine-version 15.00.4249.2.my_cevtest \
--status inactive

For Windows:

aws rds modify-custom-db-engine-version ^
--engine custom-sqlserver-ee ^
--engine-version 15.00.4249.2.my_cevtest ^
--status inactive

Modifying an RDS Custom for SQL Server DB instance to use a new CEV
You can modify an existing RDS Custom for SQL Server DB instance to use a different CEV. The changes
that you can make include:

• Changing the CEV


• Changing the DB instance class
• Changing the backup retention period and backup window
• Changing the maintenance window

Console

To modify an RDS Custom for SQL Server DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.


2. In the navigation pane, choose Databases.


3. Choose the DB instance that you want to modify.
4. Choose Modify.
5. Make the following changes as needed:

a. For DB engine version, choose a different CEV.


b. Change the value for DB instance class. For supported classes, see DB instance class support for
RDS Custom for SQL Server (p. 1089).
c. Change the value for Backup retention period.
d. For Backup window, set values for the Start time and Duration.
e. For DB instance maintenance window, set values for the Start day, Start time, and Duration.
6. Choose Continue.
7. Choose Apply immediately or Apply during the next scheduled maintenance window.
8. Choose Modify DB instance.
Note
When modifying a DB instance from one CEV to another CEV, for example, when
upgrading a minor version, the SQL Server system databases, including their data and
configurations, are persisted from the current RDS Custom for SQL Server DB instance.

AWS CLI

To modify a DB instance to use a different CEV by using the AWS CLI, run the modify-db-instance
command.

The following options are required:

• --db-instance-identifier
• --engine-version cev, where cev is the name of the custom engine version that you want the DB
instance to change to.

The following example modifies a DB instance named my-cev-db-instance to use a CEV named
15.00.4249.2.my_cevtest_new and applies the change immediately.

Example

For Linux, macOS, or Unix:

aws rds modify-db-instance \
--db-instance-identifier my-cev-db-instance \
--engine-version 15.00.4249.2.my_cevtest_new \
--apply-immediately

For Windows:

aws rds modify-db-instance ^
--db-instance-identifier my-cev-db-instance ^
--engine-version 15.00.4249.2.my_cevtest_new ^
--apply-immediately

Viewing CEV details for Amazon RDS Custom for SQL Server
You can view details about your CEV by using the AWS Management Console or the AWS CLI.


Console

To view CEV details

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Custom engine versions.

The Custom engine versions page shows all CEVs that currently exist. If you haven't created any
CEVs, the page is empty.
3. Choose the name of the CEV that you want to view.
4. Choose Configuration to view the details.

AWS CLI
To view details about a CEV by using the AWS CLI, run the describe-db-engine-versions command.

You can also specify the following options:

• --include-all, to view all CEVs with any lifecycle state. Without the --include-all option, only
the CEVs in an available lifecycle state will be returned.

aws rds describe-db-engine-versions --engine custom-sqlserver-ee \
--engine-version 15.00.4249.2.my_cevtest --include-all
{
"DBEngineVersions": [
{
"Engine": "custom-sqlserver-ee",
"MajorEngineVersion": "15.00",


"EngineVersion": "15.00.4249.2.my_cevtest",
"DBParameterGroupFamily": "custom-sqlserver-ee-15.0",
"DBEngineDescription": "Microsoft SQL Server Enterprise Edition for custom
RDS",
"DBEngineVersionArn": "arn:aws:rds:us-east-1:{my-account-id}:cev:custom-
sqlserver-ee/15.00.4249.2.my_cevtest/a1234a1-123c-12rd-bre1-1234567890",
"DBEngineVersionDescription": "Custom SQL Server EE 15.00.4249.2 cev test",
"Image": {
"ImageId": "ami-0r93cx31t5r596482",
"Status": "pending-validation"
},
"DBEngineMediaType": "AWS Provided",
"CreateTime": "2022-11-20T19:30:01.831000+00:00",
"ValidUpgradeTarget": [],
"SupportsLogExportsToCloudwatchLogs": false,
"SupportsReadReplica": false,
"SupportedFeatureNames": [],
"Status": "pending-validation",
"SupportsParallelQuery": false,
"SupportsGlobalDatabases": false,
"TagList": [],
"SupportsBabelfish": false
}
]
}

You can use filters to view CEVs with a certain lifecycle status. For example, to view CEVs that have a
lifecycle status of either pending-validation, available, or failed:

aws rds describe-db-engine-versions --engine custom-sqlserver-ee \
--region us-west-2 --include-all \
--query 'DBEngineVersions[?Status == `pending-validation` || Status == `available` || Status == `failed`]'

Deleting a CEV for RDS Custom for SQL Server


You can delete a CEV using the AWS Management Console or the AWS CLI. Typically, this task takes a few
minutes.

Before deleting a CEV, make sure it isn't being used by any of the following:

• An RDS Custom DB instance


• A snapshot of an RDS Custom DB instance
• An automated backup of your RDS Custom DB instance
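One way to check whether any DB instances still reference a CEV is to filter describe-db-instances by engine version; a sketch that assumes the example CEV name used earlier (check snapshots and automated backups separately, in the console or with the corresponding describe commands):

aws rds describe-db-instances \
--query 'DBInstances[?EngineVersion==`15.00.4249.2.my_cevtest`].DBInstanceIdentifier' \
--output text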

Console

To delete a CEV

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Custom engine versions.
3. Choose the CEV that you want to delete.
4. For Actions, choose Delete.

The Delete cev_name? dialog box appears.


5. Enter delete me, and then choose Delete.


In the Custom engine versions page, the banner shows that your CEV is being deleted.

AWS CLI

To delete a CEV by using the AWS CLI, run the delete-custom-db-engine-version command.

The following options are required:

• --engine custom-sqlserver-ee
• --engine-version cev, where cev is the name of the custom engine version to be deleted

The following example deletes a CEV named 15.00.4249.2.my_cevtest.

Example

For Linux, macOS, or Unix:

aws rds delete-custom-db-engine-version \
--engine custom-sqlserver-ee \
--engine-version 15.00.4249.2.my_cevtest

For Windows:

aws rds delete-custom-db-engine-version ^
--engine custom-sqlserver-ee ^
--engine-version 15.00.4249.2.my_cevtest


Creating and connecting to a DB instance for Amazon RDS Custom for SQL Server
You can create an RDS Custom DB instance, and then connect to it using AWS Systems Manager or
Remote Desktop Protocol (RDP).
Important
Before you can create or connect to an RDS Custom for SQL Server DB instance, make sure
to complete the tasks in Setting up your environment for Amazon RDS Custom for SQL
Server (p. 1099).
You can tag RDS Custom DB instances when you create them, but don't create or modify the
AWSRDSCustom tag that's required for RDS Custom automation. For more information, see
Tagging RDS Custom for SQL Server resources (p. 1144).
The first time that you create an RDS Custom for SQL Server DB instance, you might receive the
following error: The service-linked role is in the process of being created. Try again later. If you
do, wait a few minutes and then try again to create the DB instance.

Topics
• Creating an RDS Custom for SQL Server DB instance (p. 1130)
• RDS Custom service-linked role (p. 1133)
• Connecting to your RDS Custom DB instance using AWS Systems Manager (p. 1133)
• Connecting to your RDS Custom DB instance using RDP (p. 1135)

Creating an RDS Custom for SQL Server DB instance


Create an Amazon RDS Custom for SQL Server DB instance using either the AWS Management Console
or the AWS CLI. The procedure is similar to the procedure for creating an Amazon RDS DB instance.

For more information, see Creating an Amazon RDS DB instance (p. 300).

Console

To create an RDS Custom for SQL Server DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose Create database.
4. Choose Standard create for the database creation method.
5. For Engine options, choose Microsoft SQL Server for the engine type.
6. For Database management type, choose Amazon RDS Custom.
7. In the Edition section, choose the DB engine edition that you want to use. For RDS Custom for SQL
Server, the choices are Enterprise, Standard, and Web.
8. (Optional) If you intend to create the DB instance from a CEV, check the Use custom engine version
(CEV) check box. Select your CEV in the drop-down list.
9. For Database version, keep the SQL Server 2019 default value.
10. For Templates, choose Production.
11. In the Settings section, enter a unique name for the DB instance identifier.
12. To enter your master password, do the following:

a. In the Settings section, open Credential Settings.


b. Clear the Auto generate a password check box.


c. Change the Master username value and enter the same password in Master password and
Confirm password.

By default, the new RDS Custom DB instance uses an automatically generated password for the
master user.
13. In the DB instance size section, choose a value for DB instance class.

For supported classes, see DB instance class support for RDS Custom for SQL Server (p. 1089).
14. Choose Storage settings.
15. For RDS Custom security, do the following:

a. For IAM instance profile, choose the instance profile for your RDS Custom for SQL Server DB
instance.

The IAM instance profile must begin with AWSRDSCustom, for example
AWSRDSCustomInstanceProfileForRdsCustomInstance.
b. For Encryption, choose Enter a key ARN to list the available AWS KMS keys. Then choose your
key from the list.

An AWS KMS key is required for RDS Custom. For more information, see Make sure that you
have a symmetric encryption AWS KMS key (p. 1104).
16. For the remaining sections, specify your preferred RDS Custom DB instance settings. For information
about each setting, see Settings for DB instances (p. 308). The following settings don't appear in the
console and aren't supported:

• Processor features
• Storage autoscaling
• Availability & durability
• Password and Kerberos authentication option in Database authentication (only Password
authentication is supported)
• Database options group in Additional configuration
• Performance Insights
• Log exports
• Enable auto minor version upgrade
• Deletion protection

Backup retention period is supported, but you can't choose 0 days.


17. Choose Create database.

The View credential details button appears on the Databases page.

To view the master user name and password for the RDS Custom DB instance, choose View
credential details.

To connect to the DB instance as the master user, use the user name and password that appear.
Important
You can't view the master user password again. If you don't record it, you might have
to change it. To change the master user password after the RDS Custom DB instance is
available, modify the DB instance. For more information about modifying a DB instance, see
Managing an Amazon RDS Custom for SQL Server DB instance (p. 1138).
18. Choose Databases to view the list of RDS Custom DB instances.
19. Choose the RDS Custom DB instance that you just created.


On the RDS console, the details for the new RDS Custom DB instance appear:

• The DB instance has a status of creating until the RDS Custom DB instance is created and ready
for use. When the state changes to available, you can connect to the DB instance. Depending on
the instance class and storage allocated, it can take several minutes for the new DB instance to be
available.
• Role has the value Instance (RDS Custom).
• RDS Custom automation mode has the value Full automation. This setting means that the DB
instance provides automatic monitoring and instance recovery.

AWS CLI
You create an RDS Custom DB instance by using the create-db-instance AWS CLI command.

The following options are required:

• --db-instance-identifier
• --db-instance-class (for a list of supported instance classes, see DB instance class support for
RDS Custom for SQL Server (p. 1089))
• --engine (custom-sqlserver-ee, custom-sqlserver-se, or custom-sqlserver-web)
• --kms-key-id
• --custom-iam-instance-profile

The following example creates an RDS Custom for SQL Server DB instance named my-custom-
instance. The backup retention period is 3 days.
Note
To create a DB instance from a custom engine version (CEV), supply an existing CEV
name to the --engine-version parameter. For example, --engine-version
15.00.4249.2.my_cevtest

Example
For Linux, macOS, or Unix:

aws rds create-db-instance \
--engine custom-sqlserver-ee \
--engine-version 15.00.4073.23.v1 \
--db-instance-identifier my-custom-instance \
--db-instance-class db.m5.xlarge \
--allocated-storage 20 \
--db-subnet-group-name mydbsubnetgroup \
--master-username myuser \
--master-user-password mypassword \
--backup-retention-period 3 \
--no-multi-az \
--port 8200 \
--kms-key-id mykmskey \
--custom-iam-instance-profile AWSRDSCustomInstanceProfileForRdsCustomInstance

For Windows:

aws rds create-db-instance ^
--engine custom-sqlserver-ee ^
--engine-version 15.00.4073.23.v1 ^
--db-instance-identifier my-custom-instance ^
--db-instance-class db.m5.xlarge ^
--allocated-storage 20 ^
--db-subnet-group-name mydbsubnetgroup ^
--master-username myuser ^
--master-user-password mypassword ^
--backup-retention-period 3 ^
--no-multi-az ^
--port 8200 ^
--kms-key-id mykmskey ^
--custom-iam-instance-profile AWSRDSCustomInstanceProfileForRdsCustomInstance

Note
Specify a password other than the prompt shown here as a security best practice.

Get details about your instance by using the describe-db-instances command.

aws rds describe-db-instances --db-instance-identifier my-custom-instance

The following partial output shows the engine, parameter groups, and other information.

{
"DBInstances": [
{
"PendingModifiedValues": {},
"Engine": "custom-sqlserver-ee",
"MultiAZ": false,
"DBSecurityGroups": [],
"DBParameterGroups": [
{
"DBParameterGroupName": "default.custom-sqlserver-ee-15",
"ParameterApplyStatus": "in-sync"
}
],
"AutomationMode": "full",
"DBInstanceIdentifier": "my-custom-instance",
"TagList": []
}
]
}

RDS Custom service-linked role


A service-linked role gives Amazon RDS Custom access to resources in your AWS account. It makes using
RDS Custom easier because you don't have to manually add the necessary permissions. RDS Custom
defines the permissions of its service-linked roles, and unless defined otherwise, only RDS Custom can
assume its roles. The defined permissions include the trust policy and the permissions policy, and that
permissions policy can't be attached to any other IAM entity.

When you create an RDS Custom DB instance, both the Amazon RDS and RDS Custom service-linked
roles are created (if they don't already exist) and used. For more information, see Using service-linked
roles for Amazon RDS (p. 2684).

The first time that you create an RDS Custom for SQL Server DB instance, you might receive the
following error: The service-linked role is in the process of being created. Try again later. If you do, wait a
few minutes and then try again to create the DB instance.

Connecting to your RDS Custom DB instance using AWS Systems Manager
After you create your RDS Custom DB instance, you can connect to it using AWS Systems Manager
Session Manager. Session Manager is a Systems Manager capability that you can use to manage Amazon


EC2 instances through a browser-based shell or through the AWS CLI. For more information, see AWS
Systems Manager Session Manager.

Console

To connect to your DB instance using Session Manager

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS Custom DB instance to which
you want to connect.
3. Choose Configuration.
4. Note the Resource ID value for your DB instance. For example, the resource ID might be db-
ABCDEFGHIJKLMNOPQRS0123456.
5. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
6. In the navigation pane, choose Instances.
7. Look for the name of your EC2 instance, and then choose the instance ID associated with it. For
example, the instance ID might be i-abcdefghijklm01234.
8. Choose Connect.
9. Choose Session Manager.
10. Choose Connect.

A window opens for your session.

AWS CLI
You can connect to your RDS Custom DB instance using the AWS CLI. This technique requires the Session
Manager plugin for the AWS CLI. To learn how to install the plugin, see Install the Session Manager
plugin for the AWS CLI.

To find the DB resource ID of your RDS Custom DB instance, use describe-db-instances.

aws rds describe-db-instances \
--query 'DBInstances[*].[DBInstanceIdentifier,DbiResourceId]' \
--output text

The following sample output shows the resource ID for your RDS Custom instance. The prefix is db-.

db-ABCDEFGHIJKLMNOPQRS0123456

To find the EC2 instance ID of your DB instance, use aws ec2 describe-instances. The following
example uses db-ABCDEFGHIJKLMNOPQRS0123456 for the resource ID.

aws ec2 describe-instances \
--filters "Name=tag:Name,Values=db-ABCDEFGHIJKLMNOPQRS0123456" \
--output text \
--query 'Reservations[*].Instances[*].InstanceId'

The following sample output shows the EC2 instance ID.

i-abcdefghijklm01234

Use the aws ssm start-session command, supplying the EC2 instance ID in the --target
parameter.


aws ssm start-session --target "i-abcdefghijklm01234"

A successful connection looks like the following.

Starting session with SessionId: yourid-abcdefghijklm1234


[ssm-user@ip-123-45-67-89 bin]$

Connecting to your RDS Custom DB instance using RDP


After you create your RDS Custom DB instance, you can connect to this instance using an RDP client. The
procedure is the same as for connecting to an Amazon EC2 instance. For more information, see Connect
to your Windows instance.

To connect to the DB instance, you need the key pair associated with the instance. RDS
Custom creates the key pair for you. The pair name uses the prefix do-not-delete-rds-
custom-DBInstanceIdentifier. AWS Secrets Manager stores your private key as a secret.

Complete the task in the following steps:

1. Configure your DB instance to allow RDP connections (p. 1135).


2. Retrieve your secret key (p. 1136).
3. Connect to your EC2 instance using the RDP utility (p. 1137).

Configure your DB instance to allow RDP connections


To allow RDP connections, configure your VPC security group and set a firewall rule on the host.

Configure your VPC security group

Make sure that the VPC security group associated with your DB instance permits inbound connections on
port 3389 for Transmission Control Protocol (TCP). To learn how to configure your VPC security group,
see Configure your VPC security group (p. 1110).

Set the firewall rule on the host

To permit inbound connections on port 3389 for TCP, set a firewall rule on the host. The following
examples show how to do this.

We recommend that you use the specific -Profile value: Public, Private, or Domain. Using Any
refers to all three values. You can also specify a combination of values separated by a comma. For more
information about setting firewall rules, see Set-NetFirewallRule in the Microsoft documentation.

To use Systems Manager Session Manager to set a firewall rule

1. Connect to Session Manager as shown in Connecting to your RDS Custom DB instance using AWS
Systems Manager (p. 1133).
2. Run the following command.

Set-NetFirewallRule -DisplayName "Remote Desktop - User Mode (TCP-In)" -Direction Inbound -LocalAddress Any -Profile Any

To use Systems Manager CLI commands to set a firewall rule

1. Use the following command to open RDP on the host.


OPEN_RDP_COMMAND_ID=$(aws ssm send-command --region $AWS_REGION \
--instance-ids $RDS_CUSTOM_INSTANCE_EC2_ID \
--document-name "AWS-RunPowerShellScript" \
--parameters '{"commands":["Set-NetFirewallRule -DisplayName \"Remote Desktop - User Mode (TCP-In)\" -Direction Inbound -LocalAddress Any -Profile Any"]}' \
--comment "Open RDP port" | jq -r ".Command.CommandId")

2. Use the command ID returned in the output to get the status of the previous command. To use the
following query to return the command ID, make sure that you have the jq plug-in installed.

aws ssm list-commands \
--region $AWS_REGION \
--command-id $OPEN_RDP_COMMAND_ID

Retrieve your secret key


Retrieve your secret key using either AWS Management Console or the AWS CLI.

Console

To retrieve the secret key

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS Custom DB instance to which
you want to connect.
3. Choose the Configuration tab.
4. Note the DB instance ID for your DB instance, for example, my-custom-instance.
5. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
6. In the navigation pane, choose Instances.
7. Look for the name of your EC2 instance, and then choose the instance ID associated with it.

In this example, the instance ID is i-abcdefghijklm01234.


8. In Details, find Key pair name. The pair name includes the DB identifier. In this example, the pair
name is do-not-delete-rds-custom-my-custom-instance-0d726c.
9. In the instance summary, find Public IPv4 DNS. For the example, the public DNS might be
ec2-12-345-678-901.us-east-2.compute.amazonaws.com.
10. Open the AWS Secrets Manager console at https://console.aws.amazon.com/secretsmanager/.
11. Choose the secret that has the same name as your key pair.
12. Choose Retrieve secret value.

AWS CLI

To retrieve the private key

1. Get the list of your RDS Custom DB instances by calling the aws rds describe-db-instances
command.

aws rds describe-db-instances \
--query 'DBInstances[*].[DBInstanceIdentifier,DbiResourceId]' \
--output text


2. Choose the DB instance identifier from the sample output, for example do-not-delete-rds-
custom-my-custom-instance.
3. Find the EC2 instance ID of your DB instance by calling the aws ec2 describe-instances
command. The following example uses the EC2 instance name to describe the DB instance.

aws ec2 describe-instances \
--filters "Name=tag:Name,Values=do-not-delete-rds-custom-my-custom-instance" \
--output text \
--query 'Reservations[*].Instances[*].InstanceId'

The following sample output shows the EC2 instance ID.

i-abcdefghijklm01234

4. Find the key name by specifying the EC2 instance ID, as shown in the following example.

aws ec2 describe-instances \
--instance-ids i-abcdefghijklm01234 \
--output text \
--query 'Reservations[*].Instances[*].KeyName'

The following sample output shows the key name, which uses the prefix do-not-delete-rds-
custom-DBInstanceIdentifier.

do-not-delete-rds-custom-my-custom-instance-0d726c
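From the CLI, you might then fetch the private key from AWS Secrets Manager and save it as a .pem file for the RDP utility. This sketch assumes the secret shares the key pair's name, as described in the console procedure; if the secret value is stored as JSON rather than the raw key, extract the key field before saving it.

aws secretsmanager get-secret-value \
--secret-id do-not-delete-rds-custom-my-custom-instance-0d726c \
--query SecretString \
--output text > my-custom-instance.pem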

Connect to your EC2 instance using the RDP utility


Follow the procedure in Connect to your Windows instance using RDP in the Amazon EC2 User Guide for
Windows Instances. This procedure assumes that you created a .pem file that contains your private key.

1137
Amazon Relational Database Service User Guide
Managing an RDS Custom for SQL Server DB instance

Managing an Amazon RDS Custom for SQL Server DB instance
Amazon RDS Custom for SQL Server supports a subset of the usual management tasks for Amazon
RDS DB instances. Following, you can find instructions for the supported RDS Custom for SQL Server
management tasks using the AWS Management Console and the AWS CLI.

Topics
• Pausing and resuming RDS Custom automation (p. 1138)
• Modifying an RDS Custom for SQL Server DB instance (p. 1141)
• Modifying the storage for an RDS Custom for SQL Server DB instance (p. 1142)
• Tagging RDS Custom for SQL Server resources (p. 1144)
• Deleting an RDS Custom for SQL Server DB instance (p. 1144)
• Starting and stopping an RDS Custom for SQL Server DB instance (p. 1146)

Pausing and resuming RDS Custom automation


RDS Custom automatically provides monitoring and instance recovery for an RDS Custom for SQL Server
DB instance. If you need to customize the instance, do the following:

1. Pause RDS Custom automation for a specified period. The pause ensures that your customizations
don't interfere with RDS Custom automation.
2. Customize the RDS Custom for SQL Server DB instance as needed.
3. Do either of the following:
• Resume automation manually.
• Wait for the pause period to end. In this case, RDS Custom resumes monitoring and instance
recovery automatically.

Important
Pausing and resuming automation are the only supported automation tasks when modifying an
RDS Custom for SQL Server DB instance.

Console

To pause or resume RDS Custom automation

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS Custom DB instance that you
want to modify.
3. Choose Modify. The Modify DB instance page appears.
4. For RDS Custom automation mode, choose one of the following options:

• Paused pauses the monitoring and instance recovery for the RDS Custom DB instance. Enter the
pause duration that you want (in minutes) for Automation mode duration. The minimum value is
60 minutes (default). The maximum value is 1,440 minutes.
• Full automation resumes automation.
5. Choose Continue to check the summary of modifications.

A message indicates that RDS Custom will apply the changes immediately.


6. If your changes are correct, choose Modify DB instance. Or choose Back to edit your changes or
Cancel to cancel your changes.

On the RDS console, the details for the modification appear. If you paused automation, the Status of
your RDS Custom DB instance indicates Automation paused.
7. (Optional) In the navigation pane, choose Databases, and then your RDS Custom DB instance.

In the Summary pane, RDS Custom automation mode indicates the automation status. If
automation is paused, the value is Paused. Automation resumes in num minutes.

AWS CLI

To pause or resume RDS Custom automation, use the modify-db-instance AWS CLI command.
Identify the DB instance using the required parameter --db-instance-identifier. Control the
automation mode with the following parameters:

• --automation-mode specifies the pause state of the DB instance. Valid values are all-paused,
which pauses automation, and full, which resumes it.
• --resume-full-automation-mode-minutes specifies the duration of the pause. The default value
is 60 minutes.

Note
Regardless of whether you specify --no-apply-immediately or --apply-immediately,
RDS Custom applies modifications asynchronously as soon as possible.

In the command response, ResumeFullAutomationModeTime indicates the resume time as a UTC
timestamp. When the automation mode is all-paused, you can use modify-db-instance to resume
automation mode or extend the pause period. No other modify-db-instance options are supported.
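
Before pausing or extending a pause, you might want to check the current automation mode and the
scheduled resume time. The following is a minimal sketch (not part of the documented procedure) that
assumes a DB instance named my-custom-instance; for Linux, macOS, or Unix:

aws rds describe-db-instances \
    --db-instance-identifier my-custom-instance \
    --query 'DBInstances[0].[AutomationMode,ResumeFullAutomationModeTime]' \
    --output text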

The following example pauses automation for my-custom-instance for 90 minutes.

Example

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier my-custom-instance \
    --automation-mode all-paused \
    --resume-full-automation-mode-minutes 90

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier my-custom-instance ^
    --automation-mode all-paused ^
    --resume-full-automation-mode-minutes 90

The following example extends the pause duration for an extra 30 minutes. The 30 minutes is added to
the original time shown in ResumeFullAutomationModeTime.

Example

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier my-custom-instance \
    --automation-mode all-paused \
    --resume-full-automation-mode-minutes 30

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier my-custom-instance ^
    --automation-mode all-paused ^
    --resume-full-automation-mode-minutes 30

The following example resumes full automation for my-custom-instance.

Example

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier my-custom-instance \
    --automation-mode full

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier my-custom-instance ^
    --automation-mode full

In the following partial sample output, the pending AutomationMode value is full.

{
    "DBInstance": {
        "PubliclyAccessible": true,
        "MasterUsername": "admin",
        "MonitoringInterval": 0,
        "LicenseModel": "bring-your-own-license",
        "VpcSecurityGroups": [
            {
                "Status": "active",
                "VpcSecurityGroupId": "0123456789abcdefg"
            }
        ],
        "InstanceCreateTime": "2020-11-07T19:50:06.193Z",
        "CopyTagsToSnapshot": false,
        "OptionGroupMemberships": [
            {
                "Status": "in-sync",
                "OptionGroupName": "default:custom-oracle-ee-19"
            }
        ],
        "PendingModifiedValues": {
            "AutomationMode": "full"
        },
        "Engine": "custom-oracle-ee",
        "MultiAZ": false,
        "DBSecurityGroups": [],
        "DBParameterGroups": [
            {
                "DBParameterGroupName": "default.custom-oracle-ee-19",
                "ParameterApplyStatus": "in-sync"
            }
        ],
        ...
        "ReadReplicaDBInstanceIdentifiers": [],
        "AllocatedStorage": 250,
        "DBInstanceArn": "arn:aws:rds:us-west-2:012345678912:db:my-custom-instance",
        "BackupRetentionPeriod": 3,
        "DBName": "ORCL",
        "PreferredMaintenanceWindow": "fri:10:56-fri:11:26",
        "Endpoint": {
            "HostedZoneId": "ABCDEFGHIJKLMNO",
            "Port": 8200,
            "Address": "my-custom-instance.abcdefghijk.us-west-2.rds.amazonaws.com"
        },
        "DBInstanceStatus": "automation-paused",
        "IAMDatabaseAuthenticationEnabled": false,
        "AutomationMode": "all-paused",
        "EngineVersion": "19.my_cev1",
        "DeletionProtection": false,
        "AvailabilityZone": "us-west-2a",
        "DomainMemberships": [],
        "StorageType": "gp2",
        "DbiResourceId": "db-ABCDEFGHIJKLMNOPQRSTUVW",
        "ResumeFullAutomationModeTime": "2020-11-07T20:56:50.565Z",
        "KmsKeyId": "arn:aws:kms:us-west-2:012345678912:key/aa111a11-111a-11a1-1a11-1111a11a1a1a",
        "StorageEncrypted": false,
        "AssociatedRoles": [],
        "DBInstanceClass": "db.m5.xlarge",
        "DbInstancePort": 0,
        "DBInstanceIdentifier": "my-custom-instance",
        "TagList": []
    }
}

Modifying an RDS Custom for SQL Server DB instance


Modifying an RDS Custom for SQL Server DB instance is similar to modifying an Amazon RDS DB instance,
but the changes that you can make are limited to the following:

• Changing the DB instance class


• Changing the backup retention period and backup window
• Changing the maintenance window
• Upgrading the DB engine version when a new version becomes available
• Changing the allocated storage, provisioned IOPS, and storage type
• Changing the database port
• Changing the DB instance identifier
• Changing the master credentials
• Allowing and removing Multi-AZ deployments
• Allowing public access
• Changing the security groups
• Changing subnet groups

The following limitations apply to modifying an RDS Custom for SQL Server DB instance:

• Custom DB option and parameter groups aren't supported.


• Any storage volumes that you attach manually to your RDS Custom DB instance are outside the
support perimeter.

For more information, see RDS Custom support perimeter (p. 985).


Console

To modify an RDS Custom for SQL Server DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to modify.
4. Choose Modify.
5. Make the following changes as needed:

a. For DB engine version, choose the new version.


b. Change the value for DB instance class. For supported classes, see DB instance class support for
RDS Custom for SQL Server (p. 1089)
c. Change the value for Backup retention period.
d. For Backup window, set values for the Start time and Duration.
e. For DB instance maintenance window, set values for the Start day, Start time, and Duration.
6. Choose Continue.
7. Choose Apply immediately or Apply during the next scheduled maintenance window.
8. Choose Modify DB instance.

AWS CLI

To modify an RDS Custom for SQL Server DB instance, use the modify-db-instance AWS CLI command.
Set the following parameters as needed:

• --db-instance-class – For supported classes, see DB instance class support for RDS Custom for
SQL Server (p. 1089)
• --engine-version – The version number of the database engine to which you're upgrading.
• --backup-retention-period – How long to retain automated backups, from 0–35 days.
• --preferred-backup-window – The daily time range during which automated backups are created.
• --preferred-maintenance-window – The weekly time range (in UTC) during which system
maintenance can occur.
• --apply-immediately – Use --apply-immediately to apply the changes immediately.

Or use --no-apply-immediately (the default) to apply the changes during the next maintenance
window.
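
As an illustration, the following sketch (for Linux, macOS, or Unix) changes the DB instance class and
backup retention period for a hypothetical DB instance named my-custom-instance; the instance class
shown is only an example and must be one of the supported classes:

aws rds modify-db-instance \
    --db-instance-identifier my-custom-instance \
    --db-instance-class db.m5.2xlarge \
    --backup-retention-period 7 \
    --no-apply-immediately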

Modifying the storage for an RDS Custom for SQL Server DB instance
Modifying storage for an RDS Custom for SQL Server DB instance is similar to modifying storage for an
Amazon RDS DB instance, but you can only do the following:

• Increase the allocated storage size.


• Change the storage type. You can use available storage types such as General Purpose or Provisioned
IOPS. Provisioned IOPS is supported for gp3 and io1 storage types.
• Change the provisioned IOPS, if you're using a volume type that supports provisioned IOPS, such as
io1 or gp3.


The following limitations apply to modifying the storage for an RDS Custom for SQL Server DB instance:

• The minimum allocated storage size for RDS Custom for SQL Server is 20 GiB, and the maximum
supported storage size is 16 TiB.
• As with Amazon RDS, you can't decrease the allocated storage. This is a limitation of Amazon Elastic
Block Store (Amazon EBS) volumes. For more information, see Working with storage for Amazon RDS
DB instances (p. 478)
• Storage autoscaling isn't supported for RDS Custom for SQL Server DB instances.
• Any storage volumes that you manually attach to your RDS Custom DB instance aren't considered
for storage scaling. Only the RDS-provided default data volumes (that is, the D drive) are considered
for storage scaling.

For more information, see RDS Custom support perimeter (p. 985).
• Scaling storage usually doesn't cause any outage or performance degradation of the DB instance. After
you modify the storage size for a DB instance, the status of the DB instance is storage-optimization.
• Storage optimization can take several hours. You can't make further storage modifications for either
six (6) hours or until storage optimization has completed on the instance, whichever is longer. For more
information, see Working with storage for Amazon RDS DB instances (p. 478)

For more information about storage, see Amazon RDS DB instance storage (p. 101).

For general information about storage modification, see Working with storage for Amazon RDS DB
instances (p. 478).

Console

To modify the storage for an RDS Custom for SQL Server DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to modify.
4. Choose Modify.
5. Make the following changes as needed:

a. Enter a new value for Allocated storage. It must be greater than the current value, and from 20
GiB–16 TiB.
b. Change the value for Storage type. You can use available storage types like General Purpose or
Provisioned IOPS. Provisioned IOPS is supported for gp3 and io1 storage types.
c. If you are specifying volume types that support provisioned IOPS, you can define the
Provisioned IOPS value.
6. Choose Continue.
7. Choose Apply immediately or Apply during the next scheduled maintenance window.
8. Choose Modify DB instance.

AWS CLI

To modify the storage for an RDS Custom for SQL Server DB instance, use the modify-db-instance AWS
CLI command. Set the following parameters as needed:

• --allocated-storage – Amount of storage to be allocated for the DB instance, in gibibytes. It must
be greater than the current value, and from 20–16,384 GiB.
• --storage-type – The storage type, for example, gp3, gp2, or io1.


• --iops – Provisioned IOPS for the DB instance. You can specify this only for storage types that
support provisioned IOPS, like io1.
• --apply-immediately – Use --apply-immediately to apply the storage changes immediately.

Or use --no-apply-immediately (the default) to apply the changes during the next maintenance
window.

The following example changes the storage size of my-custom-instance to 200 GiB, storage type to io1,
and Provisioned IOPS to 3000.

Example

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier my-custom-instance \
    --storage-type io1 \
    --iops 3000 \
    --allocated-storage 200 \
    --apply-immediately

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier my-custom-instance ^
    --storage-type io1 ^
    --iops 3000 ^
    --allocated-storage 200 ^
    --apply-immediately

Tagging RDS Custom for SQL Server resources


You can tag RDS Custom resources just as you tag Amazon RDS resources, but note these important differences:

• Don't create or modify the AWSRDSCustom tag that's required for RDS Custom automation. If you do,
you might break the automation.
• Tags added to RDS Custom DB instances during creation are propagated to all other related RDS
Custom resources.
• Tags aren't propagated when you add them to RDS Custom resources after DB instance creation.

For general information about resource tagging, see Tagging Amazon RDS resources (p. 461).
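
As an illustration, the following sketch adds a tag to an RDS Custom for SQL Server DB instance by using
the add-tags-to-resource AWS CLI command. The ARN and the tag key and value are placeholders, not
values from this guide. For Linux, macOS, or Unix:

aws rds add-tags-to-resource \
    --resource-name arn:aws:rds:us-west-2:123456789012:db:my-custom-instance \
    --tags "Key=environment,Value=test"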

Deleting an RDS Custom for SQL Server DB instance


To delete an RDS Custom for SQL Server DB instance, do the following:

• Provide the name of the DB instance.


• Choose or clear the option to take a final DB snapshot of the DB instance.
• Choose or clear the option to retain automated backups.

You can delete an RDS Custom for SQL Server DB instance using the console or the CLI. The time
required to delete the DB instance can vary depending on the backup retention period (that is, how many
backups to delete), how much data is deleted, and whether a final snapshot is taken.


Note
You can't create a final DB snapshot of your DB instance if it has a status of creating, failed,
incompatible-create, incompatible-restore, or incompatible-network. For more
information, see Viewing Amazon RDS DB instance status (p. 684).
Important
When you choose to take a final snapshot, we recommend that you avoid writing data to your
DB instance while the DB instance deletion is in progress. Once the DB instance deletion is
initiated, data changes are not guaranteed to be captured by the final snapshot.

Console

To delete an RDS Custom DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the RDS Custom for SQL Server DB
instance that you want to delete. RDS Custom for SQL Server DB instances show the role Instance
(RDS Custom for SQL Server).
3. For Actions, choose Delete.
4. To take a final snapshot, choose Create final snapshot, and provide a name for the Final snapshot
name.
5. To retain automated backups, choose Retain automated backups.
6. Enter delete me in the box.
7. Choose Delete.

AWS CLI

You delete an RDS Custom for SQL Server DB instance by using the delete-db-instance AWS CLI
command. Identify the DB instance using the required parameter --db-instance-identifier. The
remaining parameters are the same as for an Amazon RDS DB instance.

The following example deletes the RDS Custom for SQL Server DB instance named my-custom-
instance, takes a final snapshot, and retains automated backups.

Example

For Linux, macOS, or Unix:

aws rds delete-db-instance \
    --db-instance-identifier my-custom-instance \
    --no-skip-final-snapshot \
    --final-db-snapshot-identifier my-custom-instance-final-snapshot \
    --no-delete-automated-backups

For Windows:

aws rds delete-db-instance ^
    --db-instance-identifier my-custom-instance ^
    --no-skip-final-snapshot ^
    --final-db-snapshot-identifier my-custom-instance-final-snapshot ^
    --no-delete-automated-backups

To take a final snapshot, the --final-db-snapshot-identifier option is required and must be specified.


To skip the final snapshot, specify the --skip-final-snapshot option instead of the --no-skip-
final-snapshot and --final-db-snapshot-identifier options in the command.

To delete automated backups, specify the --delete-automated-backups option instead of the
--no-delete-automated-backups option in the command.

Starting and stopping an RDS Custom for SQL Server DB instance
You can start and stop your RDS Custom for SQL Server DB instance. The same general requirements
and limitations for RDS for SQL Server DB instances apply to stopping and starting your RDS Custom
for SQL Server DB instances. For more information, see Stopping an Amazon RDS DB instance
temporarily (p. 381).

The following considerations also apply to starting and stopping your RDS Custom for SQL Server DB
instance:

• Modifying an EC2 instance attribute of an RDS Custom for SQL Server DB instance while the DB
instance is STOPPED isn't supported.
• You can stop and start an RDS Custom for SQL Server DB instance only if it's configured for a
single Availability Zone. You can't stop an RDS Custom for SQL Server DB instance in a Multi-AZ
configuration.
• RDS Custom creates a SYSTEM snapshot when you stop an RDS Custom for SQL Server DB instance. The
snapshot is automatically deleted when you start the RDS Custom for SQL Server DB instance again.
• If you delete your EC2 instance while your RDS Custom for SQL Server DB instance is stopped, the C:
drive is replaced when you start the RDS Custom for SQL Server DB instance again.
• The C: drive, hostname, and your custom configurations are persisted when you stop an RDS Custom
for SQL Server DB instance, as long as you don't modify the instance type.
• The following actions result in RDS Custom placing the DB instance outside the support perimeter,
and you're still charged for DB instance hours:
• Starting the underlying EC2 instance while the RDS Custom DB instance is stopped. To resolve this,
call the start-db-instance Amazon RDS API operation, or stop the EC2 instance so that the RDS
Custom DB instance returns to STOPPED.
• Stopping the underlying EC2 instance while the RDS Custom for SQL Server DB instance is ACTIVE.

For more details about stopping and starting DB instances, see Stopping an Amazon RDS DB instance
temporarily (p. 381), and Starting an Amazon RDS DB instance that was previously stopped (p. 384).
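
As a quick reference, the following sketch stops and then starts a hypothetical RDS Custom for SQL
Server DB instance named my-custom-instance by using the AWS CLI (Linux, macOS, or Unix). Wait for
the DB instance to reach the stopped state before starting it again.

# Stop the DB instance. RDS Custom creates a SYSTEM snapshot.
aws rds stop-db-instance \
    --db-instance-identifier my-custom-instance

# Start the DB instance after it has stopped.
aws rds start-db-instance \
    --db-instance-identifier my-custom-instance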


Managing a Multi-AZ deployment for RDS Custom for SQL Server
In a Multi-AZ DB instance deployment for RDS Custom for SQL Server, Amazon RDS automatically
provisions and maintains a synchronous standby replica in a different Availability Zone (AZ). The primary
DB instance is synchronously replicated across Availability Zones to a standby replica to provide data
redundancy.
Important
A Multi-AZ deployment for RDS Custom for SQL Server is different from Multi-AZ for RDS for
SQL Server. Unlike Multi-AZ for RDS for SQL Server, you must set up prerequisites for RDS
Custom for SQL Server before creating your Multi-AZ DB instance because RDS Custom runs
inside your own account, which requires permissions.
If you don't complete the prerequisites, your Multi-AZ DB instance might fail to run, or
automatically revert to a Single-AZ DB instance. For more information about prerequisites, see
Prerequisites for a Multi-AZ deployment with RDS Custom for SQL Server (p. 1149).

Running a DB instance with high availability can enhance availability during planned system
maintenance. In the event of planned database maintenance or unplanned service disruption, Amazon
RDS automatically fails over to the up-to-date secondary DB instance. This functionality lets database
operations resume quickly without manual intervention. The primary and standby instances use the
same endpoint, whose physical network address transitions to the secondary replica as part of the
failover process. You don't have to reconfigure your application when a failover occurs.

You can create an RDS Custom for SQL Server Multi-AZ deployment by specifying Multi-AZ when
creating an RDS Custom DB instance. You can use the console to convert existing RDS Custom for SQL


Server DB instances to Multi-AZ deployments by modifying the DB instance and specifying the Multi-AZ
option. You can also specify a Multi-AZ DB instance deployment with the AWS CLI or Amazon RDS API.

The RDS console shows the Availability Zone of the standby replica (the secondary AZ). You can also use
the describe-db-instances CLI command or the DescribeDBInstances API operation to find the
secondary AZ.
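
For example, the following sketch uses describe-db-instances to return the secondary AZ of a
hypothetical DB instance named mycustomdbinstance (Linux, macOS, or Unix):

aws rds describe-db-instances \
    --db-instance-identifier mycustomdbinstance \
    --query 'DBInstances[0].SecondaryAvailabilityZone' \
    --output text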

RDS Custom for SQL Server DB instances with Multi-AZ deployment can have increased write and
commit latency compared to a Single-AZ deployment. This increase can happen because of the
synchronous data replication between DB instances. You might have a change in latency if your
deployment fails over to the standby replica, although AWS is engineered with low-latency network
connectivity between Availability Zones.
Note
For production workloads, we recommend that you use a DB instance class with Provisioned
IOPS (input/output operations per second) for fast, consistent performance. For more
information about DB instance classes, see Requirements and limitations for Amazon RDS
Custom for SQL Server (p. 1089).

Topics
• Region and version availability (p. 1148)
• Limitations for a Multi-AZ deployment with RDS Custom for SQL Server (p. 1148)
• Prerequisites for a Multi-AZ deployment with RDS Custom for SQL Server (p. 1149)
• Creating an RDS Custom for SQL Server Multi-AZ deployment (p. 1149)
• Modifying an RDS Custom for SQL Server Single-AZ deployment to a Multi-AZ deployment (p. 1149)
• Modifying an RDS Custom for SQL Server Multi-AZ deployment to a Single-AZ deployment (p. 1153)
• Failover process for an RDS Custom for SQL Server Multi-AZ deployment (p. 1154)
• Time to live (TTL) settings with applications using an RDS Custom for SQL Server Multi-AZ
deployment (p. 1156)

Region and version availability


Multi-AZ deployments for RDS Custom for SQL Server are supported for the following SQL Server
editions:

• SQL Server 2019 Enterprise Edition


• SQL Server 2019 Standard Edition
• SQL Server 2019 Web Edition

Multi-AZ deployments for RDS Custom for SQL Server are supported for the following SQL Server
versions:

• SQL Server 2019 CU18 (15.0.4261.1)


• SQL Server 2019 CU17 (15.0.4249.2)

Multi-AZ deployments for RDS Custom for SQL Server are available in all Regions where RDS Custom for
SQL Server is available. For more information on Region availability of Multi-AZ deployments for RDS
Custom for SQL Server, see RDS Custom for SQL Server (p. 153).

Limitations for a Multi-AZ deployment with RDS Custom for SQL Server
Multi-AZ deployments with RDS Custom for SQL Server have the following limitations:


• Cross-Region Multi-AZ deployments aren't supported.


• You can’t configure the secondary DB instance to accept database read activity.
• When you use a Custom Engine Version (CEV) with a Multi-AZ deployment, your secondary DB instance
will also use the same CEV. The secondary DB instance can't use a different CEV.

Prerequisites for a Multi-AZ deployment with RDS Custom for SQL Server
If you have an existing RDS Custom for SQL Server Single-AZ deployment, the following additional
prerequisites are required before modifying it to a Multi-AZ deployment. You can choose to complete
the prerequisites manually or with the provided CloudFormation template. The latest CloudFormation
template contains the prerequisites for both Single-AZ and Multi-AZ deployments.
Important
To simplify setup, we recommend that you use the latest AWS CloudFormation template file
provided in the network setup instructions to create the prerequisites. For more information, see
Configuring with AWS CloudFormation (p. 1101).
Note
When you modify an existing RDS Custom for SQL Server Single-AZ deployment to a Multi-AZ
deployment, you must complete these prerequisites. If you don't complete the prerequisites,
the Multi-AZ setup will fail. To complete the prerequisites, follow the steps in Modifying an RDS
Custom for SQL Server Single-AZ deployment to a Multi-AZ deployment (p. 1149).

• Update the RDS security group inbound and outbound rules to allow port 1120.
• Add a rule in your private network Access Control List (ACL) that allows TCP ports 0-65535 for the DB
instance VPC.
• Create new Amazon SQS VPC endpoints that allow the RDS Custom for SQL Server DB instance to
communicate with SQS. (A CLI sketch for this step follows this list.)
• Update the SQS permissions in the instance profile role.
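
As an illustration of the SQS endpoint prerequisite in the preceding list, the following sketch creates an
interface VPC endpoint for Amazon SQS by using the AWS CLI. The VPC ID, subnet IDs, security group ID,
and the Region in the service name are placeholders for your own values; using the latest CloudFormation
template remains the recommended approach.

aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Interface \
    --vpc-id vpc-0123456789abcdef0 \
    --service-name com.amazonaws.us-east-2.sqs \
    --subnet-ids subnet-0aaa0aaa0aaa0aaa0 subnet-0bbb0bbb0bbb0bbb0 \
    --security-group-ids sg-0123456789abcdef0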

Creating an RDS Custom for SQL Server Multi-AZ deployment


To create an RDS Custom for SQL Server Multi-AZ deployment, follow the steps in Creating and
connecting to a DB instance for Amazon RDS Custom for SQL Server (p. 1130).
Important
To simplify setup, we recommend that you use the latest AWS CloudFormation template file
provided in the network setup instructions. For more information, see Configuring with AWS
CloudFormation (p. 1101).

Creating a Multi-AZ deployment takes a few minutes to complete.

Modifying an RDS Custom for SQL Server Single-AZ deployment to a Multi-AZ deployment
You can modify an existing RDS Custom for SQL Server DB instance from a Single-AZ deployment to a
Multi-AZ deployment. When you modify the DB instance, Amazon RDS performs several actions:

• Takes a snapshot of the primary DB instance.


• Creates new volumes for the standby replica from the snapshot. These volumes initialize in the
background, and maximum volume performance is achieved after the data is fully initialized.
• Turns on synchronous block-level replication between the primary and secondary DB instances.


Important
We recommend that you avoid modifying your RDS Custom for SQL Server DB instance from a
Single-AZ to a Multi-AZ deployment on a production DB instance during periods of peak activity.

AWS uses a snapshot to create the standby instance to avoid downtime when you convert from
Single-AZ to Multi-AZ, but performance might be impacted during and after converting to Multi-AZ.
This impact can be significant for workloads that are sensitive to write latency. While this capability
allows large volumes to be restored quickly from snapshots, it can cause an increase in the latency of
I/O operations because of the synchronous replication. This latency can impact your database performance.

Topics
• Configuring prerequisites to modify a Single-AZ to a Multi-AZ deployment using
CloudFormation (p. 1150)
• Configuring prerequisites to modify a Single-AZ to a Multi-AZ deployment manually (p. 1151)
• Modify using the RDS console, AWS CLI, or RDS API. (p. 1152)

Configuring prerequisites to modify a Single-AZ to a Multi-AZ deployment using CloudFormation
To use a Multi-AZ deployment, you must ensure you've applied the latest CloudFormation template
with prerequisites, or manually configure the latest prerequisites. If you've already applied the latest
CloudFormation prerequisite template, you can skip these steps.

To configure the RDS Custom for SQL Server Multi-AZ deployment prerequisites using CloudFormation

1. Open the CloudFormation console at https://console.aws.amazon.com/cloudformation.


2. To start the Create Stack wizard, select the existing stack you used to create a Single-AZ deployment
and choose Update.

The Update stack page appears.


3. For Prerequisite - Prepare template, choose Replace current template.
4. For Specify template, do the following:

a. Download the latest AWS CloudFormation template file. Open the context (right-click) menu for
the link custom-sqlserver-onboard.zip and choose Save Link As.
b. Save and extract the custom-sqlserver-onboard.json file to your computer.
c. For Template source, choose Upload a template file.
d. For Choose file, navigate to and then choose custom-sqlserver-onboard.json.
5. Choose Next.

The Specify stack details page appears.


6. To keep the default options, choose Next.

The Advanced Options page appears.


7. To keep the default options, choose Next.
8. To keep the default options, choose Next.
9. On the Review Changes page, do the following:

a. For Capabilities, select the I acknowledge that AWS CloudFormation might create IAM
resources with custom names check box.
b. Choose Submit.
10. Verify the update is successful. The status of a successful operation shows UPDATE_COMPLETE.


If the update fails, any new configuration specified in the update process is rolled back, and the
existing resources remain usable. For example, if you add network ACL rules numbered 18 and 19, but
there were existing rules with the same numbers, the update returns the following error: Resource
handler returned message: "The network acl entry identified by 18 already exists." In this scenario,
you can modify the existing ACL rules to use a number lower than 18, and then retry the update.

Configuring prerequisites to modify a Single-AZ to a Multi-AZ deployment manually
Important
To simplify setup, we recommend that you use the latest AWS CloudFormation template file
provided in the network setup instructions. For more information, see Configuring prerequisites
to modify a Single-AZ to a Multi-AZ deployment using CloudFormation (p. 1150).

If you choose to configure the prerequisites manually, perform the following tasks.

1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.


2. Choose Endpoints, and then choose Create endpoint. The Create endpoint page appears.
3. For Service Category, choose AWS services.
4. In Services, search for SQS.
5. In VPC, choose the VPC where your RDS Custom for SQL Server DB instance is deployed.
6. In Subnets, choose the subnets where your RDS Custom for SQL Server DB instance is deployed.
7. In Security Groups, choose the -vpc-endpoint-sg group.
8. For Policy, choose Custom.
9. In your custom policy, replace the AWS partition, Region, accountId, and IAM-Instance-role
with your own values.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Condition": {
                "StringLike": {
                    "aws:ResourceTag/AWSRDSCustom": "custom-sqlserver"
                }
            },
            "Action": [
                "SQS:SendMessage",
                "SQS:ReceiveMessage",
                "SQS:DeleteMessage",
                "SQS:GetQueueUrl"
            ],
            "Resource": "arn:${AWS::Partition}:sqs:${AWS::Region}:${AWS::AccountId}:do-not-delete-rds-custom-*",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/{IAM-Instance-role}"
            }
        }
    ]
}

10. Update the Instance profile with permission to access Amazon SQS. Replace the AWS partition,
Region, and accountId with your own values.


{
    "Sid": "SendMessageToSQSQueue",
    "Effect": "Allow",
    "Action": [
        "SQS:SendMessage",
        "SQS:ReceiveMessage",
        "SQS:DeleteMessage",
        "SQS:GetQueueUrl"
    ],
    "Resource": [
        {
            "Fn::Sub": "arn:${AWS::Partition}:sqs:${AWS::Region}:${AWS::AccountId}:do-not-delete-rds-custom-*"
        }
    ],
    "Condition": {
        "StringLike": {
            "aws:ResourceTag/AWSRDSCustom": "custom-sqlserver"
        }
    }
}

11. Update the Amazon RDS security group inbound and outbound rules to allow port 1120. (A CLI sketch for this step follows this procedure.)

a. In Security Groups, choose the -rds-custom-instance-sg group.


b. For Inbound Rules, create a Custom TCP rule to allow port 1120 from the source -rds-
custom-instance-sg group.
c. For Outbound Rules, create a Custom TCP rule to allow port 1120 to the destination -rds-
custom-instance-sg group.
12. Add a rule in your private network Access Control List (ACL) that allows TCP ports 0-65535 for the
source subnet of the DB instance.
Note
When creating an Inbound Rule and Outbound Rule, take note of the highest existing Rule
number. The new rules you create must have a Rule number lower than 100 and not match
any existing Rule number.

a. In Network ACLs, choose the -private-network-acl group.


b. For Inbound Rules, create an All TCP rule to allow TCP ports 0-65535 with a source from
privatesubnet1 and privatesubnet2.
c. For Outbound Rules, create an All TCP rule to allow TCP ports 0-65535 to destination
privatesubnet1 and privatesubnet2.
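
As an illustration of step 11, the following sketch adds self-referencing inbound and outbound rules for
port 1120 to the RDS Custom security group by using the AWS CLI. The security group ID is a placeholder
for your own value; using the latest CloudFormation template remains the recommended approach.

# Allow inbound TCP 1120 from the same security group.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --ip-permissions 'IpProtocol=tcp,FromPort=1120,ToPort=1120,UserIdGroupPairs=[{GroupId=sg-0123456789abcdef0}]'

# Allow outbound TCP 1120 to the same security group.
aws ec2 authorize-security-group-egress \
    --group-id sg-0123456789abcdef0 \
    --ip-permissions 'IpProtocol=tcp,FromPort=1120,ToPort=1120,UserIdGroupPairs=[{GroupId=sg-0123456789abcdef0}]'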

Modify using the RDS console, AWS CLI, or RDS API.


After you've completed the prerequisites, you can modify an RDS Custom for SQL Server DB instance
from a Single-AZ to Multi-AZ deployment using the RDS console, AWS CLI, or RDS API.

Console

To modify an existing RDS Custom for SQL Server Single-AZ to Multi-AZ deployment

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.


2. In the Amazon RDS console, choose Databases.

The Databases pane appears.


3. Choose the RDS Custom for SQL Server DB instance that you want to modify.
4. For Actions, choose Convert to Multi-AZ deployment.
5. On the Confirmation page, choose Apply immediately to apply the changes immediately. Choosing
this option doesn't cause downtime, but there is a possible performance impact. Alternatively, you
can choose to apply the update during the next maintenance window. For more information, see
Using the Apply Immediately setting (p. 402).
6. On the Confirmation page, choose Convert to Multi-AZ.

AWS CLI
To convert to a Multi-AZ DB instance deployment by using the AWS CLI, call the modify-db-instance
command and set the --multi-az option. Specify the DB instance identifier and the values for
other options that you want to modify. For information about each option, see Settings for DB
instances (p. 402).

Example
The following code modifies mycustomdbinstance by including the --multi-az option. The changes
are applied during the next maintenance window by using --no-apply-immediately. Use --
apply-immediately to apply the changes immediately. For more information, see Using the Apply
Immediately setting (p. 402).

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier mycustomdbinstance \
    --multi-az \
    --no-apply-immediately

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier mycustomdbinstance ^
    --multi-az ^
    --no-apply-immediately

RDS API
To convert to a Multi-AZ DB instance deployment with the RDS API, call the ModifyDBInstance operation
and set the MultiAZ parameter to true.

Modifying an RDS Custom for SQL Server Multi-AZ deployment to a Single-AZ deployment
You can modify an existing RDS Custom for SQL Server DB instance from a Multi-AZ to a Single-AZ
deployment.

Console

To modify an RDS Custom for SQL Server DB instance from a Multi-AZ to Single-AZ
deployment.

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.


2. In the Amazon RDS console, choose Databases.

The Databases pane appears.


3. Choose the RDS Custom for SQL Server DB instance that you want to modify.
4. For Multi-AZ deployment, choose No.
5. On the Confirmation page, choose Apply immediately to apply the changes immediately. Choosing
this option doesn't cause downtime, but there is a possible performance impact. Alternatively, you
can choose to apply the update during the next maintenance window. For more information, see
Using the Apply Immediately setting (p. 402).
6. On the Confirmation page, choose Modify DB Instance.

AWS CLI

To modify a Multi-AZ deployment to a Single-AZ deployment by using the AWS CLI, call the modify-db-
instance command and include the --no-multi-az option. Specify the DB instance identifier and the
values for other options that you want to modify. For information about each option, see Settings for DB
instances (p. 402).

Example

The following code modifies mycustomdbinstance by including the --no-multi-az option. The
changes are applied during the next maintenance window by using --no-apply-immediately. Use
--apply-immediately to apply the changes immediately. For more information, see Using the Apply
Immediately setting (p. 402).

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier mycustomdbinstance \
    --no-multi-az \
    --no-apply-immediately

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier mycustomdbinstance ^
    --no-multi-az ^
    --no-apply-immediately

RDS API

To modify a Multi-AZ deployment to a Single-AZ deployment by using the RDS API, call the
ModifyDBInstance operation and set the MultiAZ parameter to false.

Failover process for an RDS Custom for SQL Server Multi-AZ deployment
If a planned or unplanned outage of your DB instance results from an infrastructure defect, Amazon RDS
automatically switches to a standby replica in another Availability Zone if you have turned on Multi-AZ.
The time that it takes for the failover to complete depends on the database activity and other conditions
at the time that the primary DB instance became unavailable. Failover times are typically 60 – 120
seconds. However, large transactions or a lengthy recovery process can increase failover time. When the
failover is complete, it can take additional time for the RDS console to show the new Availability Zone.


Note
You can force a failover manually when you reboot a DB instance with failover. For more
information on rebooting a DB instance, see Rebooting a DB instance (p. 436).

Amazon RDS handles failovers automatically so you can resume database operations as quickly as
possible without administrative intervention. The primary DB instance switches over automatically to
the standby replica if any of the conditions described in the following table occurs. You can view these
failover reasons in the RDS event log.

Failover reason: The operating system for the RDS Custom for SQL Server Multi-AZ DB instance is being
patched in an offline operation.
Description: A failover was triggered during the maintenance window for an OS patch or a security
update. For more information, see Maintaining a DB instance (p. 418).

Failover reason: The primary host of the RDS Custom for SQL Server Multi-AZ DB instance is unhealthy.
Description: The Multi-AZ DB instance deployment detected an impaired primary DB instance and failed
over.

Failover reason: The primary host of the RDS Custom for SQL Server Multi-AZ DB instance is unreachable
due to loss of network connectivity.
Description: RDS monitoring detected a network reachability failure to the primary DB instance and
triggered a failover.

Failover reason: The RDS Custom for SQL Server Multi-AZ DB instance was modified by the customer.
Description: A DB instance modification triggered a failover. For more information, see Modifying an RDS
Custom for SQL Server DB instance (p. 1141).

Failover reason: The storage volume of the primary host of the RDS Custom for SQL Server Multi-AZ DB
instance experienced a failure.
Description: The Multi-AZ DB instance deployment detected a storage issue on the primary DB instance
and failed over.

Failover reason: The user requested a failover of the RDS Custom for SQL Server Multi-AZ DB instance.
Description: The RDS Custom for SQL Server Multi-AZ DB instance was rebooted with failover. For more
information, see Rebooting a DB instance (p. 436).

Failover reason: The RDS Custom for SQL Server Multi-AZ primary DB instance is busy or unresponsive.
Description: The primary DB instance is unresponsive. We recommend that you try the following steps:
• Examine the event logs and CloudWatch logs for excessive CPU, memory, or swap space usage. For
more information, see Working with Amazon RDS event notification (p. 855).
• Create a rule that triggers on an Amazon RDS event. For more information, see Creating a rule that
triggers on an Amazon RDS event (p. 870).
• Evaluate your workload to determine whether you're using the appropriate DB instance class. For
more information, see DB instance classes (p. 11).

To determine if your Multi-AZ DB instance has failed over, you can do the following:

• Set up DB event subscriptions to notify you by email or SMS that a failover has been initiated. For
more information about events, see Working with Amazon RDS event notification (p. 855).
• View your DB events by using the RDS console or API operations.
• View the current state of your RDS Custom for SQL Server Multi-AZ DB instance deployment by using
the RDS console, CLI, or API operations.

Time to live (TTL) settings with applications using an RDS Custom for SQL Server Multi-AZ deployment
The failover mechanism automatically changes the Domain Name System (DNS) record of the DB
instance to point to the standby DB instance. As a result, you need to re-establish any existing
connections to your DB instance. Ensure that any DNS cache time-to-live (TTL) configuration value is
low, and validate that your application will not cache DNS for an extended time. A high TTL value might
prevent your application from quickly reconnecting to the DB instance after failover.
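
To check how long a client might cache the endpoint's DNS record, you can inspect the record's remaining
TTL. The following sketch uses the dig utility with a hypothetical endpoint name; the first numeric
column in each answer line is the TTL in seconds.

dig +noall +answer mycustomdbinstance.abcdefghijk.us-west-2.rds.amazonaws.com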


Backing up and restoring an Amazon RDS Custom for SQL Server DB instance
Like Amazon RDS, RDS Custom creates and saves automated backups of your RDS Custom for SQL Server
DB instance during the backup window of your DB instance. You can also back up your DB instance
manually.

The procedure is identical to taking a snapshot of an Amazon RDS DB instance. The first snapshot
of an RDS Custom DB instance contains the data for the full DB instance. Subsequent snapshots are
incremental.

Restore DB snapshots using either the AWS Management Console or the AWS CLI.

Topics
• Creating an RDS Custom for SQL Server snapshot (p. 1157)
• Restoring from an RDS Custom for SQL Server DB snapshot (p. 1158)
• Restoring an RDS Custom for SQL Server instance to a point in time (p. 1159)
• Deleting an RDS Custom for SQL Server snapshot (p. 1162)
• Deleting RDS Custom for SQL Server automated backups (p. 1163)

Creating an RDS Custom for SQL Server snapshot


RDS Custom for SQL Server creates a storage volume snapshot of your DB instance, backing up the
entire DB instance and not just individual databases. When you create a snapshot, specify which RDS
Custom for SQL Server DB instance to back up. Give your snapshot a name so you can restore from it
later.

When you create a snapshot, RDS Custom for SQL Server creates an Amazon EBS snapshot for every
volume attached to the DB instance. RDS Custom for SQL Server uses the EBS snapshot of the root
volume to register a new Amazon Machine Image (AMI). To make snapshots easy to associate with a
specific DB instance, they're tagged with DBSnapshotIdentifier, DbiResourceId, and VolumeType.

Creating a DB snapshot results in a brief I/O suspension. This suspension can last from a few seconds to
a few minutes, depending on the size and class of your DB instance. The snapshot creation time varies
with the size of your database. Because the snapshot includes the entire storage volume, the size of files,
such as temporary files, also affects snapshot creation time. To learn more about creating snapshots, see
Creating a DB snapshot (p. 613).

Create an RDS Custom for SQL Server snapshot using the console or the AWS CLI.

Console

To create an RDS Custom snapshot

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. In the list of RDS Custom DB instances, choose the instance for which you want to take a snapshot.
4. For Actions, choose Take snapshot.

The Take DB snapshot window appears.


5. For Snapshot name, enter the name of the snapshot.
6. Choose Take snapshot.


AWS CLI

You create a snapshot of an RDS Custom DB instance by using the create-db-snapshot AWS CLI
command.

Specify the following options:

• --db-instance-identifier – Identifies which RDS Custom DB instance you are going to back up
• --db-snapshot-identifier – Names your RDS Custom snapshot so you can restore from it later

In this example, you create a DB snapshot called my-custom-snapshot for an RDS Custom DB instance
called my-custom-instance.

Example

For Linux, macOS, or Unix:

aws rds create-db-snapshot \
    --db-instance-identifier my-custom-instance \
    --db-snapshot-identifier my-custom-snapshot

For Windows:

aws rds create-db-snapshot ^
    --db-instance-identifier my-custom-instance ^
    --db-snapshot-identifier my-custom-snapshot

Restoring from an RDS Custom for SQL Server DB snapshot


When you restore an RDS Custom for SQL Server DB instance, you provide the name of the DB snapshot
and a name for the new instance. You can't restore from a snapshot to an existing RDS Custom DB
instance. A new RDS Custom for SQL Server DB instance is created when you restore.

The restore process differs in the following ways from restore in Amazon RDS:

• Before restoring a snapshot, RDS Custom for SQL Server backs up existing configuration files. These
files are available on the restored instance in the directory /rdsdbdata/config/backup. RDS
Custom for SQL Server restores the DB snapshot with default parameters and overwrites the previous
database configuration files with existing ones. Thus, the restored instance doesn't preserve custom
parameters and changes to database configuration files.
• The restored database has the same name as in the snapshot. You can't specify a different name.

Console

To restore an RDS Custom DB instance from a DB snapshot

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the DB snapshot that you want to restore from.
4. For Actions, choose Restore snapshot.
5. On the Restore DB instance page, for DB instance identifier, enter the name for your restored RDS
Custom DB instance.


6. Choose Restore DB instance.

AWS CLI
You restore an RDS Custom DB snapshot by using the restore-db-instance-from-db-snapshot AWS CLI
command.

If the snapshot you are restoring from is for a private DB instance, make sure to specify both the correct
db-subnet-group-name and no-publicly-accessible. Otherwise, the DB instance defaults to
publicly accessible. The following options are required:

• db-snapshot-identifier – Identifies the snapshot from which to restore


• db-instance-identifier – Specifies the name of the RDS Custom DB instance to create from the
DB snapshot
• custom-iam-instance-profile – Specifies the instance profile associated with the underlying
Amazon EC2 instance of an RDS Custom DB instance.

The following code restores the snapshot named my-custom-snapshot for my-custom-instance.

Example
For Linux, macOS, or Unix:

aws rds restore-db-instance-from-db-snapshot \
    --db-snapshot-identifier my-custom-snapshot \
    --db-instance-identifier my-custom-instance \
    --custom-iam-instance-profile AWSRDSCustomInstanceProfileForRdsCustomInstance \
    --no-publicly-accessible

For Windows:

aws rds restore-db-instance-from-db-snapshot ^
    --db-snapshot-identifier my-custom-snapshot ^
    --db-instance-identifier my-custom-instance ^
    --custom-iam-instance-profile AWSRDSCustomInstanceProfileForRdsCustomInstance ^
    --no-publicly-accessible

Restoring an RDS Custom for SQL Server instance to a point in time
You can restore a DB instance to a specific point in time (PITR), creating a new DB instance. To support
PITR, your DB instances must have backup retention set to a nonzero value.

The latest restorable time for an RDS Custom for SQL Server DB instance depends on several factors,
but is typically within 5 minutes of the current time. To see the latest restorable time for a DB
instance, use the AWS CLI describe-db-instances command and look at the value returned in the
LatestRestorableTime field for the DB instance. To see the latest restorable time for each DB
instance in the Amazon RDS console, choose Automated backups.
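
For example, the following sketch returns the latest restorable time for a hypothetical DB instance named
my-custom-instance (Linux, macOS, or Unix):

aws rds describe-db-instances \
    --db-instance-identifier my-custom-instance \
    --query 'DBInstances[0].LatestRestorableTime' \
    --output text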

You can restore to any point in time within your backup retention period. To see the earliest restorable
time for each DB instance, choose Automated backups in the Amazon RDS console.

For general information about PITR, see Restoring a DB instance to a specified time (p. 660).

Topics
• PITR considerations for RDS Custom for SQL Server (p. 1160)


PITR considerations for RDS Custom for SQL Server


In RDS Custom for SQL Server, PITR differs in the following important ways from PITR in Amazon RDS:

• PITR only restores the databases in the DB instance. It doesn't restore the operating system or files on
the C: drive.
• For an RDS Custom for SQL Server DB instance, a database is backed up automatically and is eligible
for PITR only under the following conditions:
• The database is online.
• Its recovery model is set to FULL.
• It's writable.
• It has its physical files on the D: drive.
• It's not listed in the rds_pitr_blocked_databases table. For more information, see Making
databases ineligible for PITR (p. 1160).
• RDS Custom for SQL Server allows up to 5,000 databases per DB instance. However, the maximum
number of databases restored by a PITR operation for an RDS Custom for SQL Server DB instance is
100. The 100 databases are determined by the order of their database ID.

Other databases that aren't part of PITR can be restored from DB snapshots, including the automated
backups used for PITR.
• Adding a new database, renaming a database, or restoring a database that is eligible for PITR initiates
a snapshot of the DB instance.
• Restored databases have the same name as in the source DB instance. You can't specify a different
name.
• AWSRDSCustomSQLServerIamRolePolicy requires new permissions. For more information, see Add
an access policy to AWSRDSCustomSQLServerInstanceRole (p. 1105).
• Time zone changes aren't supported for RDS Custom for SQL Server. If you change the operating
system or DB instance time zone, PITR (and other automation) doesn't work.

Making databases ineligible for PITR

You can specify that certain RDS Custom for SQL Server databases aren't part of automated backups and
PITR. To do this, put their database_id values into a rds_pitr_blocked_databases table. Use the
following SQL script to create the table.

To create the rds_pitr_blocked_databases table

• Run the following SQL script.

create table msdb..rds_pitr_blocked_databases
(
    database_id INT NOT NULL,
    database_name SYSNAME NOT NULL,
    db_entry_updated_date datetime NOT NULL DEFAULT GETDATE(),
    db_entry_updated_by SYSNAME NOT NULL DEFAULT CURRENT_USER,
    PRIMARY KEY (database_id)
);
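
After the table exists, you add a row for each database that you want to exclude. The following statement
is an example only; example_db is a hypothetical database name, and the remaining columns use the
defaults defined in the table.

INSERT INTO msdb..rds_pitr_blocked_databases (database_id, database_name)
SELECT database_id, name
FROM sys.databases
WHERE name = 'example_db';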

For the list of eligible and ineligible databases, see the RI.End file in the RDSCustomForSQLServer/
Instances/DB_instance_resource_ID/TransactionLogMetadata directory in the Amazon S3
bucket do-not-delete-rds-custom-$ACCOUNT_ID-$REGION-unique_identifier. For more
information about the RI.End file, see Transaction logs in Amazon S3 (p. 1161).


Transaction logs in Amazon S3

The backup retention period determines whether transaction logs for RDS Custom for SQL Server
DB instances are automatically extracted and uploaded to Amazon S3. A nonzero value means that
automatic backups are created, and that the RDS Custom agent uploads the transaction logs to S3 every
5 minutes.

Transaction log files on S3 are encrypted at rest using the AWS KMS key that you provided when you
created your DB instance. For more information, see Protecting data using server-side encryption in the
Amazon Simple Storage Service User Guide.

The transaction logs for each database are uploaded to an S3 bucket named do-not-delete-
rds-custom-$ACCOUNT_ID-$REGION-unique_identifier. The RDSCustomForSQLServer/
Instances/DB_instance_resource_ID directory in the S3 bucket contains two subdirectories:

• TransactionLogs – Contains the transaction logs for each database and their respective metadata.

The transaction log file name follows the pattern yyyyMMddHHmm.database_id.timestamp, for
example:

202110202230.11.1634769287

The same file name with the suffix _metadata contains information about the transaction log such as
log sequence numbers, database name, and RdsChunkCount. RdsChunkCount determines how many
physical files represent a single transaction log file. You might see files with suffixes _0001, _0002,
and so on, which mean the physical chunks of a transaction log file. If you want to use a chunked
transaction log file, make sure to merge the chunks after downloading them.

Consider a scenario where you have the following files:


• 202110202230.11.1634769287
• 202110202230.11.1634769287_0001
• 202110202230.11.1634769287_0002
• 202110202230.11.1634769287_metadata

The RdsChunkCount is 3. The order for merging the files is the following:
202110202230.11.1634769287, 202110202230.11.1634769287_0001,
202110202230.11.1634769287_0002. (A sketch of downloading and merging these chunks follows this list.)
• TransactionLogMetadata – Contains metadata information about each iteration of transaction log
extraction.

The RI.End file contains information for all databases that had their transaction logs extracted, and
all databases that exist but didn't have their transaction logs extracted. The RI.End file name follows
the pattern yyyyMMddHHmm.RI.End.timestamp, for example:

202110202230.RI.End.1634769281
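
As referenced earlier, the following sketch downloads a chunked transaction log file and merges the
chunks in order on Linux, macOS, or Unix. The bucket and object names reuse the placeholders shown
above and are examples only.

BUCKET=s3://do-not-delete-rds-custom-$ACCOUNT_ID-$REGION-unique_identifier
PREFIX=RDSCustomForSQLServer/Instances/DB_instance_resource_ID/TransactionLogs

# Download the base file and its chunks.
aws s3 cp $BUCKET/$PREFIX/202110202230.11.1634769287 .
aws s3 cp $BUCKET/$PREFIX/202110202230.11.1634769287_0001 .
aws s3 cp $BUCKET/$PREFIX/202110202230.11.1634769287_0002 .

# Merge the chunks in order into a single transaction log file.
cat 202110202230.11.1634769287 \
    202110202230.11.1634769287_0001 \
    202110202230.11.1634769287_0002 > merged-transaction-log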

You can restore an RDS Custom for SQL Server DB instance to a point in time using the AWS
Management Console, the AWS CLI, or the RDS API.

Console

To restore an RDS Custom DB instance to a specified time

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.


2. In the navigation pane, choose Automated backups.


3. Choose the RDS Custom DB instance that you want to restore.
4. For Actions, choose Restore to point in time.

The Restore to point in time window appears.


5. Choose Latest restorable time to restore to the latest possible time, or choose Custom to choose a
time.

If you chose Custom, enter the date and time to which you want to restore the instance.

Times are shown in your local time zone, which is indicated by an offset from Coordinated Universal
Time (UTC). For example, UTC-5 is Eastern Standard Time/Central Daylight Time.
6. For DB instance identifier, enter the name of the target restored RDS Custom DB instance. The
name must be unique.
7. Choose other options as needed, such as DB instance class.
8. Choose Restore to point in time.

AWS CLI

You restore a DB instance to a specified time by using the restore-db-instance-to-point-in-time AWS CLI
command to create a new RDS Custom DB instance.

Use one of the following options to specify the backup to restore from:

• --source-db-instance-identifier mysourcedbinstance
• --source-dbi-resource-id dbinstanceresourceID
• --source-db-instance-automated-backups-arn backupARN

The custom-iam-instance-profile option is required.

The following example restores my-custom-db-instance to a new DB instance named
my-restored-custom-db-instance, as of the specified time.

Example

For Linux, macOS, or Unix:

aws rds restore-db-instance-to-point-in-time \


--source-db-instance-identifier my-custom-db-instance \
--target-db-instance-identifier my-restored-custom-db-instance \
--custom-iam-instance-profile AWSRDSCustomInstanceProfileForRdsCustomInstance \
--restore-time 2022-10-14T23:45:00.000Z

For Windows:

aws rds restore-db-instance-to-point-in-time ^


--source-db-instance-identifier my-custom-db-instance ^
--target-db-instance-identifier my-restored-custom-db-instance ^
--custom-iam-instance-profile AWSRDSCustomInstanceProfileForRdsCustomInstance ^
--restore-time 2022-10-14T23:45:00.000Z

Deleting an RDS Custom for SQL Server snapshot


You can delete DB snapshots managed by RDS Custom for SQL Server when you no longer need them.
The deletion procedure is the same for both Amazon RDS and RDS Custom DB instances.


The Amazon EBS snapshots for the binary and root volumes remain in your account for a longer time
because they might be linked to some instances running in your account or to other RDS Custom for SQL
Server snapshots. These EBS snapshots are automatically deleted after they're no longer related to any
existing RDS Custom for SQL Server resources (DB instances or backups).

Console

To delete a snapshot of an RDS Custom DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the DB snapshot that you want to delete.
4. For Actions, choose Delete snapshot.
5. Choose Delete on the confirmation page.

AWS CLI

To delete an RDS Custom snapshot, use the AWS CLI command delete-db-snapshot.

The following option is required:

• --db-snapshot-identifier – The snapshot to be deleted

The following example deletes the my-custom-snapshot DB snapshot.

Example

For Linux, macOS, or Unix:

aws rds delete-db-snapshot \


--db-snapshot-identifier my-custom-snapshot

For Windows:

aws rds delete-db-snapshot ^


--db-snapshot-identifier my-custom-snapshot

Deleting RDS Custom for SQL Server automated backups


You can delete retained automated backups for RDS Custom for SQL Server when they are no longer
needed. The procedure is the same as the procedure for deleting Amazon RDS backups.

Console

To delete a retained automated backup

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Automated backups.
3. Choose Retained.
4. Choose the retained automated backup that you want to delete.
5. For Actions, choose Delete.


6. On the confirmation page, enter delete me and choose Delete.

AWS CLI

You can delete a retained automated backup by using the AWS CLI command delete-db-instance-
automated-backup.

The following option is used to delete a retained automated backup:

• --dbi-resource-id – The resource identifier for the source RDS Custom DB instance.

You can find the resource identifier for the source DB instance of a retained automated backup by
using the AWS CLI command describe-db-instance-automated-backups.
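
For example, the following command returns the resource identifiers of retained automated backups for a
DB instance. The identifier my-custom-db-instance is a placeholder; replace it with your own DB instance
identifier (on Windows, use ^ instead of \ for line continuation).

aws rds describe-db-instance-automated-backups \
    --db-instance-identifier my-custom-db-instance \
    --query 'DBInstanceAutomatedBackups[].DbiResourceId'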

The following example deletes the retained automated backup with source DB instance resource
identifier custom-db-123ABCEXAMPLE.

Example

For Linux, macOS, or Unix:

aws rds delete-db-instance-automated-backup \


--dbi-resource-id custom-db-123ABCEXAMPLE

For Windows:

aws rds delete-db-instance-automated-backup ^


--dbi-resource-id custom-db-123ABCEXAMPLE


Migrating an on-premises database to Amazon RDS Custom for SQL Server

You can use the following process to migrate an on-premises Microsoft SQL Server database to Amazon
RDS Custom for SQL Server using native backup and restore:

1. Take a full backup of the database on the on-premises DB instance.


2. Upload the backup file to Amazon S3.
3. Download the backup file from S3 to your RDS Custom for SQL Server DB instance.
4. Restore a database using the downloaded backup file on the RDS Custom for SQL Server DB instance.

This process explains the migration of a database from on-premises to RDS Custom for SQL Server, using
native full backup and restore. To reduce the cutover time during the migration process, you might also
consider using differential or log backups.

For general information about native backup and restore for RDS for SQL Server, see Importing and
exporting SQL Server databases using native backup and restore (p. 1419).

Topics
• Prerequisites (p. 1165)
• Backing up the on-premises database (p. 1165)
• Uploading the backup file to Amazon S3 (p. 1166)
• Downloading the backup file from Amazon S3 (p. 1166)
• Restoring the backup file to the RDS Custom for SQL Server DB instance (p. 1166)

Prerequisites
Perform the following tasks before migrating the database:

1. Configure Remote Desktop Connection (RDP) for your RDS Custom for SQL Server DB instance. For
more information, see Connecting to your RDS Custom DB instance using RDP (p. 1135).
2. Configure access to Amazon S3 so you can upload and download the database backup file. For more
information, see Integrating an Amazon RDS for SQL Server DB instance with Amazon S3 (p. 1464).

Backing up the on-premises database


You use SQL Server native backup to take a full backup of the database on the on-premises DB instance.

The following example shows a backup of a database called mydatabase, with the COMPRESSION
option specified to reduce the backup file size.

To back up the on-premises database

1. Using SQL Server Management Studio (SSMS), connect to the on-premises SQL Server instance.
2. Run the following T-SQL command.

backup database mydatabase to
disk = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\mydb-full-compressed.bak'
with compression;


Uploading the backup file to Amazon S3


You use the AWS Management Console to upload the backup file mydb-full-compressed.bak to
Amazon S3.

To upload the backup file to S3

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. For Buckets, choose the name of the bucket to which you want to upload your backup file.
3. Choose Upload.
4. In the Upload window, do one of the following:

• Drag and drop mydb-full-compressed.bak to the Upload window.


• Choose Add file, choose mydb-full-compressed.bak, and then choose Open.

Amazon S3 uploads your backup file as an S3 object. When the upload completes, you can see a
success message on the Upload: status page.
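
If you prefer the command line, the following AWS CLI command is a minimal sketch of the same upload,
run from the on-premises server. The bucket name mybucket is a placeholder, and the credentials in use
must allow writing the object to the bucket.

aws s3 cp "C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\mydb-full-compressed.bak" s3://mybucket/mydb-full-compressed.bak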

Downloading the backup file from Amazon S3


You use the console to download the backup file from S3 to the RDS Custom for SQL Server DB instance.

To download the backup file from S3

1. Using RDP, connect to your RDS Custom for SQL Server DB instance.
2. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
3. In the Buckets list, choose the name of the bucket that contains your backup file.
4. Choose the backup file mydb-full-compressed.bak.
5. For Actions, choose Download as.
6. Open the context (right-click) menu for the link provided, then choose Save As.
7. Save mydb-full-compressed.bak to the D:\rdsdbdata\BACKUP directory.
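
As an alternative to downloading through the console, the following AWS CLI command is a sketch of the
same download, run on the RDS Custom for SQL Server host over your RDP session. The bucket name
mybucket is a placeholder, and the instance profile or credentials in use must allow reading the object.

aws s3 cp s3://mybucket/mydb-full-compressed.bak D:\rdsdbdata\BACKUP\mydb-full-compressed.bak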

Restoring the backup file to the RDS Custom for SQL Server DB
instance
You use SQL Server native restore to restore the backup file to your RDS Custom for SQL Server DB
instance.

In this example, the MOVE option is specified because the data and log file directories are different from
the on-premises DB instance.

To restore the backup file

1. Using SSMS, connect to your RDS Custom for SQL Server DB instance.
2. Run the following T-SQL command.

restore database mydatabase from disk='D:\rdsdbdata\BACKUP\mydb-full-compressed.bak'
with move 'mydatabase' to 'D:\rdsdbdata\DATA\mydatabase.mdf',
move 'mydatabase_log' to 'D:\rdsdbdata\DATA\mydatabase_log.ldf';


Upgrading a DB instance for Amazon RDS Custom for SQL Server

You can upgrade an Amazon RDS Custom for SQL Server DB instance by modifying it to use a new DB
engine version, the same as you do for Amazon RDS.

The same limitations for upgrading an RDS Custom for SQL Server DB instance apply as for modifying an
RDS Custom for SQL Server DB instance in general. For more information, see Modifying an RDS Custom
for SQL Server DB instance (p. 1141).

For general information about upgrading DB instances, see Upgrading a DB instance engine
version (p. 429).


Troubleshooting DB issues for Amazon RDS Custom for SQL Server

The shared responsibility model of RDS Custom provides OS shell–level access and database
administrator access. RDS Custom runs resources in your account, unlike Amazon RDS, which runs
resources in a system account. With greater access comes greater responsibility. In the following sections,
you can learn how to troubleshoot issues with Amazon RDS Custom for SQL Server DB instances.
Note
This section explains how to troubleshoot RDS Custom for SQL Server. For troubleshooting RDS
Custom for Oracle, see Troubleshooting DB issues for Amazon RDS Custom for Oracle (p. 1078).

Topics
• Viewing RDS Custom events (p. 1169)
• Subscribing to RDS Custom event notification (p. 1169)
• Troubleshooting CEV errors for RDS Custom for SQL Server (p. 1170)
• Fixing unsupported configurations in RDS Custom for SQL Server (p. 1172)

Viewing RDS Custom events


The procedure for viewing events is the same for RDS Custom and Amazon RDS DB instances. For more
information, see Viewing Amazon RDS events (p. 852).

To view RDS Custom event notification using the AWS CLI, use the describe-events command. RDS
Custom introduces several new events. The event categories are the same as for Amazon RDS. For the list
of events, see Amazon RDS event categories and event messages (p. 874).

The following example retrieves details for the events that have occurred for the specified RDS Custom
DB instance.

aws rds describe-events \


--source-identifier my-custom-instance \
--source-type db-instance

Subscribing to RDS Custom event notification


The procedure for subscribing to events is the same for RDS Custom and Amazon RDS DB instances. For
more information, see Subscribing to Amazon RDS event notification (p. 860).

To subscribe to RDS Custom event notification using the CLI, use the create-event-subscription
command. Include the following required parameters:

• --subscription-name
• --sns-topic-arn

The following example creates a subscription for backup and recovery events for an RDS Custom DB
instance in the current AWS account. Notifications are sent to an Amazon Simple Notification Service
(Amazon SNS) topic, specified by --sns-topic-arn.

aws rds create-event-subscription \


--subscription-name my-instance-events \
--source-type db-instance \
--event-categories '["backup","recovery"]' \
--sns-topic-arn arn:aws:sns:us-east-1:123456789012:interesting-events


Troubleshooting CEV errors for RDS Custom for SQL Server


When you try to create a CEV, it might fail. In this case, RDS Custom issues the RDS-EVENT-0198 event
message. For more information on viewing RDS events, see Amazon RDS event categories and event
messages (p. 874).

Use the following information to help you address possible causes.

• Message: Custom Engine Version creation expected a Sysprep'd AMI. Retry creation using a Sysprep'd AMI.
  Troubleshooting suggestions: Run Sysprep on the EC2 instance that you created from the AMI. For more
  information about prepping an AMI using Sysprep, see Create a standardized Amazon Machine Image (AMI)
  using Sysprep.

• Message: EC2 Image permissions for image (AMI_ID) weren't found for customer (Customer_ID). Verify
  customer (Customer_ID) has valid permissions on the EC2 Image.
  Troubleshooting suggestions: Verify that your account and the profile used for creation have the required
  permissions on Create EC2 Instance and Describe Images for the selected AMI.

• Message: Image (AMI_ID) doesn't exist in your account (ACCOUNT_ID). Verify (ACCOUNT_ID) is the owner of
  the EC2 image.
  Troubleshooting suggestions: Ensure the AMI exists in the same customer account.

• Message: Image id (AMI_ID) isn't valid. Specify a valid image id, and try again.
  Troubleshooting suggestions: The name of the AMI is incorrect. Ensure the correct AMI ID is provided.

• Message: Image (AMI_ID) operating system platform isn't supported. Specify a valid image, and try again.
  Troubleshooting suggestions: Choose a supported AMI that has Windows Server with SQL Server Enterprise,
  Standard, or Web edition. Choose an AMI with one of the following usage operation codes from the EC2
  Marketplace:
  • RunInstances:0102 - Windows with SQL Server Enterprise
  • RunInstances:0006 - Windows with SQL Server Standard
  • RunInstances:0202 - Windows with SQL Server Web

• Message: SQL Server Web Edition isn't supported for creating a Custom Engine Version using Bring Your
  Own Media. Specify a valid image, and try again.
  Troubleshooting suggestions: Use an AMI that contains a supported edition of SQL Server. For more
  information, see Version support for RDS Custom for SQL Server CEVs (p. 1119).

• Message: The custom engine version can't be the same as the OEV engine version. Specify a valid CEV, and
  try again.
  Troubleshooting suggestions: Classic RDS Custom for SQL Server engine versions aren't supported. For
  example, version 15.00.4073.23.v1. Use a supported version number.

• Message: The custom engine version isn't in an active state. Specify a valid CEV, and try again.
  Troubleshooting suggestions: The CEV must be in an AVAILABLE state to complete the operation. Modify the
  CEV from INACTIVE to AVAILABLE.

• Message: The custom engine version isn't valid for an upgrade. Specify a valid CEV with an engine version
  greater or equal to (X), and try again.
  Troubleshooting suggestions: The target CEV is not valid. Check the requirements for a valid upgrade path.

• Message: The custom engine version isn't valid. Names can include only lowercase letters (a-z), dashes (-),
  underscores (_), and periods (.). Specify a valid CEV, and try again.
  Troubleshooting suggestions: Follow the required CEV naming convention. For more information, see
  Requirements for RDS Custom for SQL Server CEVs (p. 1119).

• Message: The custom engine version isn't valid. Specify valid database engine version, and try again.
  Example: 15.00.4073.23-cev123.
  Troubleshooting suggestions: An unsupported DB engine version was provided. Use a supported DB engine
  version.

• Message: The expected architecture is (X) for image (AMI_ID), but architecture (Y) was found.
  Troubleshooting suggestions: Use an AMI built on the x86_64 architecture.

• Message: The expected owner of image (AMI_ID) is customer account ID (ACCOUNT_ID), but owner
  (ACCOUNT_ID) was found.
  Troubleshooting suggestions: Create the EC2 instance from the AMI that you have permission for. Run
  Sysprep on the EC2 instance to create and save a base image.

• Message: The expected platform is (X) for image (AMI_ID), but platform (Y) was found.
  Troubleshooting suggestions: Use an AMI built with the Windows platform.

• Message: The expected root device type is (X) for image %s, but root device type (Y) was found.
  Troubleshooting suggestions: Create the AMI with the EBS device type.

• Message: The expected SQL Server edition is (X), but (Y) was found.
  Troubleshooting suggestions: Choose a supported AMI that has Windows Server with SQL Server Enterprise,
  Standard, or Web edition. Choose an AMI with one of the following usage operation codes from the EC2
  Marketplace:
  • RunInstances:0102 - Windows with SQL Server Enterprise
  • RunInstances:0006 - Windows with SQL Server Standard
  • RunInstances:0202 - Windows with SQL Server Web

• Message: The expected state is (X) for image (AMI_ID), but the following state was found: (Y).
  Troubleshooting suggestions: Ensure the AMI is in a state of AVAILABLE.

• Message: The provided Windows OS name (X) isn't valid. Make sure the OS is one of the following: (Y).
  Troubleshooting suggestions: Use a supported Windows OS.

• Message: RDS expected a Windows build version greater than or equal to (X), but found version (Y).
  Troubleshooting suggestions: Use an AMI with a minimum OS build version of 14393.

• Message: RDS expected a Windows major version greater than or equal to (X).1f, but found version (Y).1f.
  Troubleshooting suggestions: Use an AMI with a minimum OS major version of 10.0.

Fixing unsupported configurations in RDS Custom for SQL Server

Because of the shared responsibility model, it's your responsibility to fix configuration issues that put
your RDS Custom for SQL Server DB instance into the unsupported-configuration state. If the issue
is with the AWS infrastructure, you can use the console or the AWS CLI to fix it. If the issue is with the
operating system or the database configuration, you can log in to the host to fix it.
Note
This section explains how to fix unsupported configurations in RDS Custom for SQL Server.
For information about RDS Custom for Oracle, see Fixing unsupported configurations in RDS
Custom for Oracle (p. 1080).

In the following table, you can find descriptions of the notifications and events that the support
perimeter sends and how to fix them. These notifications and the support perimeter are subject to
change. For background on the support perimeter, see RDS Custom support perimeter (p. 985). For event
descriptions, see Amazon RDS event categories and event messages (p. 874).

Database

• Configuration area: Database health
  RDS event message: You need to manually recover the database on EC2 instance [i-xxxxxxxxxxxxxxxxx].
  The DB instance restarted.
  Description: The support perimeter monitors the DB instance state. It also monitors how many restarts
  occurred during the previous hour and day. You're notified when the instance is in a state where it still
  exists, but you can't interact with it.
  Action: Log in to the host and examine the state of your RDS Custom for SQL Server database. For example,
  you can check the state of the SQL Server service in PowerShell (the service name MSSQLSERVER assumes
  the default instance):

    Get-Service MSSQLSERVER

  If necessary, restart your RDS Custom for SQL Server DB instance to get it running again. Sometimes you
  might need to reboot the host. After the restart, the RDS Custom agent detects that the DB instance is no
  longer in an unresponsive state. It then notifies the support perimeter to reevaluate your DB instance state.

• Configuration area: Database file locations
  RDS event message: The RDS Custom instance is going out of perimeter because an unsupported
  configuration was used for database files location.
  Description: All SQL Server database files are stored on the D: drive by default, in the D:\rdsdbdata\DATA
  directory. If you create or alter the database file location to be anywhere other than the D: drive, then RDS
  Custom places the DB instance outside the support perimeter. We strongly recommend that you don't save
  any database files on the C: drive. You can lose data on the C: drive during certain operations, such as
  hardware failure. Storage on the C: drive doesn't offer the same durability as on the D: drive, which is an
  EBS volume. Also, if database files can't be found, RDS Custom places the DB instance outside the support
  perimeter.
  Action: Store all RDS Custom for SQL Server database files on the D: drive.

• Configuration area: Shared memory connections
  RDS event message: The RDS Custom instance is going out of perimeter because an unsupported
  configuration was used for shared memory protocol.
  Description: The RDS Custom agent on the EC2 host connects to SQL Server using the shared memory
  protocol. If this protocol is turned off (Enabled is set to No), then RDS Custom can't perform its
  management actions and places the DB instance outside the support perimeter.
  Action: To bring the RDS Custom for SQL Server DB instance back within the support perimeter, turn on the
  shared memory protocol on the Protocol page of the Shared Memory Properties window by setting Enabled
  to Yes. After you enable the protocol, restart SQL Server.

Operating system

• Configuration area: RDS Custom agent status
  RDS event message: The RDS Custom instance is going out of perimeter because an unsupported
  configuration was used for RDS Custom agent.
  Description: The RDS Custom agent must always be running. The support perimeter monitors the RDS
  Custom agent process state on the host every 1 minute. On RDS Custom for SQL Server, a stopped agent is
  recovered by the monitoring service. The DB instance goes outside the support perimeter if the RDS Custom
  agent is uninstalled.
  Action: Log in to the host and make sure that the RDS Custom agent is running. You can use the following
  commands to find the agent's status.

    $name = "RDSCustomAgent"
    $service = Get-Service $name
    Write-Host $service.Status

  If the status isn't Running, you can start the service with the following command:

    Start-Service $name

• Configuration area: SSM agent status
  RDS event message: The RDS Custom instance is going out of perimeter because an unsupported
  configuration was used for SSM agent.
  Description: The SSM agent must always be running. The RDS Custom agent is responsible for making sure
  that the Systems Manager agent is running. The support perimeter monitors the SSM agent process state on
  the host every 1 minute.
  Action: For more information, see Troubleshooting SSM Agent.

AWS resources

• Configuration area: Amazon EC2 instance state
  RDS event messages: The state of the EC2 instance [i-xxxxxxxxxxxxxxxxx] has changed from [RUNNING] to
  [STOPPING]. The Amazon EC2 instance [i-xxxxxxxxxxxxxxxxx] has been terminated and can't be found. Delete
  the database instance to clean up resources. The Amazon EC2 instance [i-xxxxxxxxxxxxxxxxx] has been
  stopped. Start the instance, and restore the host configuration. For more information, see the
  troubleshooting documentation.
  Description: The support perimeter monitors EC2 instance state-change notifications. The EC2 instance
  must always be running. The AMI associated with a CEV should always be active and available.
  Action: If the EC2 instance is stopped, start it and remount the binary and data volumes. If the EC2
  instance is terminated, RDS Custom performs an automated recovery to provision a new EC2 instance.

• Configuration area: Amazon EC2 instance attributes
  RDS event message: The RDS Custom instance is going out of perimeter because an unsupported
  configuration was used for EC2 instance metadata.
  Description: The support perimeter monitors the instance type of the EC2 instance where the RDS Custom
  DB instance is running. The EC2 instance type must stay the same as when you set it up during RDS Custom
  DB instance creation.
  Action: Change the EC2 instance type back to the original type using the EC2 console or CLI. To change the
  instance type because of scaling requirements, do a PITR and specify the new instance type and class.
  However, doing this results in a new RDS Custom DB instance with a new host and Domain Name System
  (DNS) name.

• Configuration area: Amazon Elastic Block Store (Amazon EBS) volumes
  RDS event message: The RDS Custom instance is going out of perimeter because an unsupported
  configuration was used for EBS volume metadata.
  Description: RDS Custom creates two types of EBS volumes, besides the root volume created from the
  Amazon Machine Image (AMI), and associates them with the EC2 instance. The binary volume is where the
  database software binaries are located. The data volumes are where database files are located. The storage
  configurations that you set when creating the DB instance are used to configure the data volumes. The
  support perimeter monitors the following:
  • The initial EBS volumes created with the DB instance are still associated.
  • The initial EBS volumes still have the same configurations as initially set: storage type, size,
    Provisioned IOPS, and storage throughput.
  Action: If you detached any initial EBS volumes, contact AWS Support. If you modified the storage type,
  Provisioned IOPS, or storage throughput of an EBS volume, revert the modification to the original value. If
  you modified the storage size of an EBS volume, contact AWS Support.


Working with Amazon RDS on AWS Outposts

Amazon RDS on AWS Outposts extends RDS for SQL Server, RDS for MySQL, and RDS for PostgreSQL
databases to AWS Outposts environments. AWS Outposts uses the same hardware as in public AWS
Regions to bring AWS services, infrastructure, and operation models on-premises. With RDS on Outposts,
you can provision managed DB instances close to the business applications that must run on-premises.
For more information about AWS Outposts, see AWS Outposts.

You use the same AWS Management Console, AWS CLI, and RDS API to provision and manage on-
premises RDS on Outposts DB instances as you do for RDS DB instances running in the AWS Cloud. RDS
on Outposts automates tasks, such as database provisioning, operating system and database patching,
backup, and long-term archival in Amazon S3.

RDS on Outposts supports automated backups of DB instances. Network connectivity between your
Outpost and your AWS Region is required to back up and restore DB instances. All DB snapshots and
transaction logs from an Outpost are stored in your AWS Region. From your AWS Region, you can restore
a DB instance from a DB snapshot to a different Outpost. For more information, see Working with
backups (p. 591).

RDS on Outposts supports automated maintenance and upgrades of DB instances. For more information,
see Maintaining a DB instance (p. 418).

RDS on Outposts uses encryption at rest for DB instances and DB snapshots using your AWS KMS key. For
more information about encryption at rest, see Encrypting Amazon RDS resources (p. 2586).

By default, EC2 instances in Outposts subnets can use the Amazon Route 53 DNS Service to resolve
domain names to IP addresses. You might encounter longer DNS resolution times with Route 53,
depending on the path latency between your Outpost and the AWS Region. In such cases, you can use
the DNS servers installed locally in your on-premises environment. For more information, see DNS in the
AWS Outposts User Guide.

When network connectivity to the AWS Region isn't available, your DB instance continues to run locally.
You can continue to access DB instances using DNS name resolution by configuring a local DNS server
as a secondary server. However, you can't create new DB instances or take new actions on existing DB
instances. Automatic backups don't occur when there is no connectivity. If there is a DB instance failure,
the DB instance isn't automatically replaced until connectivity is restored. We recommend restoring
network connectivity as soon as possible.

Topics
• Prerequisites for Amazon RDS on AWS Outposts (p. 1178)
• Amazon RDS on AWS Outposts support for Amazon RDS features (p. 1179)
• Supported DB instance classes for Amazon RDS on AWS Outposts (p. 1182)
• Customer-owned IP addresses for Amazon RDS on AWS Outposts (p. 1184)
• Working with Multi-AZ deployments for Amazon RDS on AWS Outposts (p. 1186)
• Creating DB instances for Amazon RDS on AWS Outposts (p. 1189)
• Creating read replicas for Amazon RDS on AWS Outposts (p. 1196)
• Considerations for restoring DB instances on Amazon RDS on AWS Outposts (p. 1198)


Prerequisites for Amazon RDS on AWS Outposts


The following are prerequisites for using Amazon RDS on AWS Outposts:

• Install AWS Outposts in your on-premises data center. For more information about AWS Outposts, see
AWS Outposts.
• Make sure that you have at least one subnet available for RDS on Outposts. You can use the same
subnet for other workloads.
• Make sure that you have a reliable network connection between your Outpost and an AWS Region.


Amazon RDS on AWS Outposts support for Amazon RDS features

The following table describes the Amazon RDS features supported by Amazon RDS on AWS Outposts.

• Feature: DB instance provisioning
  Supported: Yes
  Notes: You can only create DB instances for RDS for SQL Server, RDS for MySQL, and RDS for PostgreSQL DB
  engines. The following versions are supported:
  • Microsoft SQL Server: 15.00.4043.16.v1 and higher 2019 versions, 14.00.3294.2.v1 and higher 2017
    versions, and 13.00.5820.21.v1 and higher 2016 versions
  • MySQL version 8.0.28 and higher MySQL 8.0 versions
  • All PostgreSQL 15, 14, and 13 versions, and PostgreSQL version 12.5 and higher PostgreSQL 12 versions
  More information: Creating DB instances for Amazon RDS on AWS Outposts (p. 1189)

• Feature: Connect to a Microsoft SQL Server DB instance with Microsoft SQL Server Management Studio
  Supported: Yes
  Notes: Some TLS versions and encryption ciphers might not be secure. To turn them off, follow the
  instructions in Configuring security protocols and ciphers (p. 1459).
  More information: Connecting to a DB instance running the Microsoft SQL Server database engine (p. 1380)

• Feature: Modifying the master user password
  Supported: Yes
  More information: Modifying an Amazon RDS DB instance (p. 401)

• Feature: Renaming a DB instance
  Supported: Yes
  More information: Modifying an Amazon RDS DB instance (p. 401)

• Feature: Rebooting a DB instance
  Supported: Yes
  More information: Rebooting a DB instance (p. 436)

• Feature: Stopping a DB instance
  Supported: Yes
  More information: Stopping an Amazon RDS DB instance temporarily (p. 381)

• Feature: Starting a DB instance
  Supported: Yes
  More information: Starting an Amazon RDS DB instance that was previously stopped (p. 384)

• Feature: Multi-AZ deployments
  Supported: Yes
  Notes: Multi-AZ deployments are supported on MySQL and PostgreSQL DB instances. Multi-AZ deployments do
  not support Direct VPC Routing (DVR).
  More information: Creating DB instances for Amazon RDS on AWS Outposts (p. 1189); Configuring and managing
  a Multi-AZ deployment (p. 492)

• Feature: DB parameter groups
  Supported: Yes
  More information: Working with parameter groups (p. 347)

• Feature: Read replicas
  Supported: Yes
  Notes: Read replicas are supported for MySQL and PostgreSQL DB instances. Read replicas do not support
  Direct VPC Routing (DVR).
  More information: Creating read replicas for Amazon RDS on AWS Outposts (p. 1196)

• Feature: Encryption at rest
  Supported: Yes
  Notes: RDS on Outposts doesn't support unencrypted DB instances.
  More information: Encrypting Amazon RDS resources (p. 2586)

• Feature: AWS Identity and Access Management (IAM) database authentication
  Supported: No
  More information: IAM database authentication for MariaDB, MySQL, and PostgreSQL (p. 2642)

• Feature: Associating an IAM role with a DB instance
  Supported: No
  More information: add-role-to-db-instance AWS CLI command; AddRoleToDBInstance RDS API operation

• Feature: Kerberos authentication
  Supported: No
  More information: Kerberos authentication (p. 2567)

• Feature: Tagging Amazon RDS resources
  Supported: Yes
  More information: Tagging Amazon RDS resources (p. 461)

• Feature: Option groups
  Supported: Yes
  More information: Working with option groups (p. 331)

• Feature: Modifying the maintenance window
  Supported: Yes
  More information: Maintaining a DB instance (p. 418)

• Feature: Automatic minor version upgrade
  Supported: Yes
  More information: Automatically upgrading the minor engine version (p. 431)

• Feature: Modifying the backup window
  Supported: Yes
  More information: Working with backups (p. 591); Modifying an Amazon RDS DB instance (p. 401)

• Feature: Changing the DB instance class
  Supported: Yes
  More information: Modifying an Amazon RDS DB instance (p. 401)

• Feature: Changing the allocated storage
  Supported: Yes
  More information: Modifying an Amazon RDS DB instance (p. 401)

• Feature: Storage autoscaling
  Supported: Yes
  More information: Managing capacity automatically with Amazon RDS storage autoscaling (p. 480)

• Feature: Manual and automatic DB instance snapshots
  Supported: Yes
  Notes: You can store automated backups and manual snapshots in your AWS Region. Or you can store them
  locally on your Outpost. Local backups are supported on MySQL and PostgreSQL DB instances. To store
  backups on your Outpost, make sure that you have Amazon S3 on Outposts configured. Local backups are not
  supported for Multi-AZ instance deployments.
  More information: Creating DB instances for Amazon RDS on AWS Outposts (p. 1189); Amazon S3 on Outposts;
  Creating a DB snapshot (p. 613)

• Feature: Restoring from a DB snapshot
  Supported: Yes
  Notes: You can store automated backups and manual snapshots for the restored DB instance in the parent AWS
  Region or locally on your Outpost.
  More information: Considerations for restoring DB instances on Amazon RDS on AWS Outposts (p. 1198);
  Restoring from a DB snapshot (p. 615)

• Feature: Restoring a DB instance from Amazon S3
  Supported: No
  More information: Restoring a backup into a MySQL DB instance (p. 1680)

• Feature: Exporting snapshot data to Amazon S3
  Supported: No
  More information: Exporting DB snapshot data to Amazon S3 (p. 642)

• Feature: Point-in-time recovery
  Supported: Yes
  Notes: You can store automated backups and manual snapshots for the restored DB instance in the parent AWS
  Region or locally on your Outpost, with one exception.
  More information: Considerations for restoring DB instances on Amazon RDS on AWS Outposts (p. 1198);
  Restoring a DB instance to a specified time (p. 660)

• Feature: Enhanced monitoring
  Supported: No
  More information: Monitoring OS metrics with Enhanced Monitoring (p. 797)

• Feature: Amazon CloudWatch monitoring
  Supported: Yes
  Notes: You can view the same set of metrics that are available for your databases in the AWS Region.
  More information: Monitoring Amazon RDS metrics with Amazon CloudWatch (p. 706)

• Feature: Publishing database engine logs to CloudWatch Logs
  Supported: Yes
  More information: Publishing database logs to Amazon CloudWatch Logs (p. 898)

• Feature: Event notification
  Supported: Yes
  More information: Working with Amazon RDS event notification (p. 855)

• Feature: Amazon RDS Performance Insights
  Supported: No
  More information: Monitoring DB load with Performance Insights on Amazon RDS (p. 720)

• Feature: Viewing or downloading database logs
  Supported: No
  Notes: RDS on Outposts doesn't support viewing database logs using the console or describing database logs
  using the AWS CLI or RDS API. RDS on Outposts doesn't support downloading database logs using the console
  or downloading database logs using the AWS CLI or RDS API.
  More information: Monitoring Amazon RDS log files (p. 895)

• Feature: Amazon RDS Proxy
  Supported: No
  More information: Using Amazon RDS Proxy (p. 1199)

• Feature: Stored procedures for Amazon RDS for MySQL
  Supported: Yes
  More information: RDS for MySQL stored procedure reference (p. 1757)

• Feature: Replication with external databases for RDS for MySQL
  Supported: No
  More information: Configuring binary log file position replication with an external source instance (p. 1724)

• Feature: Native backup and restore for Amazon RDS for Microsoft SQL Server
  Supported: Yes
  More information: Importing and exporting SQL Server databases using native backup and restore (p. 1419)

Supported DB instance classes for Amazon RDS on AWS Outposts

Amazon RDS on AWS Outposts supports the following DB instance classes:


• General purpose DB instance classes


• db.m5.24xlarge
• db.m5.12xlarge
• db.m5.4xlarge
• db.m5.2xlarge
• db.m5.xlarge
• db.m5.large
• Memory optimized DB instance classes
• db.r5.24xlarge
• db.r5.12xlarge
• db.r5.4xlarge
• db.r5.2xlarge
• db.r5.xlarge
• db.r5.large

Depending on how you've configured your Outpost, you might not have all of these classes available. For
example, if you haven't purchased the db.r5 classes for your Outpost, you can't use them with RDS on
Outposts.

Only general purpose SSD storage is supported for RDS on Outposts DB instances. For more information
about DB instance classes, see DB instance classes (p. 11).

Amazon RDS manages maintenance and recovery for your DB instances and requires active capacity on
the Outpost to do so. We recommend that you configure N+1 EC2 instances for each DB instance class
in your production environments. RDS on Outposts can use the extra capacity of these EC2 instances for
maintenance and repair operations. For example, if your production environments have 3 db.m5.large
and 5 db.r5.xlarge DB instance classes, then we recommend that they have at least 4 m5.large EC2
instances and 6 r5.xlarge EC2 instances. For more information, see Resilience in AWS Outposts in the
AWS Outposts User Guide.


Customer-owned IP addresses for Amazon RDS on AWS Outposts

Amazon RDS on AWS Outposts uses information that you provide about your on-premises network to
create an address pool. This pool is known as a customer-owned IP address pool (CoIP pool). Customer-
owned IP addresses (CoIPs) provide local or external connectivity to resources in your Outpost subnets
through your on-premises network. For more information about CoIPs, see Customer-owned IP
addresses in the AWS Outposts User Guide.

Each RDS on Outposts DB instance has a private IP address for traffic inside its virtual private cloud
(VPC). This private IP address isn't publicly accessible. You can use the Public option to set whether the
DB instance also has a public IP address in addition to the private IP address. Using the public IP address
for connections routes them through the internet and can result in high latencies in some cases.

Instead of using these private and public IP addresses, RDS on Outposts supports using CoIPs for DB
instances through their subnets. When you use a CoIP for an RDS on Outposts DB instance, you connect
to the DB instance with the DB instance endpoint. RDS on Outposts then automatically uses the CoIP for
all connections from both inside and outside of the VPC.

CoIPs can provide the following benefits for RDS on Outposts DB instances:

• Lower connection latency


• Enhanced security

Using CoIPs
You can turn CoIPs on or off for an RDS on Outposts DB instance using the AWS Management Console,
the AWS CLI, or the RDS API:

• With the AWS Management Console, choose the Customer-owned IP address (CoIP) setting in Access
type to use CoIPs. Choose one of the other settings to turn them off.


• With the AWS CLI, use the --enable-customer-owned-ip | --no-enable-customer-owned-ip option.
• With the RDS API, use the EnableCustomerOwnedIp parameter.
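
For example, the following AWS CLI command is a sketch of turning on CoIPs for an existing DB instance.
The identifier myoutpostdbinstance is a placeholder; use --no-enable-customer-owned-ip instead to turn
CoIPs off (on Windows, use ^ instead of \ for line continuation).

aws rds modify-db-instance \
    --db-instance-identifier myoutpostdbinstance \
    --enable-customer-owned-ip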

You can turn CoIPs on or off when you perform any of the following actions:

• Create a DB instance

For more information, see Creating DB instances for Amazon RDS on AWS Outposts (p. 1189).
• Modify a DB instance

For more information, see Modifying an Amazon RDS DB instance (p. 401).
• Create a read replica

For more information, see Creating read replicas for Amazon RDS on AWS Outposts (p. 1196).
• Restore a DB instance from a snapshot

For more information, see Restoring from a DB snapshot (p. 615).


• Restore a DB instance to a specified time

For more information, see Restoring a DB instance to a specified time (p. 660).

Note
In some cases, you might turn on CoIPs for a DB instance but Amazon RDS isn't able to allocate
a CoIP for the DB instance. In such cases, the DB instance status is changed to incompatible-
network. For more information about the DB instance status, see Viewing Amazon RDS DB
instance status (p. 684).

Limitations
The following limitations apply to CoIP support for RDS on Outposts DB instances:

• When using a CoIP for a DB instance, make sure that public accessibility is turned off for that DB
instance.
• Make sure that the inbound rules for your VPC security groups include the CoIP address range (CIDR
block); a CLI sketch follows this list. For more information about setting up security groups, see Provide
access to your DB instance in your VPC by creating a security group (p. 177).
• You can't assign a CoIP from a CoIP pool to a DB instance. When you use a CoIP for a DB instance,
Amazon RDS automatically assigns a CoIP from a CoIP pool to the DB instance.
• You must use the AWS account that owns the Outpost resources (owner) or share the following
resources with other AWS accounts (consumers) in the same organization:
• The Outpost
• The local gateway (LGW) route table for the DB instance's VPC
• The CoIP pool or pools for the LGW route table

For more information, see Working with shared AWS Outposts resources in the AWS Outposts User
Guide.
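
As referenced above, the following AWS CLI command is a minimal sketch of adding an inbound rule that
covers a CoIP address range. The security group ID, port, and CIDR block are placeholders; use your
database engine's port and your actual CoIP pool CIDR block.

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 5432 \
    --cidr 10.0.0.0/26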


Working with Multi-AZ deployments for Amazon RDS on AWS Outposts

For Multi-AZ deployments, Amazon RDS creates a primary DB instance on one AWS Outpost. RDS
synchronously replicates the data to a standby DB instance on a different Outpost.

Multi-AZ deployments on AWS Outposts operate like Multi-AZ deployments in AWS Regions, but with
the following differences:

• They require a local connection between two or more Outposts.


• They require customer-owned IP (CoIP) pools. For more information, see Customer-owned IP addresses
for Amazon RDS on AWS Outposts (p. 1184).
• Replication runs on your local network.

Multi-AZ on AWS Outposts is available for all supported versions of MySQL and PostgreSQL on RDS on
Outposts. Local backups aren't supported for Multi-AZ deployments. For more information, see Creating
DB instances for Amazon RDS on AWS Outposts (p. 1189).

Working with the shared responsibility model


Although AWS uses commercially reasonable efforts to provide DB instances configured for high
availability, availability follows a shared responsibility model. The ability of RDS on Outposts to fail over
and repair DB instances requires each of your Outposts to be connected to its AWS Region.

RDS on Outposts also requires connectivity between the Outpost that is hosting the primary DB instance
and the Outpost that is hosting the standby DB instance for synchronous replication. Any impact to this
connection can prevent RDS on Outposts from performing a failover.

You might see elevated latencies for a standard DB instance deployment as a result of the synchronous
data replication. The bandwidth and latency of the connection between the Outpost hosting the
primary DB instance and the Outpost hosting the standby DB instance directly affect latencies. For more
information, see Prerequisites (p. 1187).

Improving availability
We recommend the following actions to improve availability:

• Allocate enough additional capacity for your mission-critical applications to allow recovery and failover
if there is an underlying host issue. This applies to all Outposts that contain subnets in your DB subnet
group. For more information, see Resilience in AWS Outposts.
• Provide redundant network connectivity for your Outposts.
• Use more than two Outposts. Having more than two Outposts allows Amazon RDS to recover a DB
instance. RDS does this recovery by moving the DB instance to another Outpost if the current Outpost
experiences a failure.
• Provide dual power sources and redundant network connectivity for your Outpost.

We recommend the following for your local networks:

• The round trip time (RTT) latency between the Outpost hosting your primary DB instance and the
Outpost hosting your standby DB instance directly affects write latency. Keep the RTT latency between
the AWS Outposts in the low single-digit milliseconds. We recommend not more than 5 milliseconds,
but your requirements might vary.


You can find the net impact to network latency in the Amazon CloudWatch metrics for WriteLatency;
a CLI sketch for retrieving this metric follows this list. For more information, see Amazon CloudWatch
metrics for Amazon RDS (p. 806).
• The availability of the connection between the Outposts affects the overall availability of your DB
instances. Have redundant network connectivity between the Outposts.
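
As referenced in the list above, the following AWS CLI command is a sketch of retrieving the WriteLatency
metric for a DB instance over a one-hour window. The DB instance identifier and timestamps are
placeholders (on Windows, use ^ instead of \ for line continuation).

aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name WriteLatency \
    --dimensions Name=DBInstanceIdentifier,Value=myoutpostdbinstance \
    --start-time 2023-06-01T00:00:00Z \
    --end-time 2023-06-01T01:00:00Z \
    --period 60 \
    --statistics Average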

Prerequisites
Multi-AZ deployments on RDS on Outposts have the following prerequisites:

• Have at least two Outposts, connected over local connections and attached to different Availability
Zones in an AWS Region.
• Make sure that your DB subnet groups contain the following:
• At least two subnets in at least two Availability Zones in a given AWS Region.
• Subnets only in Outposts.
• At least two subnets in at least two Outposts within the same virtual private cloud (VPC).
• Associate your DB instance's VPC with all of your local gateway route tables. This association is
necessary because replication runs over your local network using your Outposts' local gateways.

For example, suppose that your VPC contains subnet-A in Outpost-A and subnet-B in Outpost-B.
Outpost-A uses LocalGateway-A (LGW-A), and Outpost-B uses LocalGateway-B (LGW-B). LGW-A has
RouteTable-A, and LGW-B has RouteTable-B. You want to use both RouteTable-A and RouteTable-B for
replication traffic. To do this, associate your VPC with both RouteTable-A and RouteTable-B.

For more information about how to create an association, see the Amazon EC2 create-local-gateway-
route-table-vpc-association AWS CLI command. A sketch of this command appears after this list.
• Make sure that your Outposts use customer-owned IP (CoIP) routing. Each route table must also have at
least one address pool. Amazon RDS allocates an additional IP address for each of the primary and standby
DB instances for data synchronization.
• Make sure that the AWS account that owns the RDS DB instances owns the local gateway route tables
and CoIP pools. Or make sure it's part of a Resource Access Manager share with access to the local
gateway route tables and CoIP pools.
• Make sure that the IP addresses in your CoIP pools can be routed from one Outpost local gateway to
the others.
• Make sure that the VPC's CIDR blocks (for example, 10.0.0.0/16) and your CoIP pool CIDR blocks don't
contain IP addresses from Class E (240.0.0.0/4). RDS uses these IP addresses internally.
• Make sure that you correctly set up outbound and related inbound traffic.

RDS on Outposts establishes a virtual private network (VPN) connection between the primary and
standby DB instances. For this to work correctly, your local network must allow outbound and related
inbound traffic for Internet Security Association and Key Management Protocol (ISAKMP). It does so
using User Datagram Protocol (UDP) port 500 and IP Security (IPsec) Network Address Translation
Traversal (NAT-T) using UDP port 4500.

For more information on CoIPs, see Customer-owned IP addresses for Amazon RDS on AWS
Outposts (p. 1184) in this guide, and Customer-owned IP addresses in the AWS Outposts User Guide.
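
As noted in the prerequisites, the following AWS CLI command is a sketch of associating a VPC with a local
gateway route table. The route table and VPC IDs are placeholders (on Windows, use ^ instead of \ for line
continuation).

aws ec2 create-local-gateway-route-table-vpc-association \
    --local-gateway-route-table-id lgw-rtb-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0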


Working with API operations for Amazon EC2 permissions

Regardless of whether you use CoIPs for your DB instance on AWS Outposts, RDS requires access to your
CoIP pool resources. RDS can call the following EC2 permissions API operations for CoIPs on your behalf
for Multi-AZ deployments:

• CreateCoipPoolPermission – When you create a Multi-AZ DB instance on RDS on Outposts


• DeleteCoipPoolPermission – When you delete a Multi-AZ DB instance on RDS on Outposts

These API operations grant to, or remove from, internal RDS accounts the permission to allocate elastic
IP addresses from the CoIP pool specified by the permission. You can view these IP addresses using the
DescribeCoipPoolUsage API operation. For more information on CoIPs, see Customer-owned IP
addresses for Amazon RDS on AWS Outposts (p. 1184) and Customer-owned IP addresses in the AWS
Outposts User Guide.

RDS can also call the following EC2 permission API operations for local gateway route tables on your
behalf for Multi-AZ deployments:

• CreateLocalGatewayRouteTablePermission – When you create a Multi-AZ DB instance on RDS


on Outposts
• DeleteLocalGatewayRouteTablePermission – When you delete a Multi-AZ DB instance on RDS
on Outposts

These API operations grant to, or remove from, internal RDS accounts the permission to associate
internal RDS VPCs with your local gateway route tables. You can view these route table–VPC associations
using the DescribeLocalGatewayRouteTableVpcAssociations API operations.
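
The following AWS CLI commands are sketches of the corresponding describe operations. The CoIP pool ID is
a placeholder; replace it with the ID of your own pool.

aws ec2 describe-coip-pool-usage \
    --pool-id ipv4pool-coip-0123456789abcdef0

aws ec2 describe-local-gateway-route-table-vpc-associations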


Creating DB instances for Amazon RDS on AWS Outposts

Creating an Amazon RDS on AWS Outposts DB instance is similar to creating an Amazon RDS DB instance
in the AWS Cloud. However, make sure that you specify a DB subnet group that is associated with your
Outpost.

A virtual private cloud (VPC) based on the Amazon VPC service can span all of the Availability Zones
in an AWS Region. You can extend any VPC in the AWS Region to your Outpost by adding an Outpost
subnet. To add an Outpost subnet to a VPC, specify the Amazon Resource Name (ARN) of the Outpost
when you create the subnet.

Before you create an RDS on Outposts DB instance, you can create a DB subnet group that includes one
subnet that is associated with your Outpost. When you create an RDS on Outposts DB instance, specify
this DB subnet group. You can also choose to create a new DB subnet group when you create your DB
instance.

For information about configuring AWS Outposts, see the AWS Outposts User Guide.

Console
Creating a DB subnet group
Create a DB subnet group with one subnet that is associated with your Outpost.

You can also create a new DB subnet group for the Outpost when you create your DB instance. If you
want to do so, then skip this procedure.
Note
To create a DB subnet group for the AWS Cloud, specify at least two subnets.

To create a DB subnet group for your Outpost

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region where you want to
create the DB subnet group.
3. Choose Subnet groups, and then choose Create DB Subnet Group.

The Create DB subnet group page appears.


4. For Name, enter a name for the DB subnet group.
5. For Description, enter a description for the DB subnet group.
6. For VPC, choose the VPC that you're creating the DB subnet group for.
7. For Availability Zones, choose the Availability Zone for your Outpost.
8. For Subnets, choose the subnet for use by RDS on Outposts.
9. Choose Create to create the DB subnet group.

Creating the RDS on Outposts DB instance


Create the DB instance, and choose the Outpost for your DB instance.


To create an RDS on Outposts DB instance using the console

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region where the Outpost on
which you want to create the DB instance is attached.
3. In the navigation pane, choose Databases.
4. Choose Create database.

The AWS Management Console detects available Outposts that you have configured and presents
the On-premises option in the Database location section.
Note
If you haven't configured any Outposts, either the Database location section doesn't appear
or the RDS on Outposts option isn't available in the Choose an on-premises creation
method section.
5. For Database location, choose On-premises.
6. For On-premises creation method, choose RDS on Outposts.
7. Specify your settings for Outposts Connectivity. These settings are for the Outpost that uses the
VPC that has the DB subnet group for your DB instance. Your VPC must be based on the Amazon
VPC service.

a. For Virtual Private Cloud (VPC), choose the VPC that contains the DB subnet group for your DB
instance.
b. For VPC security group, choose the Amazon VPC security group for your DB instance.
c. For DB subnet group, choose the DB subnet group for your DB instance.

You can choose an existing DB subnet group that's associated with the Outpost—for example, if
you performed the procedure in Creating a DB subnet group (p. 1189).

You can also create a new DB subnet group for the Outpost.
8. For Multi-AZ deployment, choose Create a standby instance (recommended for production usage)
to create a standby DB instance in another Outpost.
Note
This option isn't available for Microsoft SQL Server.
If you choose to create a Multi-AZ deployment, you can't store backups on your Outpost.
9. Under Backup, do the following:

a. For Backup target, choose one of the following:

• AWS Cloud to store automated backups and manual snapshots in the parent AWS Region.
• Outposts (on-premises) to create local backups.
Note
To store backups on your Outpost, your Outpost must have Amazon S3 capability.
For more information, see Amazon S3 on Outposts.
Local backups aren't supported for Multi-AZ deployments or read replicas.
b. Choose Enable automated backups to create point-in-time snapshots of your DB instance.

If you turn on automated backups, then you can choose values for Backup retention period and
Backup window, or leave the default values.
10. Specify other DB instance settings as needed.

For information about each setting when creating a DB instance, see Settings for DB
instances (p. 308).


11. Choose Create database.

The Databases page appears. A banner tells you that your DB instance is being created, and displays
the View credential details button.

Viewing DB instance details


After you create your DB instance, you can view credentials and other details for it.

To view DB instance details

1. To view the master user name and password for the DB instance, choose View credential details on
the Databases page.

You can connect to the DB instance as the master user by using these credentials.
Important
You can't view the master user password again. If you don't record it, you might have to
change it. To change the master user password after the DB instance is available, modify
the DB instance. For more information about modifying a DB instance, see Modifying an
Amazon RDS DB instance (p. 401).
2. Choose the name of the new DB instance on the Databases page.

On the RDS console, the details for the new DB instance appear. The DB instance has a status of
Creating until the DB instance is created and ready for use. When the state changes to Available,
you can connect to the DB instance. Depending on the DB instance class and storage allocated, it can
take several minutes for the new DB instance to be available.

After the DB instance is available, you can manage it the same way that you manage RDS DB
instances in the AWS Cloud.

AWS CLI
Before you create a new DB instance in an Outpost with the AWS CLI, first create a DB subnet group for
use by RDS on Outposts.


To create a DB subnet group for your Outpost

• Use the create-db-subnet-group command. For --subnet-ids, specify the subnet group in the
Outpost for use by RDS on Outposts.

For Linux, macOS, or Unix:

aws rds create-db-subnet-group \


--db-subnet-group-name myoutpostdbsubnetgr \
--db-subnet-group-description "DB subnet group for RDS on Outposts" \
--subnet-ids subnet-abc123

For Windows:

aws rds create-db-subnet-group ^


--db-subnet-group-name myoutpostdbsubnetgr ^
--db-subnet-group-description "DB subnet group for RDS on Outposts" ^
--subnet-ids subnet-abc123

To create an RDS on Outposts DB instance using the AWS CLI

• Use the create-db-instance command. Specify an Availability Zone for the Outpost, an Amazon VPC
security group associated with the Outpost, and the DB subnet group you created for the Outpost.
You can include the following options:

• --db-instance-identifier
• --db-instance-class
• --engine – The database engine. Use one of the following values:
• MySQL – Specify mysql.
• PostgreSQL – Specify postgres.
• Microsoft SQL Server – Specify sqlserver-ee, sqlserver-se, or sqlserver-web.
• --availability-zone
• --vpc-security-group-ids
• --db-subnet-group-name
• --allocated-storage
• --max-allocated-storage
• --master-username
• --master-user-password
• --multi-az | --no-multi-az – (Optional) Whether to create a standby DB instance in a
different Availability Zone. The default is --no-multi-az.

The --multi-az option isn't available for SQL Server.


• --backup-retention-period
• --backup-target – (Optional) Where to store automated backups and manual snapshots. Use
one of the following values:
• outposts – Store them locally on your Outpost.
• region – Store them in the parent AWS Region. This is the default value.

If you use the --multi-az option, you can't use outposts for --backup-target. In addition,
the DB instance can't have read replicas if you use outposts for --backup-target.
• --storage-encrypted

• --kms-key-id

Example

The following example creates a MySQL DB instance named myoutpostdbinstance with backups
stored on your Outpost.

For Linux, macOS, or Unix:

aws rds create-db-instance \
    --db-instance-identifier myoutpostdbinstance \
    --engine-version 8.0.17 \
    --db-instance-class db.m5.large \
    --engine mysql \
    --availability-zone us-east-1d \
    --vpc-security-group-ids outpost-sg \
    --db-subnet-group-name myoutpostdbsubnetgr \
    --allocated-storage 100 \
    --max-allocated-storage 1000 \
    --master-username masterawsuser \
    --manage-master-user-password \
    --backup-retention-period 3 \
    --backup-target outposts \
    --storage-encrypted \
    --kms-key-id mykey

For Windows:

aws rds create-db-instance ^
    --db-instance-identifier myoutpostdbinstance ^
    --engine-version 8.0.17 ^
    --db-instance-class db.m5.large ^
    --engine mysql ^
    --availability-zone us-east-1d ^
    --vpc-security-group-ids outpost-sg ^
    --db-subnet-group-name myoutpostdbsubnetgr ^
    --allocated-storage 100 ^
    --max-allocated-storage 1000 ^
    --master-username masterawsuser ^
    --manage-master-user-password ^
    --backup-retention-period 3 ^
    --backup-target outposts ^
    --storage-encrypted ^
    --kms-key-id mykey

For information about each setting when creating a DB instance, see Settings for DB instances (p. 308).

RDS API
To create a new DB instance in an Outpost with the RDS API, first create a DB subnet group for use by
RDS on Outposts by calling the CreateDBSubnetGroup operation. For SubnetIds, specify the subnet
group in the Outpost for use by RDS on Outposts.

Next, call the CreateDBInstance operation with the following parameters. Specify an Availability Zone for
the Outpost, an Amazon VPC security group associated with the Outpost, and the DB subnet group you
created for the Outpost.

• AllocatedStorage
• AvailabilityZone
• BackupRetentionPeriod
• BackupTarget

If you are creating a Multi-AZ DB instance deployment, you can't use outposts for BackupTarget. In
addition, the DB instance can't have read replicas if you use outposts for BackupTarget.
• DBInstanceClass
• DBInstanceIdentifier
• VpcSecurityGroupIds
• DBSubnetGroupName
• Engine
• EngineVersion
• MasterUsername
• MasterUserPassword
• MaxAllocatedStorage (optional)
• MultiAZ (optional)
• StorageEncrypted
• KmsKeyID

For information about each setting when creating a DB instance, see Settings for DB instances (p. 308).


Creating read replicas for Amazon RDS on AWS Outposts
Amazon RDS on AWS Outposts uses the MySQL and PostgreSQL DB engines' built-in replication
functionality to create a read replica from a source DB instance. The source DB instance becomes the
primary DB instance. Updates made to the primary DB instance are asynchronously copied to the
read replica. You can reduce the load on your primary DB instance by routing read queries from your
applications to the read replica. Using read replicas, you can elastically scale out beyond the capacity
constraints of a single DB instance for read-heavy database workloads.

When you create a read replica from an RDS on Outposts DB instance, the read replica uses a customer-
owned IP address (CoIP). For more information, see Customer-owned IP addresses for Amazon RDS on
AWS Outposts (p. 1184).

Read replicas on RDS on Outposts have the following limitations:

• You can't create read replicas for RDS for SQL Server on RDS on Outposts DB instances.
• Cross-Region read replicas aren't supported on RDS on Outposts.
• Cascading read replicas aren't supported on RDS on Outposts.
• The source RDS on Outposts DB instance can't have local backups. The backup target for the source DB
instance must be your AWS Region.
• Read replicas require customer-owned IP (CoIP) pools. For more information, see Customer-owned IP
addresses for Amazon RDS on AWS Outposts (p. 1184).

You can create a read replica from an RDS on Outposts DB instance using the AWS Management
Console, AWS CLI, or RDS API. For more information on read replicas, see Working with DB instance read
replicas (p. 438).

Console
To create a read replica from a source DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to use as the source for a read replica.
4. For Actions, choose Create read replica.
5. For DB instance identifier, enter a name for the read replica.
6. Specify your settings for Outposts Connectivity. These settings are for the Outpost that uses the
virtual private cloud (VPC) that has the DB subnet group for your DB instance. Your VPC must be
based on the Amazon VPC service.
7. Choose your DB instance class. For the read replica, we recommend that you use the same or a larger
DB instance class, and the same storage type, as the source DB instance.
8. For Multi-AZ deployment, choose Create a standby instance (recommended for production usage)
to create a standby DB instance in a different Availability Zone.

Creating your read replica as a Multi-AZ DB instance is independent of whether the source database
is a Multi-AZ DB instance.
9. (Optional) Under Connectivity, set values for Subnet Group and Availability Zone.

If you specify values for both Subnet Group and Availability Zone, the read replica is created on an
Outpost that is associated with the Availability Zone in the DB subnet group.


If you specify a value for Subnet Group and No preference for Availability Zone, the read replica is
created on a random Outpost in the DB subnet group.
10. For AWS KMS key, choose the AWS KMS key identifier of the KMS key.

The read replica must be encrypted.


11. Choose other options as needed.
12. Choose Create read replica.

After the read replica is created, you can see it on the Databases page in the RDS console. It shows
Replica in the Role column.

AWS CLI
To create a read replica from a source MySQL or PostgreSQL DB instance, use the AWS CLI command
create-db-instance-read-replica.

You can control where the read replica is created by specifying the --db-subnet-group-name and --
availability-zone options:

• If you specify both the --db-subnet-group-name and --availability-zone options, the read
replica is created on an Outpost that is associated with the Availability Zone in the DB subnet group.
• If you specify the --db-subnet-group-name option and don't specify the --availability-zone
option, the read replica is created on a random Outpost in the DB subnet group.
• If you don't specify either option, the read replica is created on the same Outpost as the source RDS on
Outposts DB instance.

The following example creates a replica and specifies the location of the read replica by including --db-
subnet-group-name and --availability-zone options.

Example
For Linux, macOS, or Unix:

aws rds create-db-instance-read-replica \
    --db-instance-identifier myreadreplica \
    --source-db-instance-identifier mydbinstance \
    --db-subnet-group-name myoutpostdbsubnetgr \
    --availability-zone us-west-2a

For Windows:

aws rds create-db-instance-read-replica ^
    --db-instance-identifier myreadreplica ^
    --source-db-instance-identifier mydbinstance ^
    --db-subnet-group-name myoutpostdbsubnetgr ^
    --availability-zone us-west-2a

RDS API
To create a read replica from a source MySQL or PostgreSQL DB instance, call the Amazon RDS API
CreateDBInstanceReadReplica operation with the following required parameters:

• DBInstanceIdentifier
• SourceDBInstanceIdentifier


You can control where the read replica is created by specifying the DBSubnetGroupName and
AvailabilityZone parameters:

• If you specify both the DBSubnetGroupName and AvailabilityZone parameters, the read replica is
created on an Outpost that is associated with the Availability Zone in the DB subnet group.
• If you specify the DBSubnetGroupName parameter and don't specify the AvailabilityZone
parameter, the read replica is created on a random Outpost in the DB subnet group.
• If you don't specify either parameter, the read replica is created on the same Outpost as the source
RDS on Outposts DB instance.

Considerations for restoring DB instances on Amazon RDS on AWS Outposts
When you restore a DB instance in Amazon RDS on AWS Outposts, you can generally choose the storage
location for automated backups and manual snapshots of the restored DB instance.

• When restoring from a manual DB snapshot, you can store backups either in the parent AWS Region or
locally on your Outpost.
• When restoring from an automated backup (point-in-time recovery), you have fewer choices:
• If restoring from the parent AWS Region, you can store backups either in the AWS Region or on your
Outpost.
• If restoring from your Outpost, you can store backups only on your Outpost.
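
For example, restoring from a manual DB snapshot and keeping backups of the restored DB instance on
the Outpost might look like the following sketch (Linux format). The identifiers are placeholders, and the
use of the --backup-target option with restore-db-instance-from-db-snapshot is an assumption; confirm
that your AWS CLI version supports it before relying on this command.

# Restore from a manual snapshot and store backups of the new instance locally on the Outpost
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier myrestoreddbinstance \
    --db-snapshot-identifier myoutpostdbsnapshot \
    --db-subnet-group-name myoutpostdbsubnetgr \
    --backup-target outposts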


Using Amazon RDS Proxy


By using Amazon RDS Proxy, you can allow your applications to pool and share database connections
to improve their ability to scale. RDS Proxy makes applications more resilient to database failures
by automatically connecting to a standby DB instance while preserving application connections. By
using RDS Proxy, you can also enforce AWS Identity and Access Management (IAM) authentication for
databases, and securely store credentials in AWS Secrets Manager.

Using RDS Proxy, you can handle unpredictable surges in database traffic. Otherwise, these surges might
cause issues due to oversubscribing connections or creating new connections at a fast rate. RDS Proxy
establishes a database connection pool and reuses connections in this pool. This approach avoids the
memory and CPU overhead of opening a new database connection each time. To protect the database
against oversubscription, you can control the number of database connections that are created.

RDS Proxy queues or throttles application connections that can't be served immediately from the pool
of connections. Although latencies might increase, your application can continue to scale without
abruptly failing or overwhelming the database. If connection requests exceed the limits you specify, RDS
Proxy rejects application connections (that is, it sheds load). At the same time, it maintains predictable
performance for the load that can be served with the available capacity.

You can reduce the overhead to process credentials and establish a secure connection for each new
connection. RDS Proxy can handle some of that work on behalf of the database.

RDS Proxy is fully compatible with the engine versions that it supports. You can enable RDS Proxy for
most applications with no code changes.

Topics
• Region and version availability (p. 1199)
• Quotas and limitations for RDS Proxy (p. 1199)
• Planning where to use RDS Proxy (p. 1202)
• RDS Proxy concepts and terminology (p. 1203)
• Getting started with RDS Proxy (p. 1207)
• Managing an RDS Proxy (p. 1220)
• Working with Amazon RDS Proxy endpoints (p. 1232)
• Monitoring RDS Proxy metrics with Amazon CloudWatch (p. 1239)
• Working with RDS Proxy events (p. 1244)
• RDS Proxy command-line examples (p. 1245)
• Troubleshooting for RDS Proxy (p. 1247)
• Using RDS Proxy with AWS CloudFormation (p. 1253)

Region and version availability


Feature availability and support varies across specific versions of each database engine, and across AWS
Regions. For more information on version and Region availability of Amazon RDS with RDS Proxy, see
Amazon RDS Proxy (p. 155).

Quotas and limitations for RDS Proxy


The following quotas and limitations apply to RDS Proxy:


• You can have up to 20 proxies for each AWS account ID. If your application requires more proxies, you
can request additional proxies by opening a ticket with the AWS Support organization.
• Each proxy can have up to 200 associated Secrets Manager secrets. Thus, each proxy can connect
with up to 200 different user accounts at any given time.
• You can create, view, modify, and delete up to 20 endpoints for each proxy. These endpoints are in
addition to the default endpoint that's automatically created for each proxy.
• For RDS DB instances in replication configurations, you can associate a proxy only with the writer DB
instance, not a read replica.
• Your RDS Proxy must be in the same virtual private cloud (VPC) as the database. The proxy can't be
publicly accessible, although the database can be. For example, if you're prototyping on a local host,
you can't connect to your RDS Proxy unless you set up dedicated networking. This is the case because
your local host is outside of the proxy's VPC.
• You can't use RDS Proxy with a VPC that has its tenancy set to dedicated.
• If you use RDS Proxy with an RDS DB instance that has IAM authentication enabled, check user
authentication. Users who connect through a proxy must authenticate through sign-in credentials.
For details about Secrets Manager and IAM support in RDS Proxy, see Setting up database credentials
in AWS Secrets Manager (p. 1209) and Setting up AWS Identity and Access Management (IAM)
policies (p. 1210).
• You can't use RDS Proxy with custom DNS when using SSL hostname validation.
• Each proxy can be associated with a single target DB instance. However, you can associate multiple
proxies with the same DB instance.
• Any statement with a text size greater than 16 KB causes the proxy to pin the session to the current
connection.

For additional limitations for each DB engine, see the following sections:

• Additional limitations for RDS for MariaDB (p. 1200)


• Additional limitations for RDS for Microsoft SQL Server (p. 1201)
• Additional limitations for RDS for MySQL (p. 1201)
• Additional limitations for RDS for PostgreSQL (p. 1201)

Additional limitations for RDS for MariaDB


The following additional limitations apply to RDS Proxy with RDS for MariaDB databases:

• Currently, all proxies listen on port 3306 for MariaDB. The proxies still connect to your database using
the port that you specified in the database settings.
• You can't use RDS Proxy with self-managed MariaDB databases in Amazon EC2 instances.
• You can't use RDS Proxy with an RDS for MariaDB DB instance that has the read_only parameter in
its DB parameter group set to 1.
• RDS Proxy doesn't support compressed mode. For example, it doesn't support the compression used
by the --compress or -C options of the mysql command.
• Some SQL statements and functions can change the connection state without causing pinning. For the
most current pinning behavior, see Avoiding pinning (p. 1228).
• RDS Proxy doesn't support the MariaDB auth_ed25519 plugin.
• RDS Proxy doesn't support Transport Layer Security (TLS) version 1.3 for MariaDB databases.
• Database connections processing a GET DIAGNOSTIC command might return inaccurate information
when RDS Proxy reuses the same database connection to run another query. This can happen when
RDS Proxy multiplexes database connections.


Important
For proxies associated with MariaDB databases, don't set the configuration parameter
sql_auto_is_null to true or a nonzero value in the initialization query. Doing so might
cause incorrect application behavior.

Additional limitations for RDS for Microsoft SQL Server
The following additional limitations apply to RDS Proxy with RDS for Microsoft SQL Server databases:

• The number of Secrets Manager secrets that you need to create for a proxy depends on the collation
that your DB instance uses. For example, suppose that your DB instance uses case-sensitive collation.
If your application accepts both "Admin" and "admin," then your proxy needs two separate secrets. For
more information about collation in SQL Server, see the Microsoft SQL Server documentation.
• RDS Proxy doesn't support connections that use Active Directory.
• You can't use IAM authentication with clients that don't support token properties. For more
information, see Considerations for connecting to a proxy with Microsoft SQL Server (p. 1219).
• The results of @@IDENTITY, @@ROWCOUNT, and SCOPE_IDENTITY aren't always accurate. As a work-
around, retrieve their values in the same session statement to ensure that they return the correct
information.
• If the connection uses multiple active result sets (MARS), RDS Proxy doesn't run the initialization
queries. For information about MARS, see the Microsoft SQL Server documentation.

Additional limitations for RDS for MySQL


The following additional limitations apply to RDS Proxy with RDS for MySQL databases:

• RDS Proxy doesn't support the MySQL sha256_password and caching_sha2_password


authentication plugins. These plugins implement SHA-256 hashing for user account passwords.
• Currently, all proxies listen on port 3306 for MySQL. The proxies still connect to your database using
the port that you specified in the database settings.
• You can't use RDS Proxy with self-managed MySQL databases in EC2 instances.
• You can't use RDS Proxy with an RDS for MySQL DB instance that has the read_only parameter in its
DB parameter group set to 1.
• RDS Proxy doesn't support MySQL compressed mode. For example, it doesn't support the compression
used by the --compress or -C options of the mysql command.
• Database connections processing a GET DIAGNOSTIC command might return inaccurate information
when RDS Proxy reuses the same database connection to run another query. This can happen when
RDS Proxy multiplexes database connections.
• Some SQL statements and functions, such as SET LOCAL, can change the connection state without
causing pinning. For the most current pinning behavior, see Avoiding pinning (p. 1228).

Important
For proxies associated with MySQL databases, don't set the configuration parameter
sql_auto_is_null to true or a nonzero value in the initialization query. Doing so might
cause incorrect application behavior.

Additional limitations for RDS for PostgreSQL


The following additional limitations apply to RDS Proxy with RDS for PostgreSQL databases:


• RDS Proxy doesn't support session pinning filters for PostgreSQL.


• Currently, all proxies listen on port 5432 for PostgreSQL.
• For PostgreSQL, RDS Proxy doesn't currently support canceling a query from a client by issuing a
CancelRequest. This is the case, for example, when you cancel a long-running query in an interactive
psql session by using Ctrl+C.
• The results of the PostgreSQL function lastval aren't always accurate. As a work-around, use the
INSERT statement with the RETURNING clause.
• RDS Proxy doesn't multiplex connections when your client application drivers use the PostgreSQL
extended query protocol.
• RDS Proxy currently doesn't support streaming replication mode.

Important
For existing proxies with PostgreSQL databases, if you modify the database authentication to
use SCRAM only, the proxy becomes unavailable for up to 60 seconds. To avoid the issue, do one
of the following:

• Ensure that the database allows both SCRAM and MD5 authentication.
• To use only SCRAM authentication, create a new proxy, migrate your application traffic to the
new proxy, then delete the proxy previously associated with the database.

Planning where to use RDS Proxy


You can determine which of your DB instances, clusters, and applications might benefit the most from
using RDS Proxy. To do so, consider these factors:

• Any DB instance that encounters "too many connections" errors is a good candidate for associating
with a proxy. This is often characterized by a high value of the ConnectionAttempts CloudWatch
metric. The proxy enables applications to open many client connections, while the proxy manages a
smaller number of long-lived connections to the DB instance.
• For DB instances that use smaller AWS instance classes, such as T2 or T3, using a proxy can help avoid
out-of-memory conditions. It can also help reduce the CPU overhead for establishing connections.
These conditions can occur when dealing with large numbers of connections.
• You can monitor certain Amazon CloudWatch metrics to determine whether a DB instance is
approaching certain types of limit. These limits are for the number of connections and the memory
associated with connection management. You can also monitor certain CloudWatch metrics to
determine whether a DB instance is handling many short-lived connections. Opening and closing such
connections can impose performance overhead on your database. For information about the metrics to
monitor, see Monitoring RDS Proxy metrics with Amazon CloudWatch (p. 1239).
• AWS Lambda functions can also be good candidates for using a proxy. These functions make frequent
short database connections that benefit from connection pooling offered by RDS Proxy. You can take
advantage of any IAM authentication you already have for Lambda functions, instead of managing
database credentials in your Lambda application code.
• Applications that typically open and close large numbers of database connections and don't have
built-in connection pooling mechanisms are good candidates for using a proxy.
• Applications that keep a large number of connections open for long periods are typically good
candidates for using a proxy. Applications in industries such as software as a service (SaaS) or
ecommerce often minimize the latency for database requests by leaving connections open. With RDS
Proxy, an application can keep more connections open than it can when connecting directly to the DB
instance.
• You might not have adopted IAM authentication and Secrets Manager due to the complexity of setting
up such authentication for all DB instances. If so, you can leave the existing authentication methods
in place and delegate the authentication to a proxy. The proxy can enforce the authentication policies
for client connections for particular applications. You can take advantage of any IAM authentication
you already have for Lambda functions, instead of managing database credentials in your Lambda
application code.
• RDS Proxy can help make applications more resilient and transparent to database failures. RDS Proxy
bypasses Domain Name System (DNS) caches to reduce failover times by up to 66% for Amazon RDS
Multi-AZ databases. RDS Proxy also automatically routes traffic to a new database instance while
preserving application connections. This makes failovers more transparent for applications.

RDS Proxy concepts and terminology


You can simplify connection management for your Amazon RDS DB instances and Amazon Aurora DB
clusters by using RDS Proxy.

RDS Proxy handles the network traffic between the client application and the database. It does so in an
active way first by understanding the database protocol. It then adjusts its behavior based on the SQL
operations from your application and the result sets from the database.

RDS Proxy reduces the memory and CPU overhead for connection management on your database.
The database needs less memory and CPU resources when applications open many simultaneous
connections. It also doesn't require logic in your applications to close and reopen connections that stay
idle for a long time. Similarly, it requires less application logic to reestablish connections in case of a
database problem.

The infrastructure for RDS Proxy is highly available and deployed over multiple Availability Zones (AZs).
The computation, memory, and storage for RDS Proxy are independent of your RDS DB instances and
Aurora DB clusters. This separation helps lower overhead on your database servers, so that they can
devote their resources to serving database workloads. The RDS Proxy compute resources are serverless,
automatically scaling based on your database workload.

Topics
• Overview of RDS Proxy concepts (p. 1203)
• Connection pooling (p. 1204)
• RDS Proxy security (p. 1204)
• Failover (p. 1206)
• Transactions (p. 1206)

Overview of RDS Proxy concepts


RDS Proxy handles the infrastructure to perform connection pooling and the other features described in
the sections that follow. You see the proxies represented in the RDS console on the Proxies page.

Each proxy handles connections to a single RDS DB instance or Aurora DB cluster. The proxy
automatically determines the current writer instance for RDS Multi-AZ DB instances and Aurora
provisioned clusters.

The connections that a proxy keeps open and available for your database application to use make up the
connection pool.

By default, RDS Proxy can reuse a connection after each transaction in your session. This transaction-
level reuse is called multiplexing. When RDS Proxy temporarily removes a connection from the
connection pool to reuse it, that operation is called borrowing the connection. When it's safe to do so,
RDS Proxy returns that connection to the connection pool.


In some cases, RDS Proxy can't be sure that it's safe to reuse a database connection outside of the current
session. In these cases, it keeps the session on the same connection until the session ends. This fallback
behavior is called pinning.

A proxy has a default endpoint. You connect to this endpoint when you work with an RDS DB instance or
Aurora DB cluster. You do so instead of connecting to the read/write endpoint that connects directly to
the instance or cluster. The special-purpose endpoints for an Aurora cluster remain available for you to
use. For Aurora DB clusters, you can also create additional read/write and read-only endpoints. For more
information, see Overview of proxy endpoints (p. 1233).

For example, you can still connect to the cluster endpoint for read/write connections without connection
pooling. You can still connect to the reader endpoint for load-balanced read-only connections. You can
still connect to the instance endpoints for diagnosis and troubleshooting of specific DB instances within
an Aurora cluster. If you use other AWS services such as AWS Lambda to connect to RDS databases,
change their connection settings to use the proxy endpoint. For example, you specify the proxy endpoint
to allow Lambda functions to access your database while taking advantage of RDS Proxy functionality.

Each proxy contains a target group. This target group embodies the RDS DB instance or Aurora DB cluster
that the proxy can connect to. For an Aurora cluster, by default the target group is associated with all
the DB instances in that cluster. That way, the proxy can connect to whichever Aurora DB instance is
promoted to be the writer instance in the cluster. The RDS DB instance associated with a proxy, or the
Aurora DB cluster and its instances, are called the targets of that proxy. For convenience, when you create
a proxy through the console, RDS Proxy also creates the corresponding target group and registers the
associated targets automatically.

An engine family is a related set of database engines that use the same DB protocol. You choose the
engine family for each proxy that you create.

Connection pooling
Each proxy performs connection pooling for the writer instance of its associated RDS or Aurora database.
Connection pooling is an optimization that reduces the overhead associated with opening and closing
connections and with keeping many connections open simultaneously. This overhead includes memory
needed to handle each new connection. It also involves CPU overhead to close each connection and open
a new one. Examples include Transport Layer Security/Secure Sockets Layer (TLS/SSL) handshaking,
authentication, negotiating capabilities, and so on. Connection pooling simplifies your application logic.
You don't need to write application code to minimize the number of simultaneous open connections.

Each proxy also performs connection multiplexing, also known as connection reuse. With multiplexing,
RDS Proxy performs all the operations for a transaction using one underlying database connection.
RDS Proxy can then use a different connection for the next transaction. You can open many simultaneous
connections to the proxy, and the proxy keeps a smaller number of connections open to the DB instance
or cluster. Doing so further minimizes the memory overhead for connections on the database server. This
technique also reduces the chance of "too many connections" errors.

RDS Proxy security


RDS Proxy uses the existing RDS security mechanisms such as TLS/SSL and AWS Identity and
Access Management (IAM). For general information about those security features, see Security in
Amazon RDS (p. 2565). Also, make sure to familiarize yourself with how RDS and Aurora work with
authentication, authorization, and other areas of security.

RDS Proxy can act as an additional layer of security between client applications and the underlying
database. For example, you can connect to the proxy using TLS 1.2, even if the underlying DB instance
supports an older version of TLS. You can connect to the proxy using an IAM role. This is so even if the
proxy connects to the database using the native user and password authentication method. By using
this technique, you can enforce strong authentication requirements for database applications without a
costly migration effort for the DB instances themselves.


You store the database credentials used by RDS Proxy in AWS Secrets Manager. Each database user
for the RDS DB instance or Aurora DB cluster accessed by a proxy must have a corresponding secret
in Secrets Manager. You can also set up IAM authentication for users of RDS Proxy. By doing so,
you can enforce IAM authentication for database access even if the databases use native password
authentication. We recommend using these security features instead of embedding database credentials
in your application code.

Using TLS/SSL with RDS Proxy


You can connect to RDS Proxy using the TLS/SSL protocol.
Note
RDS Proxy uses certificates from the AWS Certificate Manager (ACM). If you are using RDS Proxy,
you don't need to download Amazon RDS certificates or update applications that use RDS Proxy
connections.

To enforce TLS for all connections between the proxy and your database, you can specify the Require
Transport Layer Security setting when you create or modify a proxy.

RDS Proxy can also ensure that your session uses TLS/SSL between your client and the RDS Proxy
endpoint. To have RDS Proxy do so, specify the requirement on the client side. SSL session variables are
not set for SSL connections to a database using RDS Proxy.

• For RDS for MySQL and Aurora MySQL, specify the requirement on the client side with the --ssl-
mode parameter when you run the mysql command.
• For Amazon RDS PostgreSQL and Aurora PostgreSQL, specify sslmode=require as part of the
conninfo string when you run the psql command.

RDS Proxy supports TLS protocol version 1.0, 1.1, and 1.2. You can connect to the proxy using a higher
version of TLS than you use in the underlying database.

By default, client programs establish an encrypted connection with RDS Proxy, with further control
available through the --ssl-mode option. From the client side, RDS Proxy supports all SSL modes.

For the client, the SSL modes are the following:

PREFERRED

SSL is the first choice, but it isn't required.


DISABLED

No SSL is allowed.
REQUIRED

Enforce SSL.
VERIFY_CA

Enforce SSL and verify the certificate authority (CA).


VERIFY_IDENTITY

Enforce SSL and verify the CA and CA hostname.

When using a client with --ssl-mode VERIFY_CA or VERIFY_IDENTITY, specify the --ssl-ca option
pointing to a CA in .pem format. For the .pem file to use, download all root CA PEMs from Amazon Trust
Services and place them into a single .pem file.

RDS Proxy uses wildcard certificates, which apply to both a domain and its subdomains. If you use the
mysql client to connect with SSL mode VERIFY_IDENTITY, currently you must use the MySQL 8.0-
compatible mysql command.
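
For example, client connections that request TLS might look like the following sketches. The proxy
endpoint, database name, user name, and certificate bundle path shown here are placeholders, not values
from this guide.

# MySQL (8.0-compatible client): enforce TLS and verify the certificate and hostname
mysql -h myproxy.proxy-abcdefghijkl.us-east-2.rds.amazonaws.com -u admin -p \
    --ssl-mode=VERIFY_IDENTITY --ssl-ca=/path/to/root-ca-bundle.pem

# PostgreSQL: require TLS for the session
psql "host=myproxy.proxy-abcdefghijkl.us-east-2.rds.amazonaws.com port=5432 dbname=mydb user=admin sslmode=require"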


Failover
Failover is a high-availability feature that replaces a database instance with another one when the
original instance becomes unavailable. A failover might happen because of a problem with a database
instance. It might also be part of normal maintenance procedures, such as during a database upgrade.
Failover applies to RDS DB instances in a Multi-AZ configuration. Failover applies to Aurora DB clusters
with one or more reader instances in addition to the writer instance.

Connecting through a proxy makes your application more resilient to database failovers. When the
original DB instance becomes unavailable, RDS Proxy connects to the standby database without dropping
idle application connections. Doing so helps to speed up and simplify the failover process. The result is
faster failover that's less disruptive to your application than a typical reboot or database problem.

Without RDS Proxy, a failover involves a brief outage. During the outage, you can't perform write
operations on that database. Any existing database connections are disrupted, and your application must
reopen them. The database becomes available for new connections and write operations when a read-
only DB instance is promoted in place of one that's unavailable.

During DB failovers, RDS Proxy continues to accept connections at the same IP address and automatically
directs connections to the new primary DB instance. Clients connecting through RDS Proxy are not
susceptible to the following:

• Domain Name System (DNS) propagation delays on failover.


• Local DNS caching.
• Connection timeouts.
• Uncertainty about which DB instance is the current writer.
• Waiting for a query response from a former writer that became unavailable without closing
connections.

For applications that maintain their own connection pool, going through RDS Proxy means that most
connections stay alive during failovers or other disruptions. Only connections that are in the middle of a
transaction or SQL statement are canceled. RDS Proxy immediately accepts new connections. When the
database writer is unavailable, RDS Proxy queues up incoming requests.

For applications that don't maintain their own connection pools, RDS Proxy offers faster connection
rates and more open connections. It offloads the expensive overhead of frequent reconnects from the
database. It does so by reusing database connections maintained in the RDS Proxy connection pool. This
approach is particularly important for TLS connections, where setup costs are significant.

Transactions
All the statements within a single transaction always use the same underlying database connection.
The connection becomes available for use by a different session when the transaction ends. Using the
transaction as the unit of granularity has the following consequences:

• Connection reuse can happen after each individual statement when the RDS for MySQL or Aurora
MySQL autocommit setting is turned on.
• Conversely, when the autocommit setting is turned off, the first statement you issue in a session
begins a new transaction. For example, suppose that you enter a sequence of SELECT, INSERT,
UPDATE, and other data manipulation language (DML) statements. In this case, connection reuse
doesn't happen until you issue a COMMIT, ROLLBACK, or otherwise end the transaction.
• Entering a data definition language (DDL) statement causes the transaction to end after that
statement completes.


RDS Proxy detects when a transaction ends through the network protocol used by the database client
application. Transaction detection doesn't rely on keywords such as COMMIT or ROLLBACK appearing in
the text of the SQL statement.

In some cases, RDS Proxy might detect a database request that makes it impractical to move your session
to a different connection. In these cases, it turns off multiplexing for that connection for the remainder
of your session. The same rule applies if RDS Proxy can't be certain that multiplexing is practical for
the session. This operation is called pinning. For ways to detect and minimize pinning, see Avoiding
pinning (p. 1228).

Getting started with RDS Proxy


In the following sections, you can find how to set up RDS Proxy. You can also find how to set related
security options. These control who can access each proxy and how each proxy connects to DB instances.

Topics
• Setting up network prerequisites (p. 1207)
• Setting up database credentials in AWS Secrets Manager (p. 1209)
• Setting up AWS Identity and Access Management (IAM) policies (p. 1210)
• Creating an RDS Proxy (p. 1212)
• Viewing an RDS Proxy (p. 1217)
• Connecting to a database through RDS Proxy (p. 1218)

Setting up network prerequisites


Using RDS Proxy requires you to have a common virtual private cloud (VPC) between your Aurora DB
cluster or RDS DB instance and RDS Proxy. This VPC should have a minimum of two subnets that are
in different Availability Zones. Your account can either own these subnets or share them with other
accounts. For information about VPC sharing, see Work with shared VPCs. Your client application
resources such as Amazon EC2, Lambda, or Amazon ECS can be in the same VPC as the proxy. Or they
can be in a separate VPC from the proxy. If you successfully connected to any RDS DB instances or Aurora
DB clusters, you already have the required network resources.

Topics
• Getting information about your subnets (p. 1207)
• Planning for IP address capacity (p. 1208)

Getting information about your subnets


The following Linux example shows AWS CLI commands that examine the VPCs and subnets owned by
your AWS account. In particular, you pass subnet IDs as parameters when you create a proxy using the
CLI.

aws ec2 describe-vpcs
aws ec2 describe-internet-gateways
aws ec2 describe-subnets --query '*[].[VpcId,SubnetId]' --output text | sort

The following Linux example shows AWS CLI commands to determine the subnet IDs corresponding to
a specific Aurora DB cluster or RDS DB instance. For an Aurora cluster, first you find the ID for one of
the associated DB instances. You can extract the subnet IDs used by that DB instance. To do so, examine
the nested fields within the DBSubnetGroup and Subnets attributes in the describe output for the DB
instance. You specify some or all of those subnet IDs when setting up a proxy for that database server.


$ # Optional first step, only needed if you're starting from an Aurora cluster.
$ # Find the ID of any DB instance in the cluster.
$ aws rds describe-db-clusters --db-cluster-identifier my_cluster_id \
    --query '*[].[DBClusterMembers]|[0]|[0][*].DBInstanceIdentifier' --output text
my_instance_id
instance_id_2
instance_id_3
...

$ # From the DB instance, trace through the DBSubnetGroup and Subnets to find the subnet IDs.
$ aws rds describe-db-instances --db-instance-identifier my_instance_id \
    --query '*[].[DBSubnetGroup]|[0]|[0]|[Subnets]|[0]|[*].SubnetIdentifier' --output text
subnet_id_1
subnet_id_2
subnet_id_3
...

Or you can first find the VPC ID for the DB instance. Then you can examine the VPC to find its subnets.
The following Linux example shows how.

$ # From the DB instance, find the VPC.
$ aws rds describe-db-instances --db-instance-identifier my_instance_id \
    --query '*[].[DBSubnetGroup]|[0]|[0].VpcId' --output text
my_vpc_id

$ aws ec2 describe-subnets --filters Name=vpc-id,Values=my_vpc_id \
    --query '*[].[SubnetId]' --output text
subnet_id_1
subnet_id_2
subnet_id_3
subnet_id_4
subnet_id_5
subnet_id_6

Planning for IP address capacity


An RDS Proxy automatically adjusts its capacity as needed based on the size and number of DB instances
registered with it. Certain operations might also require additional proxy capacity. Examples include
increasing the size of a registered database and internal RDS Proxy maintenance operations. During these
operations, your proxy might need more IP addresses to provision the extra capacity. These additional
addresses allow your proxy to scale without affecting your workload. A lack of free IP addresses in your
subnets prevents a proxy from scaling up. This can lead to higher query latencies or client connection
failures. RDS notifies you through event RDS-EVENT-0243 when there aren't enough free IP addresses in
your subnets. For information about this event, see Working with RDS Proxy events (p. 1244).

Following are the recommended minimum number of IP addresses to leave free in your subnets for your
proxy based on DB instance class sizes.

DB instance class Minimum free IP addresses

db.*.xlarge or smaller 10

db.*.2xlarge 15

db.*.4xlarge 25

db.*.8xlarge 45


db.*.12xlarge 60

db.*.16xlarge 75

db.*.24xlarge 110

These numbers of recommended IP addresses are estimates for a proxy with only the default endpoint.
A proxy with additional endpoints or read replicas might need more free IP addresses. For each
additional endpoint, we recommend that you reserve three more IP addresses. For each read replica, we
recommend that you reserve additional IP addresses as specified in the table based on that read replica's
size.
Note
RDS Proxy never uses more than 215 IP addresses in a VPC.
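
To see how many IP addresses are currently free in each subnet of your VPC, you can check the
AvailableIpAddressCount attribute, as in the following sketch. The VPC ID my_vpc_id is a placeholder;
substitute the VPC used by your database and proxy.

# List each subnet in the VPC with its count of free IP addresses
aws ec2 describe-subnets --filters Name=vpc-id,Values=my_vpc_id \
    --query 'Subnets[].[SubnetId,AvailableIpAddressCount]' --output table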

Setting up database credentials in AWS Secrets Manager
For each proxy that you create, you first use the Secrets Manager service to store sets of user name and
password credentials. You create a separate Secrets Manager secret for each database user account that
the proxy connects to on the RDS DB instance or Aurora DB cluster.

In Secrets Manager, you create these secrets with values for the username and password fields. Doing
so allows the proxy to connect to the corresponding database users on RDS DB instances or Aurora DB
clusters that you associate with the proxy. To do this, you can use the setting Credentials for other
database, Credentials for RDS database, or Other type of secrets. Fill in the appropriate values for
the User name and Password fields, and placeholder values for any other required fields. The proxy
ignores other fields such as Host and Port if they're present in the secret. Those details are automatically
supplied by the proxy.

You can also choose Other type of secrets. In this case, you create the secret with keys named username
and password.

Because the secrets used by your proxy aren't tied to a specific database server, you can reuse a secret
across multiple proxies if those database servers use the same credentials. For example, you might use
the same credentials across a group of development and test servers.

To connect through the proxy as a specific user, make sure that the password associated with a secret
matches the database password for that user. If there's a mismatch, you can update the associated secret
in Secrets Manager. In this case, you can still connect to other accounts where the secret credentials and
the database passwords do match.
Note
For RDS for SQL Server, the number of Secrets Manager secrets that you need to create for a
proxy depends on the collation that your DB instance uses. For example, suppose that your DB
instance uses case-sensitive collation. If your application accepts both "Admin" and "admin,"
then your proxy needs two separate secrets. For more information about collation in SQL Server,
see the Microsoft SQL Server documentation.

When you create a proxy through the AWS CLI or RDS API, you specify the Amazon Resource Names
(ARNs) of the corresponding secrets. You do so for all the DB user accounts that the proxy can access. In
the AWS Management Console, you choose the secrets by their descriptive names.

For instructions about creating secrets in Secrets Manager, see the Creating a secret page in the Secrets
Manager documentation. Use one of the following techniques:


• Use Secrets Manager in the console.


• To use the CLI to create a Secrets Manager secret for use with RDS Proxy, use a command such as the
following.

aws secretsmanager create-secret \
    --name "secret_name" \
    --description "secret_description" \
    --region region_name \
    --secret-string '{"username":"db_user","password":"db_user_password"}'

For example, the following commands create Secrets Manager secrets for two database users, one
named admin and the other named app-user.

aws secretsmanager create-secret \
    --name admin_secret_name --description "db admin user" \
    --secret-string '{"username":"admin","password":"choose_your_own_password"}'

aws secretsmanager create-secret \
    --name proxy_secret_name --description "application user" \
    --secret-string '{"username":"app-user","password":"choose_your_own_password"}'

To see the secrets owned by your AWS account, use a command such as the following.

aws secretsmanager list-secrets

When you create a proxy using the CLI, you pass the Amazon Resource Names (ARNs) of one or more
secrets to the --auth parameter. The following Linux example shows how to prepare a report with only
the name and ARN of each secret owned by your AWS account. This example uses the --output table
parameter that is available in AWS CLI version 2. If you are using AWS CLI version 1, use --output text
instead.

aws secretsmanager list-secrets --query '*[].[Name,ARN]' --output table

To verify that you stored the correct credentials and in the right format in a secret, use a command such
as the following. Substitute the short name or the ARN of the secret for your_secret_name.

aws secretsmanager get-secret-value --secret-id your_secret_name

The output should include a line displaying a JSON-encoded value like the following.

"SecretString": "{\"username\":\"your_username\",\"password\":\"your_password\"}",

Setting up AWS Identity and Access Management (IAM) policies
After you create the secrets in Secrets Manager, you create an IAM policy that can access those secrets.
For general information about using IAM with RDS and Aurora, see Identity and access management for
Amazon RDS (p. 2606).
Tip
The following procedure applies if you use the IAM console. If you use the AWS Management
Console for RDS, RDS can create the IAM policy for you automatically. In that case, you can skip
the following procedure.


To create an IAM policy that accesses your Secrets Manager secrets for use with your proxy

1. Sign in to the IAM console. Follow the Create role process, as described in Creating IAM roles,
choosing Creating a role to delegate permissions to an AWS service.

Choose AWS service for the Trusted entity type. Under Use case, select RDS from the Use cases for
other AWS services dropdown. Then select RDS - Add Role to Database.
2. For the new role, perform the Add inline policy step. Use the same general procedures as in Editing
IAM policies. Paste the following JSON into the JSON text box. Substitute your own account ID.
Substitute your AWS Region for us-east-2. Substitute the Amazon Resource Names (ARNs) for the
secrets that you created. For the kms:Decrypt action, substitute the ARN of the default AWS KMS key
or your own KMS key, depending on which one you used to encrypt the Secrets Manager secrets. For
more information, see Specifying KMS keys in IAM policy statements.

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "secretsmanager:GetSecretValue",
"Resource": [
"arn:aws:secretsmanager:us-east-2:account_id:secret:secret_name_1",
"arn:aws:secretsmanager:us-east-2:account_id:secret:secret_name_2"
]
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": "kms:Decrypt",
"Resource": "arn:aws:kms:us-east-2:account_id:key/key_id",
"Condition": {
"StringEquals": {
"kms:ViaService": "secretsmanager.us-east-2.amazonaws.com"
}
}
}
]
}

3. Edit the trust policy for this IAM role. Paste the following JSON into the JSON text box.

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "rds.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}

The following commands perform the same operation through the AWS CLI.

PREFIX=my_identifier


aws iam create-role --role-name my_role_name \
    --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":["rds.amazonaws.com"]},"Action":"sts:AssumeRole"}]}'

aws iam put-role-policy --role-name my_role_name \
    --policy-name $PREFIX-secret-reader-policy \
    --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"secretsmanager:GetSecretValue","Resource":["arn:aws:secretsmanager:us-east-2:account_id:secret:secret_name_1","arn:aws:secretsmanager:us-east-2:account_id:secret:secret_name_2"]}]}'

aws kms create-key --description "$PREFIX-test-key" --policy '{
"Id":"$PREFIX-kms-policy",
"Version":"2012-10-17",
"Statement":
[
{
"Sid":"Enable IAM User Permissions",
"Effect":"Allow",
"Principal":{"AWS":"arn:aws:iam::account_id:root"},
"Action":"kms:*","Resource":"*"
},
{
"Sid":"Allow access for Key Administrators",
"Effect":"Allow",
"Principal":
{
"AWS":
["$USER_ARN","arn:aws:iam::account_id:role/Admin"]
},
"Action":
[
"kms:Create*",
"kms:Describe*",
"kms:Enable*",
"kms:List*",
"kms:Put*",
"kms:Update*",
"kms:Revoke*",
"kms:Disable*",
"kms:Get*",
"kms:Delete*",
"kms:TagResource",
"kms:UntagResource",
"kms:ScheduleKeyDeletion",
"kms:CancelKeyDeletion"
],
"Resource":"*"
},
{
"Sid":"Allow use of the key",
"Effect":"Allow",
"Principal":{"AWS":"$ROLE_ARN"},
"Action":["kms:Decrypt","kms:DescribeKey"],
"Resource":"*"
}
]
}'

Creating an RDS Proxy


To manage connections for a specified set of DB instances, you can create a proxy. You can associate a
proxy with an RDS for MariaDB, RDS for Microsoft SQL Server, RDS for MySQL, or RDS for PostgreSQL DB
instance.


AWS Management Console

To create a proxy

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Proxies.
3. Choose Create proxy.
4. Choose all the settings for your proxy.

For Proxy configuration, provide information for the following:

• Engine family. This setting determines which database network protocol the proxy recognizes
when it interprets network traffic to and from the database. For RDS for MariaDB or RDS for
MySQL, choose MariaDB and MySQL. For RDS for PostgreSQL, choose PostgreSQL. For RDS for
SQL Server, choose SQL Server.
• Proxy identifier. Specify a name of your choosing, unique within your AWS account ID and current
AWS Region.
• Idle client connection timeout. Choose a time period that a client connection can be idle before
the proxy can close it. The default is 1,800 seconds (30 minutes). A client connection is considered
idle when the application doesn't submit a new request within the specified time after the
previous request completed. The underlying database connection stays open and is returned to
the connection pool. Thus, it's available to be reused for new client connections.

Consider lowering the idle client connection timeout if you want the proxy to proactively remove
stale connections. If your workload is spiking, consider raising the idle client connection timeout to
save the cost of establishing connections.

For Target group configuration, provide information for the following:

• Database. Choose one RDS DB instance or Aurora DB cluster to access through this proxy. The list
only includes DB instances and clusters with compatible database engines, engine versions, and
other settings. If the list is empty, create a new DB instance or cluster that's compatible with RDS
Proxy. To do so, follow the procedure in Creating an Amazon RDS DB instance (p. 300). Then try
creating the proxy again.
• Connection pool maximum connections. Specify a value from 1 through 100. This
setting represents the percentage of the max_connections value that RDS Proxy can use
for its connections. If you only intend to use one proxy with this DB instance or cluster,
you can set this value to 100. For details about how RDS Proxy uses this setting, see
MaxConnectionsPercent (p. 1226).
• Session pinning filters. (Optional) This option allows you to force RDS Proxy to not pin for certain
types of detected session states. This circumvents the default safety measures for multiplexing
database connections across client connections. Currently, the setting isn't supported for
PostgreSQL and the only choice is EXCLUDE_VARIABLE_SETS.

Enabling this setting can cause variables of one connection to impact other connections. This can
cause errors or correctness issues if your queries depend on session variable values set outside of
the current transaction. Consider using this option after verifying it is safe for your applications to
share database connections across client connections.

The following patterns can be considered safe:


• SET statements that don't change the effective value of a session variable.
• You change the session variable value and execute a statement in the same transaction.


For more information, see Avoiding pinning (p. 1228).


• Connection borrow timeout. In some cases, you might expect the proxy to sometimes use all
available database connections. In such cases, you can specify how long the proxy waits for
a database connection to become available before returning a timeout error. You can specify
a period up to a maximum of five minutes. This setting only applies when the proxy has the
maximum number of connections open and all connections are already in use.
• Initialization query. (Optional) You can specify one or more SQL statements for the proxy to run
when opening each new database connection. The setting is typically used with SET statements
to make sure that each connection has identical settings such as time zone and character set. For
multiple statements, use semicolons as the separator. You can also include multiple variables in a
single SET statement, such as SET x=1, y=2.

For Authentication, provide information for the following:

• IAM role. Choose an IAM role that has permission to access the Secrets Manager secrets that you
chose earlier. You can also choose for the AWS Management Console to create a new IAM role for
you and use that.
• Secrets Manager secrets. Choose at least one Secrets Manager secret that contains database user
credentials for the RDS DB instance or Aurora DB cluster to access with this proxy.
• Client authentication type. Choose the type of authentication the proxy uses for connections
from clients. Your choice applies to all Secrets Manager secrets that you associate with this proxy.
If you need to specify a different client authentication type for each secret, create your proxy by
using the AWS CLI or the API instead.
• IAM authentication. Choose whether to require, allow, or disallow IAM authentication for
connections to your proxy. The allow option is only valid for proxies for RDS for SQL Server. Your
choice applies to all Secrets Manager secrets that you associate with this proxy. If you need to
specify a different IAM authentication for each secret, create your proxy by using the AWS CLI or
the API instead.

For Connectivity, provide information for the following:

• Require Transport Layer Security. Choose this setting if you want the proxy to enforce TLS/SSL
for all client connections. For an encrypted or unencrypted connection to a proxy, the proxy uses
the same encryption setting when it makes a connection to the underlying database.
• Subnets. This field is prepopulated with all the subnets associated with your VPC. You can remove
any subnets that you don't need for this proxy. You must leave at least two subnets.

Provide additional connectivity configuration:

• VPC security group. Choose an existing VPC security group. You can also choose for the AWS
Management Console to create a new security group for you and use that. You must configure
the Inbound rules to allow your applications to access the proxy. You must also configure the
Outbound rules to allow traffic from your DB targets.
Note
This security group must allow connections from the proxy to the database. The same
security group is used for ingress from your applications to the proxy, and for egress from
the proxy to the database. For example, suppose that you use the same security group for
your database and your proxy. In this case, make sure that you specify that resources in
that security group can communicate with other resources in the same security group.
When using a shared VPC, you can't use the default security group for the VPC, or one
that belongs to another account. Choose a security group that belongs to your account. If
one doesn't exist, create one. For more information about this limitation, see Work with
shared VPCs.


(Optional) Provide advanced configuration:

• Enable enhanced logging. You can enable this setting to troubleshoot proxy compatibility or
performance issues.

When this setting is enabled, RDS Proxy includes detailed information about SQL statements in
its logs. This information helps you to debug issues involving SQL behavior or the performance
and scalability of the proxy connections. The debug information includes the text of SQL
statements that you submit through the proxy. Thus, only enable this setting when needed
for debugging. Also, only enable it when you have security measures in place to safeguard any
sensitive information that appears in the logs.

To minimize overhead associated with your proxy, RDS Proxy automatically turns this setting off
24 hours after you enable it. Enable it temporarily to troubleshoot a specific issue.
5. Choose Create Proxy.

AWS CLI
To create a proxy by using the AWS CLI, call the create-db-proxy command with the following required
parameters:

• --db-proxy-name
• --engine-family
• --role-arn
• --auth
• --vpc-subnet-ids

The --engine-family value is case-sensitive.

Example

For Linux, macOS, or Unix:

aws rds create-db-proxy \
    --db-proxy-name proxy_name \
    --engine-family { MYSQL | POSTGRESQL | SQLSERVER } \
    --auth ProxyAuthenticationConfig_JSON_string \
    --role-arn iam_role \
    --vpc-subnet-ids space_separated_list \
    [--vpc-security-group-ids space_separated_list] \
    [--require-tls | --no-require-tls] \
    [--idle-client-timeout value] \
    [--debug-logging | --no-debug-logging] \
    [--tags comma_separated_list]

For Windows:

aws rds create-db-proxy ^
    --db-proxy-name proxy_name ^
    --engine-family { MYSQL | POSTGRESQL | SQLSERVER } ^
    --auth ProxyAuthenticationConfig_JSON_string ^
    --role-arn iam_role ^
    --vpc-subnet-ids space_separated_list ^
    [--vpc-security-group-ids space_separated_list] ^
    [--require-tls | --no-require-tls] ^
    [--idle-client-timeout value] ^
    [--debug-logging | --no-debug-logging] ^
    [--tags comma_separated_list]

The following is an example of the JSON value for the --auth option. This example
applies a different client authentication type to each secret.

[
{
"Description": "proxy description 1",
"AuthScheme": "SECRETS",
"SecretArn": "arn:aws:secretsmanager:us-
west-2:123456789123:secret/1234abcd-12ab-34cd-56ef-1234567890ab",
"IAMAuth": "DISABLED",
"ClientPasswordAuthType": "POSTGRES_SCRAM_SHA_256"
},

{
"Description": "proxy description 2",
"AuthScheme": "SECRETS",
"SecretArn": "arn:aws:secretsmanager:us-
west-2:111122223333:secret/1234abcd-12ab-34cd-56ef-1234567890cd",
"IAMAuth": "DISABLED",
"ClientPasswordAuthType": "POSTGRES_MD5"

},

{
"Description": "proxy description 3",
"AuthScheme": "SECRETS",
"SecretArn": "arn:aws:secretsmanager:us-
west-2:111122221111:secret/1234abcd-12ab-34cd-56ef-1234567890ef",
"IAMAuth": "REQUIRED"
}
]

Tip
If you don't already know the subnet IDs to use for the --vpc-subnet-ids parameter, see
Setting up network prerequisites (p. 1207) for examples of how to find them.
Note
The security group must allow access to the database the proxy connects to. The same security
group is used for ingress from your applications to the proxy, and for egress from the proxy to
the database. For example, suppose that you use the same security group for your database
and your proxy. In this case, make sure that you specify that resources in that security group can
communicate with other resources in the same security group.
When using a shared VPC, you can't use the default security group for the VPC, or one that
belongs to another account. Choose a security group that belongs to your account. If one
doesn't exist, create one. For more information about this limitation, see Work with shared VPCs.

To create the required information and associations for the proxy, you also use the register-db-proxy-
targets command. Specify the target group name default. RDS Proxy automatically creates a target
group with this name when you create each proxy.

aws rds register-db-proxy-targets


--db-proxy-name value
[--target-group-name target_group_name]
[--db-instance-identifiers space_separated_list] # rds db instances, or
[--db-cluster-identifiers cluster_id] # rds db cluster (all instances)
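
As a hypothetical end-to-end sketch, the following pair of commands creates a MySQL-family proxy and
then registers an RDS DB instance with its default target group. The proxy name, secret ARN, IAM role
ARN, subnet IDs, and DB instance identifier are placeholder assumptions that you replace with your own
values.

aws rds create-db-proxy \
    --db-proxy-name my-proxy \
    --engine-family MYSQL \
    --auth '[{"AuthScheme":"SECRETS","SecretArn":"arn:aws:secretsmanager:us-east-1:123456789012:secret:my-db-secret","IAMAuth":"DISABLED"}]' \
    --role-arn arn:aws:iam::123456789012:role/my-rds-proxy-role \
    --vpc-subnet-ids subnet-1234abcd subnet-5678efgh

aws rds register-db-proxy-targets \
    --db-proxy-name my-proxy \
    --target-group-name default \
    --db-instance-identifiers my-db-instance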


RDS API
To create an RDS proxy, call the Amazon RDS API operation CreateDBProxy. You pass a parameter with
the AuthConfig data structure.

RDS Proxy automatically creates a target group named default when you create each proxy. You
associate an RDS DB instance or Aurora DB cluster with the target group by calling the function
RegisterDBProxyTargets.

Viewing an RDS Proxy


After you create one or more RDS proxies, you can view them all. Doing so makes it possible to examine
their configuration details and choose which ones to modify, delete, and so on.

Any database application that uses the proxy requires the proxy endpoint in its connection string.

AWS Management Console


To view your proxy

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the AWS Management Console, choose the AWS Region in which you
created the RDS Proxy.
3. In the navigation pane, choose Proxies.
4. Choose the name of an RDS proxy to display its details.
5. On the details page, the Target groups section shows how the proxy is associated with a specific
RDS DB instance or Aurora DB cluster. You can follow the link to the default target group page to
see more details about the association between the proxy and the database. This page is where
you see settings that you specified when creating the proxy. These include maximum connection
percentage, connection borrow timeout, engine family, and session pinning filters.

CLI
To view your proxy using the CLI, use the describe-db-proxies command. By default, it displays all proxies
owned by your AWS account. To see details for a single proxy, specify its name with the --db-proxy-
name parameter.

aws rds describe-db-proxies [--db-proxy-name proxy_name]

To view the other information associated with the proxy, use the following commands.

aws rds describe-db-proxy-target-groups --db-proxy-name proxy_name

aws rds describe-db-proxy-targets --db-proxy-name proxy_name

Use the following sequence of commands to see more detail about the things that are associated with
the proxy:

1. To get a list of proxies, run describe-db-proxies.


2. To show connection parameters such as the maximum percentage of connections that the proxy can
use, run describe-db-proxy-target-groups --db-proxy-name. Use the name of the proxy as the
parameter value.


3. To see the details of the RDS DB instance or Aurora DB cluster associated with the returned target
group, run describe-db-proxy-targets.

RDS API
To view your proxies using the RDS API, use the DescribeDBProxies operation. It returns values of the
DBProxy data type.

To see details of the connection settings for the proxy, use the proxy identifiers from this return value
with the DescribeDBProxyTargetGroups operation. It returns values of the DBProxyTargetGroup data
type.

To see the RDS instance or Aurora DB cluster associated with the proxy, use the DescribeDBProxyTargets
operation. It returns values of the DBProxyTarget data type.

Connecting to a database through RDS Proxy


You connect to an RDS DB instance through a proxy in generally the same way as you connect directly
to the database. The main difference is that you specify the proxy endpoint instead of the instance
endpoint. For more information, see Overview of proxy endpoints (p. 1233).

Topics
• Connecting to a proxy using native authentication (p. 1218)
• Connecting to a proxy using IAM authentication (p. 1219)
• Considerations for connecting to a proxy with Microsoft SQL Server (p. 1219)
• Considerations for connecting to a proxy with PostgreSQL (p. 1220)

Connecting to a proxy using native authentication


Use the following basic steps to connect to a proxy using native authentication:

1. Find the proxy endpoint. In the AWS Management Console, you can find the endpoint on the details
page for the corresponding proxy. With the AWS CLI, you can use the describe-db-proxies command.
The following example shows how.

# Add --output text to get output as a simple tab-separated list.


$ aws rds describe-db-proxies --query '*[*].{DBProxyName:DBProxyName,Endpoint:Endpoint}'
[
[
{
"Endpoint": "the-proxy.proxy-demo.us-east-1.rds.amazonaws.com",
"DBProxyName": "the-proxy"
},
{
"Endpoint": "the-proxy-other-secret.proxy-demo.us-east-1.rds.amazonaws.com",
"DBProxyName": "the-proxy-other-secret"
},
{
"Endpoint": "the-proxy-rds-secret.proxy-demo.us-east-1.rds.amazonaws.com",
"DBProxyName": "the-proxy-rds-secret"
},
{
"Endpoint": "the-proxy-t3.proxy-demo.us-east-1.rds.amazonaws.com",
"DBProxyName": "the-proxy-t3"
}
]
]


2. Specify that endpoint as the host parameter in the connection string for your client application. For
example, specify the proxy endpoint as the value for the mysql -h option or psql -h option, as shown in
the sketch after this list.
3. Supply the same database user name and password as you usually do.
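
For example, a minimal sketch of connecting with the mysql and psql clients, using the endpoint of the
proxy named the-proxy from the earlier output. The port, database name, and user names are placeholders.

mysql -h the-proxy.proxy-demo.us-east-1.rds.amazonaws.com -P 3306 -u admin -p

psql -h the-proxy.proxy-demo.us-east-1.rds.amazonaws.com -p 5432 -U postgres -d postgres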

Connecting to a proxy using IAM authentication


When you use IAM authentication with RDS Proxy, set up your database users to authenticate with
regular user names and passwords. The IAM authentication applies to RDS Proxy retrieving the user
name and password credentials from Secrets Manager. The connection from RDS Proxy to the underlying
database doesn't go through IAM.

To connect to RDS Proxy using IAM authentication, use the same general connection procedure as for
IAM authentication with an RDS DB instance or Aurora cluster. For general information about using IAM
with RDS and Aurora, see Security in Amazon RDS (p. 2565).

The major differences in IAM usage for RDS Proxy include the following:

• You don't configure each individual database user with an authorization plugin. The database users
still have regular user names and passwords within the database. You set up Secrets Manager secrets
containing these user names and passwords, and authorize RDS Proxy to retrieve the credentials from
Secrets Manager.

The IAM authentication applies to the connection between your client program and the proxy. The
proxy then authenticates to the database using the user name and password credentials retrieved from
Secrets Manager.
• Instead of the instance, cluster, or reader endpoint, you specify the proxy endpoint. For details about
the proxy endpoint, see Connecting to your DB instance using IAM authentication (p. 2650).
• In the direct database IAM authentication case, you selectively choose database users and configure
them to be identified with a special authentication plugin. You can then connect to those users using
IAM authentication.

In the proxy use case, you provide the proxy with Secrets that contain some user's user name and
password (native authentication). You then connect to the proxy using IAM authentication. Here, you
do this by generating an authentication token with the proxy endpoint, not the database endpoint.
You also use a user name that matches one of the user names for the secrets that you provided.
• Make sure that you use Transport Layer Security (TLS)/Secure Sockets Layer (SSL) when connecting to
a proxy using IAM authentication.

You can grant a specific user access to the proxy by modifying the IAM policy. An example follows.

"Resource": "arn:aws:rds-db:us-east-2:1234567890:dbuser:prx-ABCDEFGHIJKL01234/db_user"

Considerations for connecting to a proxy with Microsoft SQL Server

For connecting to a proxy using IAM authentication, you don't use the password field. Instead, you
provide the appropriate token property for each type of database driver in the token field. For example,
use the accessToken property for JDBC, or the sql_copt_ss_access_token property for ODBC.
Or use the AccessToken property for the .NET SqlClient driver. You can't use IAM authentication with
clients that don't support token properties.

Under some conditions, a proxy can't share a database connection and instead pins the connection from
your client application to the proxy to a dedicated database connection. For more information about
these conditions, see Avoiding pinning (p. 1228).


Considerations for connecting to a proxy with PostgreSQL


For PostgreSQL, when a client starts a connection to a PostgreSQL database, it sends a startup message.
This message includes pairs of parameter name and value strings. For details, see the StartupMessage
in PostgreSQL message formats in the PostgreSQL documentation.

When connecting through an RDS proxy, the startup message can include the following currently
recognized parameters:

• user
• database
• replication

The startup message can also include the following additional runtime parameters:

• application_name
• client_encoding
• DateStyle
• TimeZone
• extra_float_digits

For more information about PostgreSQL messaging, see the Frontend/Backend protocol in the
PostgreSQL documentation.

For PostgreSQL, if you use JDBC, we recommend the following to avoid pinning (a sample connection URL follows this list):

• Set the JDBC connection parameter assumeMinServerVersion to at least 9.0 to avoid pinning.
Doing this prevents the JDBC driver from performing an extra round trip during connection startup
when it runs SET extra_float_digits = 3.
• Set the JDBC connection parameter ApplicationName to any/your-application-name to
avoid pinning. Doing this prevents the JDBC driver from performing an extra round trip during
connection startup when it runs SET application_name = "PostgreSQL JDBC Driver".
Note the JDBC parameter is ApplicationName but the PostgreSQL StartupMessage parameter is
application_name.
• Set the JDBC connection parameter preferQueryMode to extendedForPrepared to avoid pinning.
The extendedForPrepared setting ensures that the extended mode is used only for prepared statements.

The default for the preferQueryMode parameter is extended, which uses the extended mode for
all queries. The extended mode uses a series of Prepare, Bind, Execute, and Sync requests and
corresponding responses. This type of series causes connection pinning in an RDS proxy.
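
As a sketch, a JDBC connection URL that combines these recommendations might look like the following.
The host, database name, and application name are placeholder assumptions.

jdbc:postgresql://my-proxy.proxy-demo.us-east-1.rds.amazonaws.com:5432/postgres?assumeMinServerVersion=9.0&ApplicationName=my-application&preferQueryMode=extendedForPrepared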

For more information, see Avoiding pinning (p. 1228). For more information about connecting using
JDBC, see Connecting to the database in the PostgreSQL documentation.

Managing an RDS Proxy


Following, you can find an explanation of how to manage RDS Proxy operation and configuration. These
procedures help your application make the most efficient use of database connections and achieve
maximum connection reuse. The more that you can take advantage of connection reuse, the more CPU
and memory overhead that you can save. This in turn reduces latency for your application and enables
the database to devote more of its resources to processing application requests.


Topics
• Modifying an RDS Proxy (p. 1221)
• Adding a new database user (p. 1225)
• Changing the password for a database user (p. 1226)
• Configuring connection settings (p. 1226)
• Avoiding pinning (p. 1228)
• Deleting an RDS Proxy (p. 1232)

Modifying an RDS Proxy


You can change specific settings associated with a proxy after you create the proxy. You do so by
modifying the proxy itself, its associated target group, or both. Each proxy has an associated target
group.

AWS Management Console


Important
The values in the Client authentication type and IAM authentication fields apply to all Secrets
Manager secrets that are associated with this proxy. To specify different values for each secret,
modify your proxy by using the AWS CLI or the API instead.

To modify the settings for a proxy

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Proxies.
3. In the list of proxies, choose the proxy whose settings you want to modify or go to its details page.
4. For Actions, choose Modify.
5. Enter or choose the properties to modify. You can modify the following:

• Proxy identifier – Rename the proxy by entering a new identifier.


• Idle client connection timeout – Enter a time period for the idle client connection timeout.
• IAM role – Change the IAM role used to retrieve the secrets from Secrets Manager.
• Secrets Manager secrets – Add or remove Secrets Manager secrets. These secrets correspond to
database user names and passwords.
• Client authentication type – (PostgreSQL only) Change the type of authentication for client
connections to the proxy.
• IAM authentication – Require or disallow IAM authentication for connections to the proxy.
• Require Transport Layer Security – Turn the requirement for Transport Layer Security (TLS) on or
off.
• VPC security group – Add or remove VPC security groups for the proxy to use.
• Enable enhanced logging – Enable or disable enhanced logging.
6. Choose Modify.

If you didn't find the settings listed that you want to change, use the following procedure to update the
target group for the proxy. The target group associated with a proxy controls the settings related to the
physical database connections. Each proxy has one associated target group named default, which is
created automatically along with the proxy.

You can only modify the target group from the proxy details page, not from the list on the Proxies page.


To modify the settings for a proxy target group

1. On the Proxies page, go to the details page for a proxy.


2. For Target groups, choose the default link. Currently, all proxies have a single target group named
default.
3. On the details page for the default target group, choose Modify.
4. Choose new settings for the properties that you can modify:

• Database – Choose a different RDS DB instance or Aurora cluster.


• Connection pool maximum connections – Adjust what percentage of the maximum available
connections the proxy can use.
• Session pinning filters – (Optional) Choose a session pinning filter. Doing this can help reduce
performance issues due to insufficient transaction-level reuse for connections. Using this setting
requires understanding of application behavior and the circumstances under which RDS Proxy pins
a session to a database connection. Currently, the setting isn't supported for PostgreSQL and the
only choice is EXCLUDE_VARIABLE_SETS.
• Connection borrow timeout – Adjust the connection borrow timeout interval. This setting applies
when the maximum number of connections is already being used for the proxy. The setting
determines how long the proxy waits for a connection to become available before returning a
timeout error.
• Initialization query – (Optional) Add an initialization query, or modify the current one. You can
specify one or more SQL statements for the proxy to run when opening each new database
connection. The setting is typically used with SET statements to make sure that each connection
has identical settings such as time zone and character set. For multiple statements, use semicolons
as the separator. You can also include multiple variables in a single SET statement, such as SET
x=1, y=2. Initialization query is not currently supported for PostgreSQL.

You can't change certain properties, such as the target group identifier and the database engine.
5. Choose Modify target group.

AWS CLI
To modify a proxy using the AWS CLI, use the commands modify-db-proxy, modify-db-proxy-target-
group, deregister-db-proxy-targets, and register-db-proxy-targets.

With the modify-db-proxy command, you can change properties such as the following:

• The set of Secrets Manager secrets used by the proxy.


• Whether TLS is required.
• The idle client timeout.
• Whether to log additional information from SQL statements for debugging.
• The IAM role used to retrieve Secrets Manager secrets.
• The security groups used by the proxy.

The following example shows how to rename an existing proxy.

aws rds modify-db-proxy --db-proxy-name the-proxy --new-db-proxy-name the_new_name

To modify connection-related settings or rename the target group, use the modify-db-proxy-
target-group command. Currently, all proxies have a single target group named default. When
working with this target group, you specify the name of the proxy and default for the name of the
target group.


The following example shows how to first check the MaxIdleConnectionsPercent setting for a proxy
and then change it, using the target group.

aws rds describe-db-proxy-target-groups --db-proxy-name the-proxy

{
"TargetGroups": [
{
"Status": "available",
"UpdatedDate": "2019-11-30T16:49:30.342Z",
"ConnectionPoolConfig": {
"MaxIdleConnectionsPercent": 50,
"ConnectionBorrowTimeout": 120,
"MaxConnectionsPercent": 100,
"SessionPinningFilters": []
},
"TargetGroupName": "default",
"CreatedDate": "2019-11-30T16:49:27.940Z",
"DBProxyName": "the-proxy",
"IsDefault": true
}
]
}

aws rds modify-db-proxy-target-group --db-proxy-name the-proxy --target-group-name default \
    --connection-pool-config '{ "MaxIdleConnectionsPercent": 75 }'

{
"DBProxyTargetGroup": {
"Status": "available",
"UpdatedDate": "2019-12-02T04:09:50.420Z",
"ConnectionPoolConfig": {
"MaxIdleConnectionsPercent": 75,
"ConnectionBorrowTimeout": 120,
"MaxConnectionsPercent": 100,
"SessionPinningFilters": []
},
"TargetGroupName": "default",
"CreatedDate": "2019-11-30T16:49:27.940Z",
"DBProxyName": "the-proxy",
"IsDefault": true
}
}

With the deregister-db-proxy-targets and register-db-proxy-targets commands, you change which
RDS DB instance or Aurora DB cluster the proxy is associated with through its target group. Currently,
each proxy can connect to one RDS DB instance or Aurora DB cluster. The target group tracks the
connection details for all the RDS DB instances in a Multi-AZ configuration, or for all the DB instances
in an Aurora cluster.

The following example starts with a proxy that is associated with an Aurora MySQL cluster named
cluster-56-2020-02-25-1399. The example shows how to change the proxy so that it can connect
to a different cluster named provisioned-cluster.

When you work with an RDS DB instance, you specify the --db-instance-identifier option. When
you work with an Aurora DB cluster, you specify the --db-cluster-identifier option instead.

The following example modifies an Aurora MySQL proxy. An Aurora PostgreSQL proxy has port 5432.

aws rds describe-db-proxy-targets --db-proxy-name the-proxy


{
"Targets": [
{
"Endpoint": "instance-9814.demo.us-east-1.rds.amazonaws.com",
"Type": "RDS_INSTANCE",
"Port": 3306,
"RdsResourceId": "instance-9814"
},
{
"Endpoint": "instance-8898.demo.us-east-1.rds.amazonaws.com",
"Type": "RDS_INSTANCE",
"Port": 3306,
"RdsResourceId": "instance-8898"
},
{
"Endpoint": "instance-1018.demo.us-east-1.rds.amazonaws.com",
"Type": "RDS_INSTANCE",
"Port": 3306,
"RdsResourceId": "instance-1018"
},
{
"Type": "TRACKED_CLUSTER",
"Port": 0,
"RdsResourceId": "cluster-56-2020-02-25-1399"
},
{
"Endpoint": "instance-4330.demo.us-east-1.rds.amazonaws.com",
"Type": "RDS_INSTANCE",
"Port": 3306,
"RdsResourceId": "instance-4330"
}
]
}

aws rds deregister-db-proxy-targets --db-proxy-name the-proxy \
    --db-cluster-identifier cluster-56-2020-02-25-1399

aws rds describe-db-proxy-targets --db-proxy-name the-proxy

{
"Targets": []
}

aws rds register-db-proxy-targets --db-proxy-name the-proxy \
    --db-cluster-identifier provisioned-cluster

{
"DBProxyTargets": [
{
"Type": "TRACKED_CLUSTER",
"Port": 0,
"RdsResourceId": "provisioned-cluster"
},
{
"Endpoint": "gkldje.demo.us-east-1.rds.amazonaws.com",
"Type": "RDS_INSTANCE",
"Port": 3306,
"RdsResourceId": "gkldje"
},
{
"Endpoint": "provisioned-1.demo.us-east-1.rds.amazonaws.com",
"Type": "RDS_INSTANCE",
"Port": 3306,
"RdsResourceId": "provisioned-1"
}
]
}


RDS API
To modify a proxy using the RDS API, you use the ModifyDBProxy, ModifyDBProxyTargetGroup,
DeregisterDBProxyTargets, and RegisterDBProxyTargets operations.

With ModifyDBProxy, you can change properties such as the following:

• The set of Secrets Manager secrets used by the proxy.


• Whether TLS is required.
• The idle client timeout.
• Whether to log additional information from SQL statements for debugging.
• The IAM role used to retrieve Secrets Manager secrets.
• The security groups used by the proxy.

With ModifyDBProxyTargetGroup, you can modify connection-related settings or rename the target
group. Currently, all proxies have a single target group named default. When working with this target
group, you specify the name of the proxy and default for the name of the target group.

With DeregisterDBProxyTargets and RegisterDBProxyTargets, you change which RDS DB
instance or Aurora DB cluster the proxy is associated with through its target group. Currently, each proxy
can connect to one RDS DB instance or Aurora DB cluster. The target group tracks the connection details
for all the RDS DB instances in a Multi-AZ configuration, or all the DB instances in an Aurora cluster.

Adding a new database user


In some cases, you might add a new database user to an RDS DB instance or Aurora cluster that's
associated with a proxy. If so, add or repurpose a Secrets Manager secret to store the credentials for that
user. To do this, choose one of the following options:

1. Create a new Secrets Manager secret, using the procedure described in Setting up database credentials
in AWS Secrets Manager (p. 1209).
2. Update the IAM role to give RDS Proxy access to the new Secrets Manager secret. To do so, update the
resources section of the IAM role policy.
3. Modify the RDS Proxy to add the new Secrets Manager secret under Secrets Manager secrets.
4. If the new user takes the place of an existing one, update the credentials stored in the proxy's Secrets
Manager secret for the existing user.

If you have run the following command for your PostgreSQL databases:

REVOKE CONNECT ON DATABASE postgres FROM PUBLIC;

Grant the rdsproxyadmin user the CONNECT privilege so the user can monitor connections on the
target database.

GRANT CONNECT ON DATABASE postgres TO rdsproxyadmin;

You can also allow other target database users to perform health checks by changing rdsproxyadmin
to the database user in the command above.


Changing the password for a database user


In some cases, you might change the password for a database user in an RDS DB instance or Aurora
cluster that's associated with a proxy. If so, update the corresponding Secrets Manager secret with the
new password.
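
For example, a minimal sketch using the AWS CLI, assuming the credentials are stored as a JSON
key-value pair in a secret named rds-proxy-secret (a placeholder name):

aws secretsmanager put-secret-value \
    --secret-id rds-proxy-secret \
    --secret-string '{"username":"db_user","password":"new_password"}'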

Configuring connection settings


To adjust RDS Proxy's connection pooling, you can modify the following settings:

• IdleClientTimeout (p. 1226)


• MaxConnectionsPercent (p. 1226)
• MaxIdleConnectionsPercent (p. 1227)
• ConnectionBorrowTimeout (p. 1227)

IdleClientTimeout
You can specify how long a client connection can be idle before the proxy can close it. The default is
1,800 seconds (30 minutes).

A client connection is considered idle when the application doesn't submit a new request within the
specified time after the previous request completed. The underlying database connection stays open
and is returned to the connection pool. Thus, it's available to be reused for new client connections. If
you want the proxy to proactively remove stale connections, consider lowering the idle client connection
timeout. If your workload establishes frequent connections with the proxy, consider raising the idle client
connection timeout to save the cost of establishing connections.

This setting is represented by the Idle client connection timeout field in the RDS console and the
IdleClientTimeout setting in the AWS CLI and the API. To learn how to change the value of the Idle
client connection timeout field in the RDS console, see AWS Management Console (p. 1221). To learn
how to change the value of the IdleClientTimeout setting, see the CLI command modify-db-proxy or
the API operation ModifyDBProxy.
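
For example, the following sketch lowers the idle client connection timeout to 15 minutes for a proxy
named my-proxy (a placeholder name):

aws rds modify-db-proxy \
    --db-proxy-name my-proxy \
    --idle-client-timeout 900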

MaxConnectionsPercent
You can limit the number of connections that an RDS Proxy can establish with the target database.
You specify the limit as a percentage of the maximum connections available for your database. This
setting is represented by the Connection pool maximum connections field in the RDS console and the
MaxConnectionsPercent setting in the AWS CLI and the API.

The MaxConnectionsPercent value is expressed as a percentage of the max_connections setting
for the RDS DB instance used by the target group. The proxy doesn't create all of these connections in
advance. This setting reserves the right for the proxy to establish these connections as the workload
needs them.

For example, for a registered database target with max_connections set to 1000, and
MaxConnectionsPercent set to 95, RDS Proxy sets 950 connections as the upper limit for concurrent
connections to that database target.

A common side-effect of your workload reaching the maximum number of allowed database connections
is an increase in overall query latency, along with an increase in the
DatabaseConnectionsBorrowLatency metric. You can monitor currently used and total allowed database
connections by comparing the DatabaseConnections and MaxDatabaseConnectionsAllowed metrics.

When setting this parameter, note the following best practices:


• Allow sufficient connection headroom for changes in workload pattern. It is recommended to set the
parameter at least 30% above your maximum recent monitored usage. As RDS Proxy redistributes
database connection quotas across multiple nodes, internal capacity changes might require at least
30% headroom for additional connections to avoid increased borrow latencies.
• RDS Proxy reserves a certain number of connections for active monitoring to support fast failover,
traffic routing and internal operations. The MaxDatabaseConnectionsAllowed metric does not
include these reserved connections. It represents the number of connections available to serve the
workload, and can be lower than the value derived from the MaxConnectionsPercent setting.

Minimal recommended MaxConnectionsPercent values


• db.t3.small: 30
• db.t3.medium or above: 20

To learn how to change the value of the Connection pool maximum connections field in the
RDS console, see AWS Management Console (p. 1221). To learn how to change the value of the
MaxConnectionsPercent setting, see the CLI command modify-db-proxy-target-group or the API
operation ModifyDBProxyTargetGroup.

For information on database connection limits, see Maximum number of database connections.

MaxIdleConnectionsPercent
You can control the number of idle database connections that RDS Proxy can keep in the connection
pool. RDS Proxy considers a database connection in its pool to be idle when there's been no activity on
the connection for five minutes.

You specify the limit as a percentage of the maximum connections available for your database.
The default value is 50 percent of MaxConnectionsPercent, and the upper limit is the value of
MaxConnectionsPercent. With a high value, the proxy leaves a high percentage of idle database
connections open. With a low value, the proxy closes a high percentage of idle database connections. If
your workloads are unpredictable, consider setting a high value for MaxIdleConnectionsPercent.
Doing so means that RDS Proxy can accommodate surges in activity without opening a lot of new
database connections.

This setting is represented by the MaxIdleConnectionsPercent setting of DBProxyTargetGroup
in the AWS CLI and the API. To learn how to change the value of the MaxIdleConnectionsPercent
setting, see the CLI command modify-db-proxy-target-group or the API operation
ModifyDBProxyTargetGroup.
Note
RDS Proxy closes database connections some time after 24 hours when they are no longer in
use. The proxy performs this action regardless of the value of the maximum idle connections
setting.

For information on database connection limits, see Maximum number of database connections.

ConnectionBorrowTimeout
You can choose how long RDS Proxy waits for a database connection in the connection pool to become
available for use before returning a timeout error. The default is 120 seconds. This setting applies when
the number of connections is at the maximum, and so no connections are available in the connection
pool. It also applies if no appropriate database instance is available to handle the request because, for
example, a failover operation is in process. Using this setting, you can set the best wait period for your
application without having to change the query timeout in your application code.

This setting is represented by the Connection borrow timeout field in the RDS console or the
ConnectionBorrowTimeout setting of DBProxyTargetGroup in the AWS CLI or API. To learn how
to change the value of the Connection borrow timeout field in the RDS console, see AWS Management
Console (p. 1221). To learn how to change the value of the ConnectionBorrowTimeout setting, see
the CLI command modify-db-proxy-target-group or the API operation ModifyDBProxyTargetGroup.

Avoiding pinning
Multiplexing is more efficient when database requests don't rely on state information from previous
requests. In that case, RDS Proxy can reuse a connection at the conclusion of each transaction. Examples
of such state information include most variables and configuration parameters that you can change
through SET or SELECT statements. SQL transactions on a client connection can multiplex between
underlying database connections by default.

Your connections to the proxy can enter a state known as pinning. When a connection is pinned, each
later transaction uses the same underlying database connection until the session ends. Other client
connections also can't reuse that database connection until the session ends. The session ends when the
client connection is dropped.

RDS Proxy automatically pins a client connection to a specific DB connection when it detects a session
state change that isn't appropriate for other sessions. Pinning reduces the effectiveness of connection
reuse. If all or almost all of your connections experience pinning, consider modifying your application
code or workload to reduce the conditions that cause the pinning.

For example, suppose that your application changes a session variable or configuration parameter. In this
case, later statements can rely on the new variable or parameter to be in effect. Thus, when RDS Proxy
processes requests to change session variables or configuration settings, it pins that session to the DB
connection. That way, the session state remains in effect for all later transactions in the same session.

For some database engines, this rule doesn't apply to all parameters that you can set. RDS Proxy tracks
certain statements and variables. Thus RDS Proxy doesn't pin the session when you modify them. In
this case, RDS Proxy only reuses the connection for other sessions that have the same values for those
settings. For details about what RDS Proxy tracks for a database engine, see the following:

• What RDS Proxy tracks for RDS for SQL Server databases (p. 1228)
• What RDS Proxy tracks for RDS for MariaDB and RDS for MySQL databases (p. 1229)

What RDS Proxy tracks for RDS for SQL Server databases
Following are the SQL Server statements that RDS Proxy tracks:

• USE
• SET ANSI_NULLS
• SET ANSI_PADDING
• SET ANSI_WARNINGS
• SET ARITHABORT
• SET CONCAT_NULL_YIELDS_NULL
• SET CURSOR_CLOSE_ON_COMMIT
• SET DATEFIRST
• SET DATEFORMAT
• SET LANGUAGE
• SET LOCK_TIMEOUT
• SET NUMERIC_ROUNDABORT
• SET QUOTED_IDENTIFIER
• SET TEXTSIZE
• SET TRANSACTION ISOLATION LEVEL


What RDS Proxy tracks for RDS for MariaDB and RDS for MySQL
databases
Following are the MySQL and MariaDB statements that RDS Proxy tracks:

• DROP DATABASE
• DROP SCHEMA
• USE

Following are the MySQL and MariaDB variables that RDS Proxy tracks:

• AUTOCOMMIT
• AUTO_INCREMENT_INCREMENT
• CHARACTER SET (or CHAR SET)
• CHARACTER_SET_CLIENT
• CHARACTER_SET_DATABASE
• CHARACTER_SET_FILESYSTEM
• CHARACTER_SET_CONNECTION
• CHARACTER_SET_RESULTS
• CHARACTER_SET_SERVER
• COLLATION_CONNECTION
• COLLATION_DATABASE
• COLLATION_SERVER
• INTERACTIVE_TIMEOUT
• NAMES
• NET_WRITE_TIMEOUT
• QUERY_CACHE_TYPE
• SESSION_TRACK_SCHEMA
• SQL_MODE
• TIME_ZONE
• TRANSACTION_ISOLATION (or TX_ISOLATION)
• TRANSACTION_READ_ONLY (or TX_READ_ONLY)
• WAIT_TIMEOUT

Minimizing pinning
Performance tuning for RDS Proxy involves trying to maximize transaction-level connection reuse
(multiplexing) by minimizing pinning.

In some cases, RDS Proxy can't be sure that it's safe to reuse a database connection outside of the current
session. In these cases, it keeps the session on the same connection until the session ends. This fallback
behavior is called pinning.

You can minimize pinning by doing the following:

• Avoid unnecessary database requests that might cause pinning.


• Set variables and configuration settings consistently across all connections. That way, later sessions are
more likely to reuse connections that have those particular settings.


However, for PostgreSQL setting a variable leads to session pinning.


• For a MySQL engine family database, apply a session pinning filter to the proxy. You can exempt
certain kinds of operations from pinning the session if you know that doing so doesn't affect the
correct operation of your application.
• See how frequently pinning occurs by monitoring the Amazon CloudWatch metric
DatabaseConnectionsCurrentlySessionPinned. For information about this and other
CloudWatch metrics, see Monitoring RDS Proxy metrics with Amazon CloudWatch (p. 1239).
• If you use SET statements to perform identical initialization for each client connection, you can do so
while preserving transaction-level multiplexing. In this case, you move the statements that set up the
initial session state into the initialization query used by a proxy. This property is a string containing
one or more SQL statements, separated by semicolons.

For example, you can define an initialization query for a proxy that sets certain configuration
parameters. Then, RDS Proxy applies those settings whenever it sets up a new connection for that
proxy. You can remove the corresponding SET statements from your application code, so that they
don't interfere with transaction-level multiplexing.

For metrics about how often pinning occurs for a proxy, see Monitoring RDS Proxy metrics with
Amazon CloudWatch (p. 1239).
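
The following is a sketch of checking that pinning metric from the AWS CLI over a one-hour window,
assuming that RDS Proxy publishes the metric in the AWS/RDS namespace with a ProxyName dimension.
The proxy name and time range are placeholders.

aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name DatabaseConnectionsCurrentlySessionPinned \
    --dimensions Name=ProxyName,Value=my-proxy \
    --start-time 2023-01-01T00:00:00Z \
    --end-time 2023-01-01T01:00:00Z \
    --period 300 \
    --statistics Maximum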

Conditions that cause pinning for all engine families


The proxy pins the session to the current connection in the following situations where multiplexing
might cause unexpected behavior:

• Any statement with a text size greater than 16 KB causes the proxy to pin the session.
• Prepared statements cause the proxy to pin the session. This rule applies whether the prepared
statement uses SQL text or the binary protocol.

Conditions that cause pinning for RDS for Microsoft SQL Server
For RDS for SQL Server, the following interactions also cause pinning:

• Using multiple active result sets (MARS). For information about MARS, see the SQL Server
documentation.
• Using distributed transaction coordinator (DTC) communication.
• Creating temporary tables, transactions, cursors, or prepared statements.
• Using the following SET statements:
• SET ANSI_DEFAULTS
• SET ANSI_NULL_DFLT
• SET ARITHIGNORE
• SET DEADLOCK_PRIORITY
• SET FIPS_FLAGGER
• SET FMTONLY
• SET FORCEPLAN
• SET IDENTITY_INSERT
• SET NOCOUNT
• SET NOEXEC
• SET OFFSETS
• SET PARSEONLY


• SET QUERY_GOVERNOR_COST_LIMIT
• SET REMOTE_PROC_TRANSACTIONS
• SET ROWCOUNT
• SET SHOWPLAN_ALL, SHOWPLAN_TEXT, and SHOWPLAN_XML
• SET STATISTICS
• SET XACT_ABORT

Conditions that cause pinning for RDS for MariaDB and RDS for
MySQL
For MySQL and MariaDB, the following interactions also cause pinning:

• Explicit table lock statements LOCK TABLE, LOCK TABLES, or FLUSH TABLES WITH READ LOCK
cause the proxy to pin the session.
• Creating named locks by using GET_LOCK causes the proxy to pin the session.
• Setting a user variable or a system variable (with some exceptions) causes the proxy to pin the session.
If this situation reduces your connection reuse too much, you can choose for SET operations not to
cause pinning. For information about how to do so by setting the session pinning filters property, see
Creating an RDS Proxy (p. 1212) and Modifying an RDS Proxy (p. 1221).
• RDS Proxy does not pin connections when you use SET LOCAL.
• Creating a temporary table causes the proxy to pin the session. That way, the contents of the
temporary table are preserved throughout the session regardless of transaction boundaries.
• Calling the functions ROW_COUNT, FOUND_ROWS, and LAST_INSERT_ID sometimes causes pinning.

Calling stored procedures and stored functions doesn't cause pinning. RDS Proxy doesn't detect any
session state changes resulting from such calls. Therefore, make sure that your application doesn't
change session state inside stored routines and rely on that session state to persist across transactions.
For example, if a stored procedure creates a temporary table that is intended to persist across
transactions, that application currently isn't compatible with RDS Proxy.

If you have expert knowledge about your application behavior, you can skip the pinning behavior for
certain application statements. To do so, choose the Session pinning filters option when creating the
proxy. Currently, you can opt out of session pinning for setting session variables and configuration
settings.
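
As a sketch, the following command applies the EXCLUDE_VARIABLE_SETS filter to the default target
group of a proxy named my-proxy (a placeholder name):

aws rds modify-db-proxy-target-group \
    --db-proxy-name my-proxy \
    --target-group-name default \
    --connection-pool-config '{"SessionPinningFilters": ["EXCLUDE_VARIABLE_SETS"]}'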

Conditions that cause pinning for RDS for PostgreSQL


For PostgreSQL, the following interactions also cause pinning:

• Using SET commands


• Using the PostgreSQL extended query protocol such as by using JDBC default settings
• Creating temporary sequences, tables, or views
• Declaring cursors
• Discarding the session state
• Listening on a notification channel
• Loading a library module such as auto_explain
• Manipulating sequences using functions such as nextval and setval
• Interacting with locks using functions such as pg_advisory_lock and pg_try_advisory_lock
• Using prepared statements, setting parameters, or resetting a parameter to its default


Deleting an RDS Proxy


You can delete a proxy if you no longer need it. You might delete a proxy because the application that
was using it is no longer relevant. Or you might delete a proxy if you take the DB instance or cluster
associated with it out of service.

AWS Management Console


To delete a proxy

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Proxies.
3. Choose the proxy to delete from the list.
4. Choose Delete Proxy.

AWS CLI
To delete a DB proxy, use the AWS CLI command delete-db-proxy. To remove related associations, also
use the deregister-db-proxy-targets command.

aws rds delete-db-proxy --db-proxy-name proxy_name

aws rds deregister-db-proxy-targets


--db-proxy-name proxy_name
[--target-group-name target_group_name]
[--target-ids comma_separated_list] # or
[--db-instance-identifiers instance_id] # or
[--db-cluster-identifiers cluster_id]

RDS API
To delete a DB proxy, call the Amazon RDS API function DeleteDBProxy. To delete related items and
associations, you also call the functions DeleteDBProxyTargetGroup and DeregisterDBProxyTargets.

Working with Amazon RDS Proxy endpoints


Following, you can learn about endpoints for RDS Proxy and how to use them. By using endpoints, you
can take advantage of the following capabilities:

• You can use multiple endpoints with a proxy to monitor and troubleshoot connections from different
applications independently.
• You can use reader endpoints with Aurora DB clusters to improve read scalability and high availability
for your query-intensive applications.
• You can use a cross-VPC endpoint to allow access to databases in one VPC from resources such as
Amazon EC2 instances in a different VPC.

Topics
• Overview of proxy endpoints (p. 1233)
• Reader endpoints (p. 1233)
• Accessing Aurora and RDS databases across VPCs (p. 1233)
• Creating a proxy endpoint (p. 1234)


• Viewing proxy endpoints (p. 1236)


• Modifying a proxy endpoint (p. 1237)
• Deleting a proxy endpoint (p. 1238)
• Limitations for proxy endpoints (p. 1239)

Overview of proxy endpoints


Working with RDS Proxy endpoints involves the same kinds of procedures as with Aurora DB cluster
and reader endpoints and RDS instance endpoints. If you aren't familiar with RDS endpoints, find more
information in Connecting to a DB instance running the MySQL database engine and Connecting to a DB
instance running the PostgreSQL database engine.

By default, the endpoint that you connect to when you use RDS Proxy with an Aurora cluster has read/
write capability. As a result, this endpoint sends all requests to the writer instance of the cluster. All of
those connections count against the max_connections value for the writer instance. If your proxy is
associated with an Aurora DB cluster, you can create additional read/write or read-only endpoints for
that proxy.

You can use a read-only endpoint with your proxy for read-only queries. You do this the same way that
you use the reader endpoint for an Aurora provisioned cluster. Doing so helps you to take advantage
of the read scalability of an Aurora cluster with one or more reader DB instances. You can run more
simultaneous queries and make more simultaneous connections by using a read-only endpoint and
adding more reader DB instances to your Aurora cluster as needed.

For a proxy endpoint that you create, you can also associate the endpoint with a different virtual private
cloud (VPC) than the proxy itself uses. By doing so, you can connect to the proxy from a different VPC,
for example a VPC used by a different application within your organization.

For information about limits associated with proxy endpoints, see Limitations for proxy
endpoints (p. 1239).

In the RDS Proxy logs, each entry is prefixed with the name of the associated proxy endpoint. This name
can be the name you specified for a user-defined endpoint. Or it can be the special name default for
read/write requests using the default endpoint of a proxy.

Each proxy endpoint has its own set of CloudWatch metrics. You can monitor the metrics for all
endpoints of a proxy. You can also monitor metrics for a specific endpoint, or for all the read/write or
read-only endpoints of a proxy. For more information, see Monitoring RDS Proxy metrics with Amazon
CloudWatch (p. 1239).

A proxy endpoint uses the same authentication mechanism as its associated proxy. RDS Proxy
automatically sets up permissions and authorizations for the user-defined endpoint, consistent with the
properties of the associated proxy.

Reader endpoints
With RDS Proxy, you can create and use reader endpoints. However, these endpoints only work for
proxies associated with Aurora DB clusters. You might see references to reader endpoints in the AWS
Management Console. If you use the RDS CLI or API, you might see the TargetRole attribute with a
value of READ_ONLY. You can take advantage of these features by changing the target of a proxy from
an RDS DB instance to an Aurora DB cluster. To learn about reader endpoints, see Managing connections
with Amazon RDS Proxy in the Aurora User Guide.

Accessing Aurora and RDS databases across VPCs


By default, the components of your RDS and Aurora technology stack are all in the same Amazon VPC.
For example, suppose that an application running on an Amazon EC2 instance connects to an Amazon
RDS DB instance or an Aurora DB cluster. In this case, the application server and database must both be
within the same VPC.

With RDS Proxy, you can set up access to an Aurora cluster or RDS instance in one VPC from resources
such as EC2 instances in another VPC. For example, your organization might have multiple applications
that access the same database resources. Each application might be in its own VPC.

To enable cross-VPC access, you create a new endpoint for the proxy. If you aren't familiar with creating
proxy endpoints, see Working with Amazon RDS Proxy endpoints (p. 1232) for details. The proxy itself
resides in the same VPC as the Aurora DB cluster or RDS instance. However, the cross-VPC endpoint
resides in the other VPC, along with the other resources such as the EC2 instances. The cross-VPC
endpoint is associated with subnets and security groups from the same VPC as the EC2 and other
resources. These associations let you connect to the endpoint from the applications that otherwise can't
access the database due to the VPC restrictions.

The following steps explain how to create and access a cross-VPC endpoint through RDS Proxy:

1. Create two VPCs, or choose two VPCs that you already use for Aurora and RDS work. Each VPC should
have its own associated network resources such as an Internet gateway, route tables, subnets, and
security groups. If you only have one VPC, you can consult Getting started with Amazon RDS (p. 180)
for the steps to set up another VPC to use RDS successfully. You can also examine your existing VPC in
the Amazon EC2 console to see what kinds of resources to connect together.
2. Create a DB proxy associated with the Aurora DB cluster or RDS instance that you want to connect to.
Follow the procedure in Creating an RDS Proxy (p. 1212).
3. On the Details page for your proxy in the RDS console, under the Proxy endpoints section, choose
Create endpoint. Follow the procedure in Creating a proxy endpoint (p. 1234).
4. Choose whether to make the cross-VPC endpoint read/write or read-only.
5. Instead of accepting the default of the same VPC as the Aurora DB cluster or RDS instance, choose a
different VPC. This VPC must be in the same AWS Region as the VPC where the proxy resides.
6. Now instead of accepting the defaults for subnets and security groups from the same VPC as the
Aurora DB cluster or RDS instance, make new selections. Make these based on the subnets and
security groups from the VPC that you chose.
7. You don't need to change any of the settings for the Secrets Manager secrets. The same credentials
work for all endpoints for your proxy, regardless of which VPC each endpoint is in.
8. Wait for the new endpoint to reach the Available state.
9. Make a note of the full endpoint name. This is the value ending in
Region_name.rds.amazonaws.com that you supply as part of the connection string for your
database application.
10. Access the new endpoint from a resource in the same VPC as the endpoint. A simple way to test this
process is to create a new EC2 instance in this VPC. Then you can log into the EC2 instance and run the
mysql or psql commands to connect by using the endpoint value in your connection string.

Creating a proxy endpoint


Console
To create a proxy endpoint

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Proxies.
3. Click the name of the proxy that you want to create a new endpoint for.


The details page for that proxy appears.


4. In the Proxy endpoints section, choose Create proxy endpoint.

The Create proxy endpoint window appears.


5. For Proxy endpoint name, enter a descriptive name of your choice.
6. For Target role, choose whether to make the endpoint read/write or read-only.

Connections that use a read/write endpoint can perform any kind of operation: data definition
language (DDL) statements, data manipulation language (DML) statements, and queries. These
endpoints always connect to the primary instance of the Aurora cluster. You can use read/write
endpoints for general database operations when you only use a single endpoint in your application.
You can also use read/write endpoints for administrative operations, online transaction processing
(OLTP) applications, and extract-transform-load (ETL) jobs.

Connections that use a read-only endpoint can only perform queries. When there are multiple
reader instances in the Aurora cluster, RDS Proxy can use a different reader instance for each
connection to the endpoint. That way, a query-intensive application can take advantage of Aurora's
clustering capability. You can add more query capacity to the cluster by adding more reader DB
instances. These read-only connections don't impose any overhead on the primary instance of the
cluster. That way, your reporting and analysis queries don't slow down the write operations of your
OLTP applications.
7. For Virtual Private Cloud (VPC), choose the default to access the endpoint from the same EC2
instances or other resources where you normally access the proxy or its associated database. To set
up cross-VPC access for this proxy, choose a VPC other than the default. For more information about
cross-VPC access, see Accessing Aurora and RDS databases across VPCs (p. 1233).
8. For Subnets, RDS Proxy fills in the same subnets as the associated proxy by default. To restrict
access to the endpoint so only a portion of the VPC's address range can connect to it, remove one or
more subnets.
9. For VPC security group, you can choose an existing security group or create a new one. RDS Proxy
fills in the same security group or groups as the associated proxy by default. If the inbound and
outbound rules for the proxy are appropriate for this endpoint, you can leave the default choice.

If you choose to create a new security group, specify a name for the security group on this page.
Then edit the security group settings from the EC2 console afterward.
10. Choose Create proxy endpoint.

AWS CLI
To create a proxy endpoint, use the AWS CLI create-db-proxy-endpoint command.

Include the following required parameters:

• --db-proxy-name value
• --db-proxy-endpoint-name value
• --vpc-subnet-ids list_of_ids. Separate the subnet IDs with spaces. You don't specify the ID of
the VPC itself.

You can also include the following optional parameters:

• --target-role { READ_WRITE | READ_ONLY }. This parameter defaults to READ_WRITE. The
READ_ONLY value only has an effect on Aurora provisioned clusters that contain one or more reader
DB instances. When the proxy is associated with an RDS instance or with an Aurora cluster that only
contains a writer DB instance, you can't specify READ_ONLY.


• --vpc-security-group-ids value. Separate the security group IDs with spaces. If you omit this
parameter, RDS Proxy uses the default security group for the VPC. RDS Proxy determines the VPC
based on the subnet IDs that you specify for the --vpc-subnet-ids parameter.

Example

The following example creates a proxy endpoint named my-endpoint.

For Linux, macOS, or Unix:

aws rds create-db-proxy-endpoint \
    --db-proxy-name my-proxy \
    --db-proxy-endpoint-name my-endpoint \
    --vpc-subnet-ids subnet_id subnet_id subnet_id ... \
    --target-role READ_ONLY \
    --vpc-security-group-ids security_group_id

For Windows:

aws rds create-db-proxy-endpoint ^
    --db-proxy-name my-proxy ^
    --db-proxy-endpoint-name my-endpoint ^
    --vpc-subnet-ids subnet_id_1 subnet_id_2 subnet_id_3 ... ^
    --target-role READ_ONLY ^
    --vpc-security-group-ids security_group_id

RDS API
To create a proxy endpoint, use the RDS API CreateDBProxyEndpoint action.

Viewing proxy endpoints


Console
To view the details for a proxy endpoint

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Proxies.
3. In the list, choose the proxy whose endpoint you want to view. Click the proxy name to view its
details page.
4. In the Proxy endpoints section, choose the endpoint that you want to view. Click its name to view
the details page.
5. Examine the parameters whose values you're interested in. You can check properties such as the
following:

• Whether the endpoint is read/write or read-only.


• The endpoint address that you use in a database connection string.
• The VPC, subnets, and security groups associated with the endpoint.

AWS CLI
To view one or more DB proxy endpoints, use the AWS CLI describe-db-proxy-endpoints command.


You can include the following optional parameters:

• --db-proxy-endpoint-name
• --db-proxy-name

The following example describes the my-endpoint proxy endpoint.

Example
For Linux, macOS, or Unix:

aws rds describe-db-proxy-endpoints \
    --db-proxy-endpoint-name my-endpoint

For Windows:

aws rds describe-db-proxy-endpoints ^
    --db-proxy-endpoint-name my-endpoint
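
The output includes the address of each endpoint. For example, the following sketch extracts just the address of the my-endpoint proxy endpoint with a --query filter and then uses it to connect. The Endpoint field name, the port, and the admin_user account are assumptions for illustration; adjust them for your own proxy and database engine.

ENDPOINT_ADDRESS=$(aws rds describe-db-proxy-endpoints \
    --db-proxy-endpoint-name my-endpoint \
    --query 'DBProxyEndpoints[0].Endpoint' --output text)

mysql -h "$ENDPOINT_ADDRESS" -P 3306 -u admin_user -p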

RDS API
To describe one or more proxy endpoints, use the RDS API DescribeDBProxyEndpoints operation.

Modifying a proxy endpoint


Console
To modify one or more proxy endpoints

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Proxies.
3. In the list, choose the proxy whose endpoint you want to modify. Click the proxy name to view its
details page.
4. In the Proxy endpoints section, choose the endpoint that you want to modify. You can select it in
the list, or click its name to view the details page.
5. On the proxy details page, under the Proxy endpoints section, choose Edit. Or on the proxy
endpoint details page, for Actions, choose Edit.
6. Change the values of the parameters that you want to modify.
7. Choose Save changes.

AWS CLI
To modify a DB proxy endpoint, use the AWS CLI modify-db-proxy-endpoint command with the
following required parameters:

• --db-proxy-endpoint-name

Specify changes to the endpoint properties by using one or more of the following parameters:

• --new-db-proxy-endpoint-name
• --vpc-security-group-ids. Separate the security group IDs with spaces.


The following example renames the my-endpoint proxy endpoint to new-endpoint-name.

Example
For Linux, macOS, or Unix:

aws rds modify-db-proxy-endpoint \
    --db-proxy-endpoint-name my-endpoint \
    --new-db-proxy-endpoint-name new-endpoint-name

For Windows:

aws rds modify-db-proxy-endpoint ^
    --db-proxy-endpoint-name my-endpoint ^
    --new-db-proxy-endpoint-name new-endpoint-name

RDS API
To modify a proxy endpoint, use the RDS API ModifyDBProxyEndpoint operation.

Deleting a proxy endpoint


You can delete an endpoint for your proxy using the console as described following.
Note
You can't delete the default endpoint that RDS Proxy automatically creates for each proxy.
When you delete a proxy, RDS Proxy automatically deletes all the associated endpoints.

Console
To delete a proxy endpoint using the AWS Management Console

1. In the navigation pane, choose Proxies.


2. In the list, choose the proxy whose endpoint you want to delete. Click the proxy name to view its
details page.
3. In the Proxy endpoints section, choose the endpoint that you want to delete. You can select one or
more endpoints in the list, or click the name of a single endpoint to view the details page.
4. On the proxy details page, under the Proxy endpoints section, choose Delete. Or on the proxy
endpoint details page, for Actions, choose Delete.

AWS CLI
To delete a proxy endpoint, run the delete-db-proxy-endpoint command with the following required
parameters:

• --db-proxy-endpoint-name

The following command deletes the proxy endpoint named my-endpoint.

For Linux, macOS, or Unix:

aws rds delete-db-proxy-endpoint \
    --db-proxy-endpoint-name my-endpoint

For Windows:


aws rds delete-db-proxy-endpoint ^
    --db-proxy-endpoint-name my-endpoint

RDS API
To delete a proxy endpoint with the RDS API, run the DeleteDBProxyEndpoint operation. Specify the
name of the proxy endpoint for the DBProxyEndpointName parameter.

Limitations for proxy endpoints


Each proxy has a default endpoint that you can modify but not create or delete.

The maximum number of user-defined endpoints for a proxy is 20. Thus, a proxy can have up to 21
endpoints: the default endpoint, plus 20 that you create.

When you associate additional endpoints with a proxy, RDS Proxy automatically determines which DB
instances in your cluster to use for each endpoint. You can't choose specific instances the way that you
can with Aurora custom endpoints.

Reader endpoints aren't available for Aurora multi-writer clusters.

Monitoring RDS Proxy metrics with Amazon CloudWatch

You can monitor RDS Proxy by using Amazon CloudWatch. CloudWatch collects and processes raw data
from the proxies into readable, near-real-time metrics. To find these metrics in the CloudWatch console,
choose Metrics, then choose RDS, and choose Per-Proxy Metrics. For more information, see Using
Amazon CloudWatch metrics in the Amazon CloudWatch User Guide.
Note
RDS publishes these metrics for each underlying Amazon EC2 instance associated with a proxy.
A single proxy might be served by more than one EC2 instance. Use CloudWatch statistics to
aggregate the values for a proxy across all the associated instances.
Some of these metrics might not be visible until after the first successful connection by a proxy.

In the RDS Proxy logs, each entry is prefixed with the name of the associated proxy endpoint. This name
can be the name you specified for a user-defined endpoint, or the special name default for read/write
requests using the default endpoint of a proxy.

All RDS Proxy metrics are in the group proxy.

Each proxy endpoint has its own CloudWatch metrics. You can monitor the usage of each proxy endpoint
independently. For more information about proxy endpoints, see Working with Amazon RDS Proxy
endpoints (p. 1232).

You can aggregate the values for each metric using one of the following dimension sets. For example,
by using the ProxyName dimension set, you can analyze all the traffic for a particular proxy. By using
the other dimension sets, you can split the metrics in different ways. You can split the metrics based on
the different endpoints or target databases of each proxy, or the read/write and read-only traffic to each
database.

• Dimension set 1: ProxyName


• Dimension set 2: ProxyName, EndpointName


• Dimension set 3: ProxyName, TargetGroup, Target


• Dimension set 4: ProxyName, TargetGroup, TargetRole

• AvailabilityPercentage
  The percentage of time for which the target group was available in the role indicated by the dimension. This metric is reported every minute. The most useful statistic for this metric is Average.
  Valid period: 1 minute. CloudWatch dimension sets: Dimension set 4 (p. 1240).

• ClientConnections
  The current number of client connections. This metric is reported every minute. The most useful statistic for this metric is Sum.
  Valid period: 1 minute. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 2 (p. 1239).

• ClientConnectionsClosed
  The number of client connections closed. The most useful statistic for this metric is Sum.
  Valid period: 1 minute and above. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 2 (p. 1239).

• ClientConnectionsNoTLS
  The current number of client connections without Transport Layer Security (TLS). This metric is reported every minute. The most useful statistic for this metric is Sum.
  Valid period: 1 minute and above. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 2 (p. 1239).

• ClientConnectionsReceived
  The number of client connection requests received. The most useful statistic for this metric is Sum.
  Valid period: 1 minute and above. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 2 (p. 1239).

• ClientConnectionsSetupFailedAuth
  The number of client connection attempts that failed due to misconfigured authentication or TLS. The most useful statistic for this metric is Sum.
  Valid period: 1 minute and above. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 2 (p. 1239).

• ClientConnectionsSetupSucceeded
  The number of client connections successfully established with any authentication mechanism with or without TLS. The most useful statistic for this metric is Sum.
  Valid period: 1 minute and above. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 2 (p. 1239).

• ClientConnectionsTLS
  The current number of client connections with TLS. This metric is reported every minute. The most useful statistic for this metric is Sum.
  Valid period: 1 minute and above. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 2 (p. 1239).

• DatabaseConnectionRequests
  The number of requests to create a database connection. The most useful statistic for this metric is Sum.
  Valid period: 1 minute and above. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 3 (p. 1240), Dimension set 4 (p. 1240).

• DatabaseConnectionRequestsWithTLS
  The number of requests to create a database connection with TLS. The most useful statistic for this metric is Sum.
  Valid period: 1 minute and above. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 3 (p. 1240), Dimension set 4 (p. 1240).

• DatabaseConnections
  The current number of database connections. This metric is reported every minute. The most useful statistic for this metric is Sum.
  Valid period: 1 minute. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 3 (p. 1240), Dimension set 4 (p. 1240).

• DatabaseConnectionsBorrowLatency
  The time in microseconds that it takes for the proxy being monitored to get a database connection. The most useful statistic for this metric is Average.
  Valid period: 1 minute and above. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 2 (p. 1239).

• DatabaseConnectionsCurrentlyBorrowed
  The current number of database connections in the borrow state. This metric is reported every minute. The most useful statistic for this metric is Sum.
  Valid period: 1 minute. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 3 (p. 1240), Dimension set 4 (p. 1240).

• DatabaseConnectionsCurrentlyInTransaction
  The current number of database connections in a transaction. This metric is reported every minute. The most useful statistic for this metric is Sum.
  Valid period: 1 minute. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 3 (p. 1240), Dimension set 4 (p. 1240).

• DatabaseConnectionsCurrentlySessionPinned
  The current number of database connections currently pinned because of operations in client requests that change session state. This metric is reported every minute. The most useful statistic for this metric is Sum.
  Valid period: 1 minute. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 3 (p. 1240), Dimension set 4 (p. 1240).

• DatabaseConnectionsSetupFailed
  The number of database connection requests that failed. The most useful statistic for this metric is Sum.
  Valid period: 1 minute and above. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 3 (p. 1240), Dimension set 4 (p. 1240).

• DatabaseConnectionsSetupSucceeded
  The number of database connections successfully established with or without TLS. The most useful statistic for this metric is Sum.
  Valid period: 1 minute and above. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 3 (p. 1240), Dimension set 4 (p. 1240).

• DatabaseConnectionsWithTLS
  The current number of database connections with TLS. This metric is reported every minute. The most useful statistic for this metric is Sum.
  Valid period: 1 minute. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 3 (p. 1240), Dimension set 4 (p. 1240).

• MaxDatabaseConnectionsAllowed
  The maximum number of database connections allowed. This metric is reported every minute. The most useful statistic for this metric is Sum.
  Valid period: 1 minute. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 3 (p. 1240), Dimension set 4 (p. 1240).

• QueryDatabaseResponseLatency
  The time in microseconds that the database took to respond to the query. The most useful statistic for this metric is Average.
  Valid period: 1 minute and above. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 2 (p. 1239), Dimension set 3 (p. 1240), Dimension set 4 (p. 1240).

• QueryRequests
  The number of queries received. A query including multiple statements is counted as one query. The most useful statistic for this metric is Sum.
  Valid period: 1 minute and above. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 2 (p. 1239).

• QueryRequestsNoTLS
  The number of queries received from non-TLS connections. A query including multiple statements is counted as one query. The most useful statistic for this metric is Sum.
  Valid period: 1 minute and above. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 2 (p. 1239).

• QueryRequestsTLS
  The number of queries received from TLS connections. A query including multiple statements is counted as one query. The most useful statistic for this metric is Sum.
  Valid period: 1 minute and above. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 2 (p. 1239).

• QueryResponseLatency
  The time in microseconds between getting a query request and the proxy responding to it. The most useful statistic for this metric is Average.
  Valid period: 1 minute and above. CloudWatch dimension sets: Dimension set 1 (p. 1239), Dimension set 2 (p. 1239).
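
For example, the following AWS CLI sketch retrieves the DatabaseConnections metric for a proxy named my-proxy over a one-hour window, using the ProxyName dimension from Dimension set 1. The proxy name and time range are placeholders, and the command assumes that the per-proxy metrics are published under the AWS/RDS namespace shown in the console path earlier.

aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name DatabaseConnections \
    --dimensions Name=ProxyName,Value=my-proxy \
    --statistics Sum \
    --period 60 \
    --start-time 2023-06-01T00:00:00Z \
    --end-time 2023-06-01T01:00:00Z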

You can find logs of RDS Proxy activity under CloudWatch in the AWS Management Console. Each proxy
has an entry in the Log groups page.
Important
These logs are intended for human consumption for troubleshooting purposes and not for
programmatic access. The format and content of the logs is subject to change.
In particular, older logs don't contain any prefixes indicating the endpoint for each request. In
newer logs, each entry is prefixed with the name of the associated proxy endpoint. This name
can be the name that you specified for a user-defined endpoint, or the special name default
for requests using the default endpoint of a proxy.


Working with RDS Proxy events


An event indicates a change in an environment. This can be an AWS environment or a service or
application from a software as a service (SaaS) partner. Or it can be one of your own custom applications
or services. For example, Amazon RDS generates an event when you create or modify an RDS Proxy.
Amazon RDS delivers events to CloudWatch Events and Amazon EventBridge in near-real time.
Following, you can find a list of RDS Proxy events that you can subscribe to and an example of an RDS
Proxy event.

For more information about working with events, see the following:

• For instructions on how to view events by using the AWS Management Console, AWS CLI, or RDS API,
see Viewing Amazon RDS events (p. 852).
• To learn how to configure Amazon RDS to send events to EventBridge, see Creating a rule that triggers
on an Amazon RDS event (p. 870).

RDS Proxy events


The following list shows the event category, event ID, and message for each event when an RDS Proxy is the source type.

• Category: configuration change. RDS event ID: RDS-EVENT-0204. Message: RDS modified DB proxy name.

• Category: configuration change. RDS event ID: RDS-EVENT-0207. Message: RDS modified the endpoint of the DB proxy name.

• Category: configuration change. RDS event ID: RDS-EVENT-0213. Message: RDS detected the addition of the DB instance and automatically added it to the target group of the DB proxy name.

• Category: configuration change. RDS event ID: RDS-EVENT-0213. Message: RDS detected creation of DB instance name and automatically added it to target group name of DB proxy name.

• Category: configuration change. RDS event ID: RDS-EVENT-0214. Message: RDS detected deletion of DB instance name and automatically removed it from target group name of DB proxy name.

• Category: configuration change. RDS event ID: RDS-EVENT-0215. Message: RDS detected deletion of DB cluster name and automatically removed it from target group name of DB proxy name.

• Category: creation. RDS event ID: RDS-EVENT-0203. Message: RDS created DB proxy name.

• Category: creation. RDS event ID: RDS-EVENT-0206. Message: RDS created endpoint name for DB proxy name.

• Category: deletion. RDS event ID: RDS-EVENT-0205. Message: RDS deleted DB proxy name.

• Category: deletion. RDS event ID: RDS-EVENT-0208. Message: RDS deleted endpoint name for DB proxy name.

• Category: failure. RDS event ID: RDS-EVENT-0243. Message: RDS failed to provision capacity for proxy name because there aren't enough IP addresses available in your subnets: name. To fix the issue, make sure that your subnets have the minimum number of unused IP addresses as recommended in the RDS Proxy documentation. Notes: To determine the recommended number for your instance class, see Planning for IP address capacity (p. 1208).

• Category: failure. RDS event ID: RDS-EVENT-0275. Message: RDS throttled some connections to DB proxy (RDS Proxy).

The following is an example of an RDS Proxy event in JSON format. The event shows that RDS modified
the endpoint named my-endpoint of the RDS Proxy named my-rds-proxy. The event ID is RDS-
EVENT-0207.

{
"version": "0",
"id": "68f6e973-1a0c-d37b-f2f2-94a7f62ffd4e",
"detail-type": "RDS DB Proxy Event",
"source": "aws.rds",
"account": "123456789012",
"time": "2018-09-27T22:36:43Z",
"region": "us-east-1",
"resources": [
"arn:aws:rds:us-east-1:123456789012:db-proxy:my-rds-proxy"
],
"detail": {
"EventCategories": [
"configuration change"
],
"SourceType": "DB_PROXY",
"SourceArn": "arn:aws:rds:us-east-1:123456789012:db-proxy:my-rds-proxy",
"Date": "2018-09-27T22:36:43.292Z",
"Message": "RDS modified endpoint my-endpoint of DB Proxy my-rds-proxy.",
"SourceIdentifier": "my-endpoint",
"EventID": "RDS-EVENT-0207"
}
}
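
To act on events like this one, you can create a rule in EventBridge that matches RDS Proxy events. The following AWS CLI sketch matches all events with the detail-type shown in the example above; the rule name is a placeholder, and you still need to add a target with aws events put-targets before the rule does anything.

aws events put-rule \
    --name rds-proxy-events-rule \
    --event-pattern '{"source":["aws.rds"],"detail-type":["RDS DB Proxy Event"]}'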

RDS Proxy command-line examples


To see how combinations of connection commands and SQL statements interact with RDS Proxy, look at
the following examples.


Examples

• Preserving Connections to a MySQL Database Across a Failover


• Adjusting the max_connections Setting for an Aurora DB Cluster

Example Preserving connections to a MySQL database across a failover

This MySQL example demonstrates how open connections continue working during a failover. An
example is when you reboot a database or it becomes unavailable due to a problem. This example
uses a proxy named the-proxy and an Aurora DB cluster with DB instances instance-8898 and
instance-9814. When you run the failover-db-cluster command from the Linux command line,
the writer instance that the proxy is connected to changes to a different DB instance. You can see that
the DB instance associated with the proxy changes while the connection remains open.

$ mysql -h the-proxy.proxy-demo.us-east-1.rds.amazonaws.com -u admin_user -p


Enter password:
...

mysql> select @@aurora_server_id;


+--------------------+
| @@aurora_server_id |
+--------------------+
| instance-9814 |
+--------------------+
1 row in set (0.01 sec)

mysql>
[1]+ Stopped mysql -h the-proxy.proxy-demo.us-east-1.rds.amazonaws.com -
u admin_user -p
$ # Initially, instance-9814 is the writer.
$ aws rds failover-db-cluster --db-cluster-identifier cluster-56-2019-11-14-1399
JSON output
$ # After a short time, the console shows that the failover operation is complete.
$ # Now instance-8898 is the writer.
$ fg
mysql -h the-proxy.proxy-demo.us-east-1.rds.amazonaws.com -u admin_user -p

mysql> select @@aurora_server_id;


+--------------------+
| @@aurora_server_id |
+--------------------+
| instance-8898 |
+--------------------+
1 row in set (0.01 sec)

mysql>
[1]+ Stopped mysql -h the-proxy.proxy-demo.us-east-1.rds.amazonaws.com -
u admin_user -p
$ aws rds failover-db-cluster --db-cluster-identifier cluster-56-2019-11-14-1399
JSON output
$ # After a short time, the console shows that the failover operation is complete.
$ # Now instance-9814 is the writer again.
$ fg
mysql -h the-proxy.proxy-demo.us-east-1.rds.amazonaws.com -u admin_user -p

mysql> select @@aurora_server_id;


+--------------------+
| @@aurora_server_id |
+--------------------+
| instance-9814 |
+--------------------+
1 row in set (0.01 sec)


+---------------+---------------+
| Variable_name | Value |
+---------------+---------------+
| hostname | ip-10-1-3-178 |
+---------------+---------------+
1 row in set (0.02 sec)

Example Adjusting the max_connections setting for an Aurora DB cluster

This example demonstrates how you can adjust the max_connections setting for an Aurora MySQL
DB cluster. To do so, you create your own DB cluster parameter group based on the default parameter
settings for clusters that are compatible with MySQL 5.7. You specify a value for the max_connections
setting, overriding the formula that sets the default value. You associate the DB cluster parameter group
with your DB cluster.

export REGION=us-east-1
export CLUSTER_PARAM_GROUP=rds-proxy-mysql-57-max-connections-demo
export CLUSTER_NAME=rds-proxy-mysql-57

aws rds create-db-cluster-parameter-group --region $REGION \
    --db-parameter-group-family aurora-mysql5.7 \
    --db-cluster-parameter-group-name $CLUSTER_PARAM_GROUP \
    --description "Aurora MySQL 5.7 cluster parameter group for RDS Proxy demo."

aws rds modify-db-cluster --region $REGION \
    --db-cluster-identifier $CLUSTER_NAME \
    --db-cluster-parameter-group-name $CLUSTER_PARAM_GROUP

echo "New cluster param group is assigned to cluster:"


aws rds describe-db-clusters --region $REGION \
--db-cluster-identifier $CLUSTER_NAME \
--query '*[*].{DBClusterParameterGroup:DBClusterParameterGroup}'

echo "Current value for max_connections:"


aws rds describe-db-cluster-parameters --region $REGION \
--db-cluster-parameter-group-name $CLUSTER_PARAM_GROUP \
--query '*[*].{ParameterName:ParameterName,ParameterValue:ParameterValue}' \
--output text | grep "^max_connections"

echo -n "Enter number for max_connections setting: "


read answer

aws rds modify-db-cluster-parameter-group --region $REGION \
    --db-cluster-parameter-group-name $CLUSTER_PARAM_GROUP \
    --parameters "ParameterName=max_connections,ParameterValue=$answer,ApplyMethod=immediate"

echo "Updated value for max_connections:"


aws rds describe-db-cluster-parameters --region $REGION \
--db-cluster-parameter-group-name $CLUSTER_PARAM_GROUP \
--query '*[*].{ParameterName:ParameterName,ParameterValue:ParameterValue}' \
--output text | grep "^max_connections"

Troubleshooting for RDS Proxy


Following, you can find troubleshooting ideas for some common RDS Proxy issues and information on
CloudWatch logs for RDS Proxy.

In the RDS Proxy logs, each entry is prefixed with the name of the associated proxy endpoint. This
name can be the name that you specified for a user-defined endpoint. Or it can be the special name


default for read/write requests using the default endpoint of a proxy. For more information about
proxy endpoints, see Working with Amazon RDS Proxy endpoints (p. 1232).

Topics
• Verifying connectivity for a proxy (p. 1248)
• Common issues and solutions (p. 1249)

Verifying connectivity for a proxy


You can use the following commands to verify that all components of the connection mechanism can
communicate with the other components.

Examine the proxy itself using the describe-db-proxies command. Also examine the associated target
group using the describe-db-proxy-target-groups command. Check that the details of the targets match
the RDS DB instance or Aurora DB cluster that you intend to associate with the proxy. Use commands
such as the following.

aws rds describe-db-proxies --db-proxy-name $DB_PROXY_NAME


aws rds describe-db-proxy-target-groups --db-proxy-name $DB_PROXY_NAME

To confirm that the proxy can connect to the underlying database, examine the targets specified in the
target groups using the describe-db-proxy-targets command. Use a command such as the following.

aws rds describe-db-proxy-targets --db-proxy-name $DB_PROXY_NAME

The output of the describe-db-proxy-targets command includes a TargetHealth field. You can
examine the fields State, Reason, and Description inside TargetHealth to check if the proxy can
communicate with the underlying DB instance.

• A State value of AVAILABLE indicates that the proxy can connect to the DB instance.
• A State value of UNAVAILABLE indicates a temporary or permanent connection problem. In
this case, examine the Reason and Description fields. For example, if Reason has a value of
PENDING_PROXY_CAPACITY, try connecting again after the proxy finishes its scaling operation. If
Reason has a value of UNREACHABLE, CONNECTION_FAILED, or AUTH_FAILURE, use the explanation
from the Description field to help you diagnose the issue.
• The State field might have a value of REGISTERING for a brief time before changing to AVAILABLE
or UNAVAILABLE.
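
For example, the following sketch reports only these health fields for each target. It assumes the same $DB_PROXY_NAME variable as the earlier commands and the Targets and TargetHealth structure described above.

aws rds describe-db-proxy-targets --db-proxy-name $DB_PROXY_NAME \
    --query 'Targets[*].{State:TargetHealth.State,Reason:TargetHealth.Reason,Description:TargetHealth.Description}' \
    --output table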

If the following Netcat command (nc) is successful, you can access the proxy endpoint from the EC2
instance or other system where you're logged in. This command reports failure if you're not in the same
VPC as the proxy and the associated database. You might be able to log directly in to the database
without being in the same VPC. However, you can't log into the proxy unless you're in the same VPC.

nc -zv MySQL_proxy_endpoint 3306

nc -zv PostgreSQL_proxy_endpoint 5432

You can use the following commands to make sure that your EC2 instance has the required properties. In
particular, the VPC for the EC2 instance must be the same as the VPC for the RDS DB instance or Aurora
DB cluster that the proxy connects to.

aws ec2 describe-instances --instance-ids your_ec2_instance_id

Examine the Secrets Manager secrets used for the proxy.


aws secretsmanager list-secrets


aws secretsmanager get-secret-value --secret-id your_secret_id

Make sure that the SecretString field displayed by get-secret-value is encoded as a JSON
string that includes username and password fields. The following example shows the format of the
SecretString field.

{
"ARN": "some_arn",
"Name": "some_name",
"VersionId": "some_version_id",
"SecretString": '{"username":"some_username","password":"some_password"}',
"VersionStages": [ "some_stage" ],
"CreatedDate": some_timestamp
}
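
For example, the following sketch prints only the SecretString value so that you can confirm that it contains the username and password fields. The secret ID is a placeholder.

aws secretsmanager get-secret-value --secret-id your_secret_id \
    --query SecretString --output text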

Common issues and solutions


For possible causes and solutions to some common problems that you might encounter using RDS Proxy,
see the following.

After running aws rds describe-db-proxy-targets, if the TargetHealth description states
Proxy does not have any registered credentials, verify the following:

• There are credentials registered for the user to access the proxy.
• The IAM role to access the proxy secret from Secrets Manager is valid.
• The DB proxy is using an authentication method.

You might encounter the following RDS events while creating or connecting to a DB proxy.

• Category: failure. RDS event ID: RDS-EVENT-0243. Description: RDS couldn't provision capacity for the proxy because there aren't enough IP addresses available in your subnets. To fix the issue, make sure that your subnets have the minimum number of unused IP addresses. To determine the recommended number for your instance class, see Planning for IP address capacity (p. 1208).

• Category: failure. RDS event ID: RDS-EVENT-0275. Description: RDS throttled some connections to DB proxy (RDS Proxy).

You might encounter the following issues while creating a new proxy or connecting to a proxy.

Error Causes or workarounds

403: The security Select an existing IAM role instead of choosing to create a new one.
token included

1249
Amazon Relational Database Service User Guide
Common issues and solutions

Error Causes or workarounds


in the request is
invalid

You might encounter the following issues while connecting to a MySQL proxy.

• Error: ERROR 1040 (HY000): Connections rate limit exceeded (limit_value)
  Causes or workarounds: The rate of connection requests from the client to the proxy has exceeded the limit.

• Error: ERROR 1040 (HY000): IAM authentication rate limit exceeded
  Causes or workarounds: The number of simultaneous requests with IAM authentication from the client to the proxy has exceeded the limit.

• Error: ERROR 1040 (HY000): Number simultaneous connections exceeded (limit_value)
  Causes or workarounds: The number of simultaneous connection requests from the client to the proxy exceeded the limit.

• Error: ERROR 1045 (28000): Access denied for user 'DB_USER'@'%' (using password: YES)
  Causes or workarounds: Some possible reasons include the following:
  • The Secrets Manager secret used by the proxy doesn't match the user name and password of an existing database user. Either update the credentials in the Secrets Manager secret, or make sure the database user exists and has the same password as in the secret.

• Error: ERROR 1105 (HY000): Unknown error
  Causes or workarounds: An unknown error occurred.

• Error: ERROR 1231 (42000): Variable ''character_set_client'' can't be set to the value of value
  Causes or workarounds: The value set for the character_set_client parameter is not valid. For example, the value ucs2 is not valid because it can crash the MySQL server.

• Error: ERROR 3159 (HY000): This RDS Proxy requires TLS connections.
  Causes or workarounds: You enabled the setting Require Transport Layer Security in the proxy but your connection included the parameter ssl-mode=DISABLED in the MySQL client. Do either of the following:
  • Disable the setting Require Transport Layer Security for the proxy.
  • Connect to the database using the minimum setting of ssl-mode=REQUIRED in the MySQL client.

• Error: ERROR 2026 (HY000): SSL connection error: Internal Server Error
  Causes or workarounds: The TLS handshake to the proxy failed. Some possible reasons include the following:
  • SSL is required but the server doesn't support it.
  • An internal server error occurred.
  • A bad handshake occurred.

• Error: ERROR 9501 (HY000): Timed-out waiting to acquire database connection
  Causes or workarounds: The proxy timed out waiting to acquire a database connection. Some possible reasons include the following:
  • The proxy is unable to establish a database connection because the maximum connections have been reached.
  • The proxy is unable to establish a database connection because the database is unavailable.
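
For example, to satisfy a proxy that has the Require Transport Layer Security setting turned on, you can turn on TLS explicitly in the client. The following sketch assumes a MySQL 5.7 or later client that supports the --ssl-mode option; the endpoint and user name are placeholders.

mysql -h MySQL_proxy_endpoint -P 3306 -u admin_user -p --ssl-mode=REQUIRED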

You might encounter the following issues while connecting to a PostgreSQL proxy.

• Error: IAM authentication is allowed only with SSL connections.
  Cause: The user tried to connect to the database using IAM authentication with the setting sslmode=disable in the PostgreSQL client.
  Solution: The user needs to connect to the database using the minimum setting of sslmode=require in the PostgreSQL client. For more information, see the PostgreSQL SSL support documentation.

• Error: This RDS Proxy requires TLS connections.
  Cause: The user enabled the option Require Transport Layer Security but tried to connect with sslmode=disable in the PostgreSQL client.
  Solution: To fix this error, do one of the following:
  • Disable the proxy's Require Transport Layer Security option.
  • Connect to the database using the minimum setting of sslmode=allow in the PostgreSQL client.

• Error: IAM authentication failed for user user_name. Check the IAM token for this user and try again.
  Cause: This error might be due to the following reasons:
  • The client supplied the incorrect IAM user name.
  • The client supplied an incorrect IAM authorization token for the user.
  • The client is using an IAM policy that does not have the necessary permissions.
  • The client supplied an expired IAM authorization token for the user.
  Solution: To fix this error, do the following:
  1. Confirm that the provided IAM user exists.
  2. Confirm that the IAM authorization token belongs to the provided IAM user.
  3. Confirm that the IAM policy has adequate permissions for RDS.
  4. Check the validity of the IAM authorization token used.

• Error: This RDS proxy has no credentials for the role role_name. Check the credentials for this role and try again.
  Cause: There is no Secrets Manager secret for this role.
  Solution: Add a Secrets Manager secret for this role. For more information, see Setting up AWS Identity and Access Management (IAM) policies (p. 1210).

• Error: RDS supports only IAM, MD5, or SCRAM authentication.
  Cause: The database client being used to connect to the proxy is using an authentication mechanism not currently supported by the proxy.
  Solution: If you're not using IAM authentication, use the MD5 or SCRAM password authentication.

• Error: A user name is missing from the connection startup packet. Provide a user name for this connection.
  Cause: The database client being used to connect to the proxy isn't sending a user name when trying to establish a connection.
  Solution: Make sure to define a user name when setting up a connection to the proxy using the PostgreSQL client of your choice.

• Error: Feature not supported: RDS Proxy supports only version 3.0 of the PostgreSQL messaging protocol.
  Cause: The PostgreSQL client used to connect to the proxy uses a protocol older than 3.0.
  Solution: Use a newer PostgreSQL client that supports the 3.0 messaging protocol. If you're using the PostgreSQL psql CLI, use a version greater than or equal to 7.4.

• Error: Feature not supported: RDS Proxy currently doesn't support streaming replication mode.
  Cause: The PostgreSQL client used to connect to the proxy is trying to use the streaming replication mode, which isn't currently supported by RDS Proxy.
  Solution: Turn off the streaming replication mode in the PostgreSQL client being used to connect.

• Error: Feature not supported: RDS Proxy currently doesn't support the option option_name.
  Cause: Through the startup message, the PostgreSQL client used to connect to the proxy is requesting an option that isn't currently supported by RDS Proxy.
  Solution: Turn off the option being shown as not supported from the message above in the PostgreSQL client being used to connect.

• Error: The IAM authentication failed because of too many competing requests.
  Cause: The number of simultaneous requests with IAM authentication from the client to the proxy has exceeded the limit.
  Solution: Reduce the rate in which connections using IAM authentication from a PostgreSQL client are established.

• Error: The maximum number of client connections to the proxy exceeded number_value.
  Cause: The number of simultaneous connection requests from the client to the proxy exceeded the limit.
  Solution: Reduce the number of active connections from PostgreSQL clients to this RDS proxy.

• Error: Rate of connection to proxy exceeded number_value.
  Cause: The rate of connection requests from the client to the proxy has exceeded the limit.
  Solution: Reduce the rate in which connections from a PostgreSQL client are established.

• Error: The password that was provided for the role role_name is wrong.
  Cause: The password for this role doesn't match the Secrets Manager secret.
  Solution: Check the secret for this role in Secrets Manager to see if the password is the same as what's being used in your PostgreSQL client.

• Error: The IAM authentication failed for the role role_name. Check the IAM token for this role and try again.
  Cause: There is a problem with the IAM token used for IAM authentication.
  Solution: Generate a new authentication token and use it in a new connection.

• Error: IAM is allowed only with SSL connections.
  Cause: A client tried to connect using IAM authentication, but SSL wasn't enabled.
  Solution: Enable SSL in the PostgreSQL client.

• Error: Unknown error.
  Cause: An unknown error occurred.
  Solution: Reach out to AWS Support to investigate the issue.

• Error: Timed-out waiting to acquire database connection.
  Cause: The proxy timed out waiting to acquire a database connection. Some possible reasons include the following:
  • The proxy can't establish a database connection because the maximum connections have been reached.
  • The proxy can't establish a database connection because the database is unavailable.
  Solution: Possible solutions are the following:
  • Check the target of the RDS DB instance or Aurora DB cluster status to see if it's unavailable.
  • Check if there are long-running transactions and/or queries being executed. They can use database connections from the connection pool for a long time.

• Error: Request returned an error: database_error.
  Cause: The database connection established from the proxy returned an error.
  Solution: The solution depends on the specific database error. One example is: Request returned an error: database "your-database-name" does not exist. This means that the specified database name doesn't exist on the database server. Or it means that the user name used as a database name (if a database name isn't specified) doesn't exist on the server.
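
For example, to avoid the SSL-related errors above, you can require SSL directly in the connection string. The following sketch assumes the psql client; the endpoint, database name, and user name are placeholders.

psql "host=PostgreSQL_proxy_endpoint port=5432 dbname=postgres user=admin_user sslmode=require"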

Using RDS Proxy with AWS CloudFormation


You can use RDS Proxy with AWS CloudFormation. Doing so helps you to create groups of related
resources. Such a group can include a proxy that can connect to a newly created Amazon RDS DB
instance or Aurora DB cluster. RDS Proxy support in AWS CloudFormation involves two new registry
types: DBProxy and DBProxyTargetGroup.

The following listing shows a sample AWS CloudFormation template for RDS Proxy.

Resources:
  DBProxy:
    Type: AWS::RDS::DBProxy
    Properties:
      DBProxyName: CanaryProxy
      EngineFamily: MYSQL
      RoleArn:
        Fn::ImportValue: SecretReaderRoleArn
      Auth:
        - {AuthScheme: SECRETS, SecretArn: !ImportValue ProxySecret, IAMAuth: DISABLED}
      VpcSubnetIds:
        Fn::Split: [",", "Fn::ImportValue": SubnetIds]

  ProxyTargetGroup:
    Type: AWS::RDS::DBProxyTargetGroup
    Properties:
      DBProxyName: CanaryProxy
      TargetGroupName: default
      DBInstanceIdentifiers:
        - Fn::ImportValue: DBInstanceName
    DependsOn: DBProxy
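
You can create a stack from a template like this one with the AWS CLI. The following is a sketch; the file name and stack name are placeholders, and the exported values that the template imports must already exist in your account and Region.

aws cloudformation deploy \
    --template-file rds-proxy-template.yaml \
    --stack-name canary-proxy-stack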

For more information about the resources in this sample, see DBProxy and DBProxyTargetGroup.

For more information about the Amazon RDS and Aurora resources that you can create using AWS
CloudFormation, see RDS resource type reference.


Amazon RDS for MariaDB


Amazon RDS supports DB instances that run the following versions of MariaDB:

• MariaDB 10.11
• MariaDB 10.6
• MariaDB 10.5
• MariaDB 10.4
• MariaDB 10.3 (RDS end of standard support scheduled for October 23, 2023)

For more information about minor version support, see MariaDB on Amazon RDS versions (p. 1265).

To create a MariaDB DB instance, use the Amazon RDS management tools or interfaces. You can then use
the Amazon RDS tools to perform management actions for the DB instance. These include actions such
as the following:

• Reconfiguring or resizing the DB instance


• Authorizing connections to the DB instance
• Creating and restoring from backups or snapshots
• Creating Multi-AZ secondaries
• Creating read replicas
• Monitoring the performance of your DB instance

To store and access the data in your DB instance, use standard MariaDB utilities and applications.
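
For example, after a DB instance is available, you might connect to it with the standard mysql command-line client. The following is a sketch; the endpoint, port, and user name are placeholders for your own instance's values.

mysql -h mymariadbinstance.123456789012.us-east-1.rds.amazonaws.com -P 3306 -u admin_user -p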

MariaDB is available in all of the AWS Regions. For more information about AWS Regions, see Regions,
Availability Zones, and Local Zones (p. 110).

You can use Amazon RDS for MariaDB databases to build HIPAA-compliant applications. You can store
healthcare-related information, including protected health information (PHI), under a Business Associate
Agreement (BAA) with AWS. For more information, see HIPAA compliance. AWS Services in Scope have
been fully assessed by a third-party auditor and result in a certification, attestation of compliance, or
Authority to Operate (ATO). For more information, see AWS services in scope by compliance program.

Before creating a DB instance, complete the steps in Setting up for Amazon RDS (p. 174). When you
create a DB instance, the RDS master user gets DBA privileges, with some limitations. Use this account
for administrative tasks such as creating additional database accounts.

You can create the following:

• DB instances
• DB snapshots
• Point-in-time restores
• Automated backups
• Manual backups

You can use DB instances running MariaDB inside a virtual private cloud (VPC) based on Amazon VPC.
You can also add features to your MariaDB DB instance by enabling various options. Amazon RDS
supports Multi-AZ deployments for MariaDB as a high-availability, failover solution.
Important
To deliver a managed service experience, Amazon RDS doesn't provide shell access to DB
instances. It also restricts access to certain system procedures and tables that need advanced


privileges. You can access your database using standard SQL clients such as the mysql client.
However, you can't access the host directly by using Telnet or Secure Shell (SSH).

Topics
• MariaDB feature support on Amazon RDS (p. 1256)
• MariaDB on Amazon RDS versions (p. 1265)
• Connecting to a DB instance running the MariaDB database engine (p. 1269)
• Securing MariaDB DB instance connections (p. 1274)
• Improving query performance for RDS for MariaDB with Amazon RDS Optimized Reads (p. 1281)
• Improving write performance with Amazon RDS Optimized Writes for MariaDB (p. 1284)
• Upgrading the MariaDB DB engine (p. 1289)
• Importing data into a MariaDB DB instance (p. 1296)
• Working with MariaDB replication in Amazon RDS (p. 1318)
• Options for MariaDB database engine (p. 1334)
• Parameters for MariaDB (p. 1338)
• Migrating data from a MySQL DB snapshot to a MariaDB DB instance (p. 1341)
• MariaDB on Amazon RDS SQL reference (p. 1344)
• Local time zone for MariaDB DB instances (p. 1349)
• Known issues and limitations for RDS for MariaDB (p. 1352)

MariaDB feature support on Amazon RDS


RDS for MariaDB supports most of the features and capabilities of MariaDB. Some features might have
limited support or restricted privileges.

You can filter new Amazon RDS features on the What's New with Database? page. For Products, choose
Amazon RDS. Then search using keywords such as MariaDB 2023.
Note
The following lists are not exhaustive.

Topics
• MariaDB feature support on Amazon RDS for MariaDB major versions (p. 1256)
• Supported storage engines for MariaDB on Amazon RDS (p. 1261)
• Cache warming for MariaDB on Amazon RDS (p. 1262)
• MariaDB features not supported by Amazon RDS (p. 1263)

MariaDB feature support on Amazon RDS for MariaDB major versions

In the following sections, find information about MariaDB feature support on Amazon RDS for MariaDB
major versions:

Topics
• MariaDB 10.11 support on Amazon RDS (p. 1257)
• MariaDB 10.6 support on Amazon RDS (p. 1258)
• MariaDB 10.5 support on Amazon RDS (p. 1259)
• MariaDB 10.4 support on Amazon RDS (p. 1260)


• MariaDB 10.3 support on Amazon RDS (p. 1261)

For information about supported minor versions of Amazon RDS for MariaDB, see MariaDB on Amazon
RDS versions (p. 1265).

MariaDB 10.11 support on Amazon RDS


Amazon RDS supports the following new features for your DB instances running MariaDB version 10.11
or higher.

• Password Reuse Check plugin – You can use the MariaDB Password Reuse Check plugin to prevent
users from reusing passwords and to set the retention period of passwords. For more information, see
Password Reuse Check Plugin.
• GRANT TO PUBLIC authorization – You can grant privileges to all users who have access to your
server. For more information, see GRANT TO PUBLIC.
• Separation of SUPER and READ ONLY ADMIN privileges – You can remove READ ONLY ADMIN
privileges from all users, even users that previously had SUPER privileges.
• Security – You can now set option --ssl as the default for your MariaDB client. MariaDB no longer
silently disables SSL if the configuration is incorrect.
• SQL commands and functions – You can now use the SHOW ANALYZE FORMAT=JSON command and
the functions ROW_NUMBER, SFORMAT, and RANDOM_BYTES. SFORMAT allows string formatting and
is enabled by default. You can convert partition to table and table to partition in a single command.
There are also several improvements around JSON_*() functions. DES_ENCRYPT and DES_DECRYPT
functions were deprecated for version 10.10 and higher. For more information, see SFORMAT.
• InnoDB enhancements – These enhancements include the following items:
• Performance improvements in the redo log to reduce write amplification and to improve
concurrency.
• The ability for you to change the undo tablespace without reinitializing the data directory.
This enhancement reduces control plane overhead. It requires restarting but it doesn't require
reinitialization after changing undo tablespace.
• Support for CHECK TABLE … EXTENDED and for descending indexes internally.
• Improvements to bulk insert.
• Binlog changes – These changes include the following items:
• Logging ALTER in two phases to decrease replication latency. The binlog_alter_two_phase
parameter is disabled by default, but can be enabled through parameter groups.
• Logging explicit_defaults_for_timestamp.
• No longer logging INCIDENT_EVENT if the transaction can be safely rolled back.
• Replication improvements – MariaDB version 10.11 DB instances use GTID replication by default if the
master supports it. Also, Seconds_Behind_Master is more precise.
• Clients – You can use new command-line options for mysqlbinlog and mariadb-dump. You can use
mariadb-dump to dump and restore historical data.
• System versioning – You can modify history. MariaDB automatically creates new partitions.
• Atomic DDL – CREATE OR REPLACE is now atomic. Either the statement succeeds or it's completely
reversed.
• Redo log write – Redo log writes asynchronously.
• Stored functions – Stored functions now support the same IN, OUT, and INOUT parameters as in
stored procedures.
• Deprecated or removed parameters – The following parameters have been deprecated or removed for
MariaDB version 10.11 DB instances:
• innodb_change_buffering
• innodb_disallow_writes


• innodb_log_write_ahead_size
• innodb_prefix_index_cluster_optimization
• keep_files_on_create
• old
• Dynamic parameters – The following parameters are now dynamic for MariaDB version 10.11 DB
instances:
• innodb_log_file_size
• innodb_write_io_threads
• innodb_read_io_threads
• New default values for parameters – The following parameters have new default values for MariaDB
version 10.11 DB instances:
• The default value of the explicit_defaults_for_timestamp parameter changed from OFF to ON.
• The default value of the optimizer_prune_level parameter changed from 1 to 2.
• New valid values for parameters – The following parameters have new valid values for MariaDB
version 10.11 DB instances:
• The valid values for the old parameter were merged into those for the old_mode parameter.
• The valid values for the histogram_type parameter now include JSON_HB.
• The valid value range for the innodb_log_buffer_size parameter is now 262144 to 4294967295
(256KB to 4096MB).
• The valid value range for the innodb_log_file_size parameter is now 4194304 to 512GB (4MB to
512GB).
• The valid values for the optimizer_prune_level parameter now include 2.
• New parameters – The following parameters are new for MariaDB version 10.11 DB instances:
• The binlog_alter_two_phase parameter can improve replication performance.
• The log_slow_min_examined_row_limit parameter can improve performance.
• The log_slow_query parameter and the log_slow_query_file parameter are aliases for
slow_query_log and slow_query_log_file, respectively.
• optimizer_extra_pruning_depth
• system_versioning_insert_history

For a list of all features and documentation, see the following information on the MariaDB website.

• MariaDB 10.7: Changes and improvements in MariaDB 10.7, and Release notes - MariaDB 10.7 series

• MariaDB 10.8: Changes and improvements in MariaDB 10.8, and Release notes - MariaDB 10.8 series

• MariaDB 10.9: Changes and improvements in MariaDB 10.9, and Release notes - MariaDB 10.9 series

• MariaDB 10.10: Changes and improvements in MariaDB 10.10, and Release notes - MariaDB 10.10 series

• MariaDB 10.11: Changes and improvements in MariaDB 10.11, and Release notes - MariaDB 10.11 series

For a list of unsupported features, see MariaDB features not supported by Amazon RDS (p. 1263).

MariaDB 10.6 support on Amazon RDS


Amazon RDS supports the following new features for your DB instances running MariaDB version 10.6 or
higher:


• MyRocks storage engine – You can use the MyRocks storage engine with RDS for MariaDB to
optimize storage consumption of your write-intensive, high-performance web applications. For more
information, see Supported storage engines for MariaDB on Amazon RDS (p. 1261) and MyRocks.
• AWS Identity and Access Management (IAM) DB authentication – You can use IAM DB authentication
for better security and central management of connections to your MariaDB DB instances. For more
information, see IAM database authentication for MariaDB, MySQL, and PostgreSQL (p. 2642).
• Upgrade options – You can now upgrade to RDS for MariaDB version 10.6 from any prior major release
(10.3, 10.4, 10.5). You can also restore a snapshot of an existing MySQL 5.6 or 5.7 DB instance to a
MariaDB 10.6 instance. For more information, see Upgrading the MariaDB DB engine (p. 1289).
• Delayed replication – You can now set a configurable time period for which a read replica lags behind
the source database. In a standard MariaDB replication configuration, there is minimal replication
delay between the source and the replica. With delayed replication, you can set an intentional delay
as a strategy for disaster recovery. For more information, see Configuring delayed replication with
MariaDB (p. 1324).
• Oracle PL/SQL compatibility – By using RDS for MariaDB version 10.6, you can more easily migrate
your legacy Oracle applications to Amazon RDS. For more information, see SQL_MODE=ORACLE.
• Atomic DDL – Your dynamic data language (DDL) statements can be relatively crash-safe with
RDS for MariaDB version 10.6. CREATE TABLE, ALTER TABLE, RENAME TABLE, DROP TABLE,
DROP DATABASE and related DDL statements are now atomic. Either the statement succeeds, or it's
completely reversed. For more information, see Atomic DDL.
• Other enhancements – These enhancements include a JSON_TABLE function for transforming JSON
data to relational format within SQL, and faster empty table data load with Innodb. They also include
new sys_schema for analysis and troubleshooting, optimizer enhancement for ignoring unused
indexes, and performance improvements. For more information, see JSON_TABLE.
• New default values for parameters – The following parameters have new default values for MariaDB
version 10.6 DB instances:
• The default value for the following parameters has changed from utf8 to utf8mb3:
• character_set_client
• character_set_connection
• character_set_results
• character_set_system

Although the default values have changed for these parameters, there is no functional change. For
more information, see Supported Character Sets and Collations in the MariaDB documentation.
• The default value of the collation_connection parameter has changed from utf8_general_ci
to utf8mb3_general_ci. Although the default value has changed for this parameter, there is no
functional change.
• The default value of the old_mode parameter has changed from unset to UTF8_IS_UTF8MB3.
Although the default value has changed for this parameter, there is no functional change.

For a list of all MariaDB 10.6 features and their documentation, see Changes and improvements in
MariaDB 10.6 and Release notes - MariaDB 10.6 series on the MariaDB website.

For a list of unsupported features, see MariaDB features not supported by Amazon RDS (p. 1263).

MariaDB 10.5 support on Amazon RDS


Amazon RDS supports the following new features for your DB instances running MariaDB version 10.5 or
later:

• InnoDB enhancements – MariaDB version 10.5 includes InnoDB enhancements. For more information,
see InnoDB: Performance Improvements etc. in the MariaDB documentation.


• Performance schema updates – MariaDB version 10.5 includes performance schema updates. For
more information, see Performance Schema Updates to Match MySQL 5.7 Instrumentation and Tables
in the MariaDB documentation.
• One file in the InnoDB redo log – In versions of MariaDB before version 10.5, the value of the
innodb_log_files_in_group parameter was set to 2. In MariaDB version 10.5, the value of this
parameter is set to 1.

If you are upgrading from a prior version to MariaDB version 10.5, and you don't modify the
parameters, the innodb_log_file_size parameter value is unchanged. However, it applies to
one log file instead of two. The result is that your upgraded MariaDB version 10.5 DB instance uses
half of the redo log size that it was using before the upgrade. This change can have a noticeable
performance impact. To address this issue, you can double the value of the innodb_log_file_size
parameter. For information about modifying parameters, see Modifying parameters in a DB parameter
group (p. 352).
• SHOW SLAVE STATUS command not supported – In versions of MariaDB before version 10.5, the
SHOW SLAVE STATUS command required the REPLICATION SLAVE privilege. In MariaDB version
10.5, the equivalent SHOW REPLICA STATUS command requires the REPLICATION REPLICA ADMIN
privilege. This new privilege isn't granted to the RDS master user.

Instead of using the SHOW REPLICA STATUS command, run the new mysql.rds_replica_status
stored procedure to return similar information. For more information, see
mysql.rds_replica_status (p. 1344).
• SHOW RELAYLOG EVENTS command not supported – In versions of MariaDB before version 10.5, the
SHOW RELAYLOG EVENTS command required the REPLICATION SLAVE privilege. In MariaDB version
10.5, this command requires the REPLICATION REPLICA ADMIN privilege. This new privilege isn't
granted to the RDS master user.
• New default values for parameters – The following parameters have new default values for MariaDB
version 10.5 DB instances:
• The default value of the max_connections parameter has changed to
LEAST({DBInstanceClassMemory/25165760},12000). For information about the LEAST
parameter function, see DB parameter functions (p. 371).
• The default value of the innodb_adaptive_hash_index parameter has changed to OFF (0).
• The default value of the innodb_checksum_algorithm parameter has changed to full_crc32.
• The default value of the innodb_log_file_size parameter has changed to 2 GB.

For a list of all MariaDB 10.5 features and their documentation, see Changes and improvements in
MariaDB 10.5 and Release notes - MariaDB 10.5 series on the MariaDB website.

For a list of unsupported features, see MariaDB features not supported by Amazon RDS (p. 1263).

MariaDB 10.4 support on Amazon RDS


Amazon RDS supports the following new features for your DB instances running MariaDB version 10.4 or
later:

• User account security enhancements – Password expiration and account locking improvements
• Optimizer enhancements – Optimizer trace feature
• InnoDB enhancements – Instant DROP COLUMN support and instant VARCHAR extension for
ROW_FORMAT=DYNAMIC and ROW_FORMAT=COMPACT
• New parameters – Including tcp_nodedelay, tls_version, and gtid_cleanup_batch_size

For a list of all MariaDB 10.4 features and their documentation, see Changes and improvements in
MariaDB 10.4 and Release notes - MariaDB 10.4 series on the MariaDB website.


For a list of unsupported features, see MariaDB features not supported by Amazon RDS (p. 1263).

MariaDB 10.3 support on Amazon RDS


Amazon RDS supports the following new features for your DB instances running MariaDB version 10.3 or
later:

• Oracle compatibility – PL/SQL compatibility parser, sequences, INTERSECT and EXCEPT to


complement UNION, new TYPE OF and ROW TYPE OF declarations, and invisible columns
• Temporal data processing – System versioned tables for querying of past and present states of the
database
• Flexibility – User-defined aggregates, storage-independent column compression, and proxy protocol
support to relay the client IP address to the server
• Manageability – Instant ADD COLUMN operations and fast-fail data definition language (DDL)
operations

For a list of all MariaDB 10.3 features and their documentation, see Changes & improvements in MariaDB
10.3 and Release notes - MariaDB 10.3 series on the MariaDB website.

For a list of unsupported features, see MariaDB features not supported by Amazon RDS (p. 1263).

Supported storage engines for MariaDB on Amazon RDS

RDS for MariaDB supports the following storage engines.

Topics
• The InnoDB storage engine (p. 1261)
• The MyRocks storage engine (p. 1261)

Other storage engines aren't currently supported by RDS for MariaDB.

The InnoDB storage engine


Although MariaDB supports multiple storage engines with varying capabilities, not all of them are
optimized for recovery and data durability. InnoDB is the recommended storage engine for MariaDB DB
instances on Amazon RDS. Amazon RDS features such as point-in-time restore and snapshot restore
require a recoverable storage engine and are supported only for the recommended storage engine for
the MariaDB version.

For more information, see InnoDB.

The MyRocks storage engine


The MyRocks storage engine is available in RDS for MariaDB version 10.6 and higher. Before using
the MyRocks storage engine in a production database, we recommend that you perform thorough
benchmarking and testing to verify any potential benefits over InnoDB for your use case.

The default parameter group for MariaDB version 10.6 includes MyRocks parameters. For more
information, see Parameters for MariaDB (p. 1338) and Working with parameter groups (p. 347).

To create a table that uses the MyRocks storage engine, specify ENGINE=RocksDB in the CREATE TABLE
statement. The following example creates a table that uses the MyRocks storage engine.


CREATE TABLE test (a INT NOT NULL, b CHAR(10)) ENGINE=RocksDB;

We strongly recommend that you don't run transactions that span both InnoDB and MyRocks tables.
MariaDB doesn't guarantee ACID (atomicity, consistency, isolation, durability) for transactions across
storage engines. Although it is possible to have both InnoDB and MyRocks tables in a DB instance, we
don't recommend this approach except during a migration from one storage engine to the other. When
both InnoDB and MyRocks tables exist in a DB instance, each storage engine has its own buffer pool,
which might cause performance to degrade.

MyRocks doesn’t support SERIALIZABLE isolation or gap locks. So, generally you can't use MyRocks
with statement-based replication. For more information, see MyRocks and Replication.

Currently, you can modify only the following MyRocks parameters:

• rocksdb_block_cache_size
• rocksdb_bulk_load
• rocksdb_bulk_load_size
• rocksdb_deadlock_detect
• rocksdb_deadlock_detect_depth
• rocksdb_max_latest_deadlocks

The MyRocks storage engine and the InnoDB storage engine can compete for memory based on the
settings for the rocksdb_block_cache_size and innodb_buffer_pool_size parameters. In some
cases, you might only intend to use the MyRocks storage engine on a particular DB instance. If so, we
recommend setting the innodb_buffer_pool_size parameter to a minimal value and setting the
rocksdb_block_cache_size parameter as high as possible.
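
For example, the following AWS CLI sketch applies that tuning to a custom DB parameter group. The group name my-myrocks-params and the byte values are placeholders that you would size for your DB instance class.

# Favor MyRocks: shrink the InnoDB buffer pool and enlarge the RocksDB block cache (placeholder values).
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-myrocks-params \
    --parameters "ParameterName=innodb_buffer_pool_size,ParameterValue=134217728,ApplyMethod=pending-reboot" \
                 "ParameterName=rocksdb_block_cache_size,ParameterValue=8589934592,ApplyMethod=pending-reboot"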

You can access MyRocks log files by using the DescribeDBLogFiles and
DownloadDBLogFilePortion operations.

For more information about MyRocks, see MyRocks on the MariaDB website.

Cache warming for MariaDB on Amazon RDS


InnoDB cache warming can provide performance gains for your MariaDB DB instance by saving the
current state of the buffer pool when the DB instance is shut down, and then reloading the buffer pool
from the saved information when the DB instance starts up. This approach bypasses the need for the
buffer pool to "warm up" from normal database use and instead preloads the buffer pool with the pages
for known common queries. For more information on cache warming, see Dumping and restoring the
buffer pool in the MariaDB documentation.

Cache warming is enabled by default on MariaDB 10.3 and higher DB instances. To enable it, set the
innodb_buffer_pool_dump_at_shutdown and innodb_buffer_pool_load_at_startup
parameters to 1 in the parameter group for your DB instance. Changing these parameter values in
a parameter group affects all MariaDB DB instances that use that parameter group. To enable cache
warming for specific MariaDB DB instances, you might need to create a new parameter group for those
DB instances. For information on parameter groups, see Working with parameter groups (p. 347).
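
For example, you might set both parameters with the AWS CLI, as in the following sketch. The parameter group name my-mariadb-params is a placeholder, and pending-reboot is used because it is accepted for any parameter.

# Turn on buffer pool dump at shutdown and load at startup (placeholder parameter group name).
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mariadb-params \
    --parameters "ParameterName=innodb_buffer_pool_dump_at_shutdown,ParameterValue=1,ApplyMethod=pending-reboot" \
                 "ParameterName=innodb_buffer_pool_load_at_startup,ParameterValue=1,ApplyMethod=pending-reboot"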

Cache warming primarily provides a performance benefit for DB instances that use standard storage. If
you use PIOPS storage, you don't commonly see a significant performance benefit.
Important
If your MariaDB DB instance doesn't shut down normally, such as during a failover, then the
buffer pool state isn't saved to disk. In this case, MariaDB loads whatever buffer pool file is
available when the DB instance is restarted. No harm is done, but the restored buffer pool might
not reflect the most recent state of the buffer pool before the restart. To ensure that you have
a recent state of the buffer pool available to warm the cache on startup, we recommend that
you periodically dump the buffer pool "on demand." You can dump or load the buffer pool on
demand.
You can create an event to dump the buffer pool automatically and at a regular interval. For
example, the following statement creates an event named periodic_buffer_pool_dump
that dumps the buffer pool every hour.

CREATE EVENT periodic_buffer_pool_dump
ON SCHEDULE EVERY 1 HOUR
DO CALL mysql.rds_innodb_buffer_pool_dump_now();

For more information, see Events in the MariaDB documentation.

Dumping and loading the buffer pool on demand


You can save and load the cache on demand using the following stored procedures:

• To dump the current state of the buffer pool to disk, call the
mysql.rds_innodb_buffer_pool_dump_now (p. 1784) stored procedure.
• To load the saved state of the buffer pool from disk, call the
mysql.rds_innodb_buffer_pool_load_now (p. 1784) stored procedure.
• To cancel a load operation in progress, call the mysql.rds_innodb_buffer_pool_load_abort (p. 1784)
stored procedure.

MariaDB features not supported by Amazon RDS


The following MariaDB features are not supported on Amazon RDS:

• S3 storage engine
• Authentication plugin – GSSAPI
• Authentication plugin – Unix Socket
• AWS Key Management encryption plugin
• Delayed replication for MariaDB versions lower than 10.6
• Native MariaDB encryption at rest for InnoDB and Aria

You can enable encryption at rest for a MariaDB DB instance by following the instructions in
Encrypting Amazon RDS resources (p. 2586).
• HandlerSocket
• JSON table type for MariaDB versions lower than 10.6
• MariaDB ColumnStore
• MariaDB Galera Cluster
• Multisource replication
• MyRocks storage engine for MariaDB versions lower than 10.6
• Password validation plugin, simple_password_check, and cracklib_password_check
• Spider storage engine
• Sphinx storage engine
• TokuDB storage engine
• Storage engine-specific object attributes, as described in Engine-defined new Table/Field/Index
attributes in the MariaDB documentation
• Table and tablespace encryption
• Hashicorp Key Management plugin
• Running two upgrades in parallel

To deliver a managed service experience, Amazon RDS doesn't provide shell access to DB instances, and
it restricts access to certain system procedures and tables that require advanced privileges. Amazon RDS
supports access to databases on a DB instance using any standard SQL client application. Amazon RDS
doesn't allow direct host access to a DB instance by using Telnet, Secure Shell (SSH), or Windows Remote
Desktop Connection.


MariaDB on Amazon RDS versions


For MariaDB, version numbers are organized as version X.Y.Z. In Amazon RDS terminology, X.Y denotes
the major version, and Z is the minor version number. For Amazon RDS implementations, a version
change is considered major if the major version number changes, for example going from version 10.5 to
10.6. A version change is considered minor if only the minor version number changes, for example going
from version 10.6.10 to 10.6.12.

Topics
• Supported MariaDB minor versions on Amazon RDS (p. 1265)
• Supported MariaDB major versions on Amazon RDS (p. 1267)
• MariaDB 10.3 RDS end of standard support (p. 1267)
• MariaDB 10.2 RDS end of standard support (p. 1268)
• Deprecated versions for Amazon RDS for MariaDB (p. 1268)

Supported MariaDB minor versions on Amazon RDS


Amazon RDS currently supports the following minor versions of MariaDB.
Note
Dates with only a month and a year are approximate and are updated with an exact date when
it’s known.

MariaDB engine version | Community release date | RDS release date | RDS end of standard support date

10.11

10.11.5 14 August 2023 7 September 2023 September 2024

10.11.4 7 June 2023 21 August 2023 September 2024

10.6

10.6.15 14 August 2023 7 September 2023 September 2024

10.6.14 7 June 2023 22 June 2023 September 2024

10.6.13 10 May 2023 15 June 2023 September 2024

10.6.12 6 February 2023 28 February 2023 March 2024

10.6.11 7 November 2022 18 November 2022 March 2024

10.6.10 19 September 2022 30 September 2022 March 2024

10.6.8 20 May 2022 8 July 2022 25 September 2023

10.5

10.5.22 14 August 2023 7 September 2023 September 2024

10.5.21 7 June 2023 22 June 2023 September 2024

10.5.20 10 May 2023 15 June 2023 September 2024

10.5.19 6 February 2023 28 February 2023 March 2024

10.5.18 7 November 2022 18 November 2022 March 2024

10.5.17 15 August 2022 16 September 2022 25 September 2023

10.5.16 20 May 2022 8 July 2022 25 September 2023

10.4

10.4.31 14 August 2023 7 September 2023 September 2024

10.4.30 7 June 2023 22 June 2023 September 2024

10.4.29 10 May 2023 15 June 2023 September 2024

10.4.28 6 February 2023 28 February 2023 March 2024

10.4.27 7 November 2022 18 November 2022 March 2024

10.4.26 15 August 2022 16 September 2022 25 September 2023

10.4.25 20 May 2022 8 July 2022 25 September 2023

10.3

10.3.39 10 May 2023 15 June 2023 23 October 2023

10.3.38 6 February 2023 28 February 2023 23 October 2023

10.3.37 7 November 2022 18 November 2022 23 October 2023

10.3.36 15 August 2022 16 September 2022 23 October 2023

10.3.35 20 May 2022 8 July 2022 25 September 2023

You can specify any currently supported MariaDB version when creating a new DB instance. You can
specify the major version (such as MariaDB 10.5), and any supported minor version for the specified
major version. If no version is specified, Amazon RDS defaults to a supported version, typically the most
recent version. If a major version is specified but a minor version is not, Amazon RDS defaults to a recent
release of the major version you have specified. To see a list of supported versions, as well as defaults for
newly created DB instances, use the describe-db-engine-versions AWS CLI command.

For example, to list the supported engine versions for RDS for MariaDB, run the following CLI command:

aws rds describe-db-engine-versions --engine mariadb --query "*[].{Engine:Engine,EngineVersion:EngineVersion}" --output text

The default MariaDB version might vary by AWS Region. To create a DB instance with a specific minor
version, specify the minor version during DB instance creation. You can determine the default minor
version for an AWS Region using the following AWS CLI command:

aws rds describe-db-engine-versions --default-only --engine mariadb --engine-version major-engine-version --region region --query "*[].{Engine:Engine,EngineVersion:EngineVersion}" --output text


Replace major-engine-version with the major engine version, and replace region with the AWS
Region. For example, the following AWS CLI command returns the default MariaDB minor engine version
for the 10.5 major version and the US West (Oregon) AWS Region (us-west-2):

aws rds describe-db-engine-versions --default-only --engine mariadb --engine-version 10.5 --region us-west-2 --query "*[].{Engine:Engine,EngineVersion:EngineVersion}" --output text

Supported MariaDB major versions on Amazon RDS


RDS for MariaDB major versions remain available at least until community end of life for the
corresponding community version. You can use the following dates to plan your testing and upgrade
cycles. If Amazon extends support for an RDS for MariaDB version for longer than originally stated, we
plan to update this table to reflect the later date.
Note
Dates with only a month and a year are approximate and are updated with an exact date when
it’s known.

MariaDB major version | Community release date | RDS release date | Community end of life date | RDS end of standard support date

MariaDB 10.11 16 February 2023 21 August 2023 16 February 2028 February 2028

MariaDB 10.6 6 July 2021 3 February 2022 6 July 2026 July 2026

MariaDB 10.5 24 June 2020 21 January 2021 24 June 2025 June 2025

MariaDB 10.4 18 June 2019 6 April 2020 18 June 2024 June 2024

MariaDB 10.3 25 May 2018 23 October 2018 25 May 2023 23 October 2023

MariaDB 10.2 23 May 2017 5 January 2018 23 May 2022 15 October 2022

MariaDB 10.3 RDS end of standard support


On October 23, 2023, Amazon RDS is starting the RDS end of standard support process for MariaDB
version 10.3 using the following schedule, which includes upgrade recommendations. We recommend
that you upgrade all MariaDB 10.3 DB instances to MariaDB 10.6 as soon as possible. For more
information, see Upgrading the MariaDB DB engine (p. 1289).

• Now–October 23, 2023: We recommend that you manually upgrade MariaDB 10.3 DB instances to the
version of your choice. You can upgrade directly to MariaDB version 10.6 or higher.
• Now–October 23, 2023: We recommend that you manually upgrade MariaDB 10.3 snapshots to the
version of your choice.
• August 23, 2023: You can no longer create new MariaDB 10.3 DB instances. You can still create read
replicas of existing MariaDB 10.3 DB instances and change them from Single-AZ deployments to
Multi-AZ deployments.
• October 23, 2023: Amazon RDS starts automatic upgrades of your MariaDB 10.3 DB instances to
version 10.6.
• October 23, 2023: Amazon RDS starts automatic upgrades to version 10.6 for any MariaDB 10.3 DB
instances restored from snapshots.
• January 23, 2024: Amazon RDS automatically upgrades any remaining MariaDB 10.3 DB instances to
version 10.6 whether or not they are in a scheduled maintenance window.

MariaDB 10.2 RDS end of standard support


On October 15, 2022, Amazon RDS is starting the RDS end of standard support process for MariaDB
version 10.2 using the following schedule, which includes upgrade recommendations. We recommend
that you upgrade all MariaDB 10.2 DB instances to MariaDB 10.3 or higher as soon as possible. For more
information, see Upgrading the MariaDB DB engine (p. 1289).

• Now–October 15, 2022: We recommend that you upgrade MariaDB 10.2 DB instances manually to the
version of your choice. You can upgrade directly to MariaDB version 10.3 or 10.6.
• Now–October 15, 2022: We recommend that you upgrade MariaDB 10.2 snapshots manually to the
version of your choice.
• July 15, 2022: You can no longer create new MariaDB 10.2 DB instances. You can still create read
replicas of existing MariaDB 10.2 DB instances and change them from Single-AZ deployments to
Multi-AZ deployments.
• October 15, 2022: Amazon RDS starts automatic upgrades of your MariaDB 10.2 DB instances to
version 10.3.
• October 15, 2022: Amazon RDS starts automatic upgrades to version 10.3 for any MariaDB 10.2 DB
instances restored from snapshots.
• January 15, 2023: Amazon RDS automatically upgrades any remaining MariaDB 10.2 DB instances to
version 10.3 whether or not they are in a scheduled maintenance window.

For more information about Amazon RDS for MariaDB 10.2 RDS end of standard support, see
Announcement: Amazon Relational Database Service (Amazon RDS) for MariaDB 10.2 End-of-Life date is
October 15, 2022.

Deprecated versions for Amazon RDS for MariaDB


Amazon RDS for MariaDB versions 10.0, 10.1, and 10.2 are deprecated.

For information about the Amazon RDS deprecation policy for MariaDB, see Amazon RDS FAQs.


Connecting to a DB instance running the MariaDB database engine

After Amazon RDS provisions your DB instance, you can use any standard MariaDB client application or
utility to connect to the instance. In the connection string, you specify the Domain Name System (DNS)
address from the DB instance endpoint as the host parameter. You also specify the port number from the
DB instance endpoint as the port parameter.

You can connect to an Amazon RDS for MariaDB DB instance by using tools like the MySQL command-
line client. For more information on using the MySQL command-line client, see mysql command-line
client in the MariaDB documentation. One GUI-based application that you can use to connect is HeidiSQL.
For more information, see the Download HeidiSQL page. For information about installing MySQL
(including the MySQL command-line client), see Installing and upgrading MySQL.

Most Linux distributions include the MariaDB client instead of the Oracle MySQL client. To install the
MySQL command-line client on Amazon Linux 2023, run the following command:

sudo dnf install mariadb105

To install the MySQL command-line client on Amazon Linux 2, run the following command:

sudo yum install mariadb

To install the MySQL command-line client on most DEB-based Linux distributions, run the following
command.

apt-get install mariadb-client

To check the version of your MySQL command-line client, run the following command.

mysql --version

To read the MySQL documentation for your current client version, run the following command.

man mysql

To connect to a DB instance from outside of a virtual private cloud (VPC) based on Amazon VPC, the DB
instance must be publicly accessible. Also, access must be granted using the inbound rules of the DB
instance's security group, and other requirements must be met. For more information, see Can't connect
to Amazon RDS DB instance (p. 2727).

You can use SSL encryption on connections to a MariaDB DB instance. For information, see Using SSL/
TLS with a MariaDB DB instance (p. 1275).

Topics
• Finding the connection information for a MariaDB DB instance (p. 1270)
• Connecting from the MySQL command-line client (unencrypted) (p. 1272)
• Troubleshooting connections to your MariaDB DB instance (p. 1273)


Finding the connection information for a MariaDB DB instance

The connection information for a DB instance includes its endpoint, port, and a valid database user,
such as the master user. For example, suppose that an endpoint value is mydb.123456789012.us-
east-1.rds.amazonaws.com. In this case, the port value is 3306, and the database user is admin.
Given this information, you specify the following values in a connection string:

• For host or host name or DNS name, specify mydb.123456789012.us-east-1.rds.amazonaws.com.
• For port, specify 3306.
• For user, specify admin.

To connect to a DB instance, use any client for the MariaDB DB engine. For example, you might use the
MySQL command-line client or MySQL Workbench.

To find the connection information for a DB instance, you can use the AWS Management Console, the
AWS Command Line Interface (AWS CLI) describe-db-instances command, or the Amazon RDS API
DescribeDBInstances operation to list its details.

Console

To find the connection information for a DB instance in the AWS Management Console

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases to display a list of your DB instances.
3. Choose the name of the MariaDB DB instance to display its details.
4. On the Connectivity & security tab, copy the endpoint. Also, note the port number. You need both
the endpoint and the port number to connect to the DB instance.


5. If you need to find the master user name, choose the Configuration tab and view the Master
username value.

AWS CLI
To find the connection information for a MariaDB DB instance by using the AWS CLI, call the
describe-db-instances command. In the call, query for the DB instance ID, endpoint, port, and master user name.


For Linux, macOS, or Unix:

aws rds describe-db-instances \
    --filters "Name=engine,Values=mariadb" \
    --query "*[].[DBInstanceIdentifier,Endpoint.Address,Endpoint.Port,MasterUsername]"

For Windows:

aws rds describe-db-instances ^
    --filters "Name=engine,Values=mariadb" ^
    --query "*[].[DBInstanceIdentifier,Endpoint.Address,Endpoint.Port,MasterUsername]"

Your output should be similar to the following.

[
[
"mydb1",
"mydb1.123456789012.us-east-1.rds.amazonaws.com",
3306,
"admin"
],
[
"mydb2",
"mydb2.123456789012.us-east-1.rds.amazonaws.com",
3306,
"admin"
]
]

RDS API
To find the connection information for a DB instance by using the Amazon RDS API, call the
DescribeDBInstances operation. In the output, find the values for the endpoint address, endpoint port,
and master user name.

Connecting from the MySQL command-line client (unencrypted)

Important
Only use an unencrypted MySQL connection when the client and server are in the same VPC and
the network is trusted. For information about using encrypted connections, see Connecting from
the MySQL command-line client with SSL/TLS (encrypted) (p. 1276).

To connect to a DB instance using the MySQL command-line client, enter the following command at a
command prompt on a client computer. Doing this connects you to a database on a MariaDB DB instance.
Substitute the DNS name (endpoint) for your DB instance for <endpoint> and the master user name
that you used for <mymasteruser>. Provide the master password that you used when prompted for a
password.

mysql -h <endpoint> -P 3306 -u <mymasteruser> -p

After you enter the password for the user, you see output similar to the following.

Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 31
Server version: 10.6.10-MariaDB-log Source distribution


Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>

Troubleshooting connections to your MariaDB DB instance

Two common causes of connection failures to a new DB instance are the following:

• The DB instance was created using a security group that doesn't authorize connections from the device
or Amazon EC2 instance where the MariaDB application or utility is running. The DB instance must
have a VPC security group that authorizes the connections. For more information, see Amazon VPC
VPCs and Amazon RDS (p. 2688).

You can add or edit an inbound rule in the security group. For Source, choose My IP. This allows access
to the DB instance from the IP address detected in your browser.
• The DB instance was created using the default port of 3306, and your company has firewall rules
blocking connections to that port from devices in your company network. To fix this failure, recreate
the instance with a different port.

For more information on connection issues, see Can't connect to Amazon RDS DB instance (p. 2727).


Securing MariaDB DB instance connections


You can manage the security of your MariaDB DB instances.

Topics
• MariaDB security on Amazon RDS (p. 1274)
• Encrypting client connections to MariaDB DB instances with SSL/TLS (p. 1275)
• Updating applications to connect to MariaDB instances using new SSL/TLS certificates (p. 1277)

MariaDB security on Amazon RDS


Security for MariaDB DB instances is managed at three levels:

• AWS Identity and Access Management controls who can perform Amazon RDS management actions
on DB instances. When you connect to AWS using IAM credentials, your IAM account must have IAM
policies that grant the permissions required to perform Amazon RDS management operations. For
more information, see Identity and access management for Amazon RDS (p. 2606).
• When you create a DB instance, you use a VPC security group to control which devices and Amazon
EC2 instances can open connections to the endpoint and port of the DB instance. These connections
can be made using Secure Sockets Layer (SSL) and Transport Layer Security (TLS). In addition, firewall
rules at your company can control whether devices running at your company can open connections to
the DB instance.
• Once a connection has been opened to a MariaDB DB instance, authentication of the login and
permissions are applied the same way as in a stand-alone instance of MariaDB. Commands such as
CREATE USER, RENAME USER, GRANT, REVOKE, and SET PASSWORD work just as they do in stand-
alone databases, as does directly modifying database schema tables.

When you create an Amazon RDS DB instance, the master user has the following default privileges:

• alter
• alter routine
• create
• create routine
• create temporary tables
• create user
• create view
• delete
• drop
• event
• execute
• grant option
• index
• insert
• lock tables
• process
• references
• reload
This privilege is limited on MariaDB DB instances. It doesn't grant access to the FLUSH LOGS or FLUSH
TABLES WITH READ LOCK operations.
• replication client
• replication slave
• select
• show databases
• show view
• trigger
• update

For more information about these privileges, see User account management in the MariaDB
documentation.
Note
Although you can delete the master user on a DB instance, we don't recommend doing so. To
recreate the master user, use the ModifyDBInstance API operation or the modify-db-instance AWS
CLI command and specify a new master user password with the appropriate parameter. If the master user
does not exist in the instance, the master user is created with the specified password.
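
For example, the following AWS CLI sketch sets a new master user password; the DB instance identifier and password are placeholders.

# Reset (or recreate) the master user by setting a new master password (placeholder values).
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --master-user-password 'my-new-password' \
    --apply-immediately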

To provide management services for each DB instance, the rdsadmin user is created when the DB
instance is created. Attempting to drop, rename, change the password for, or change privileges for the
rdsadmin account results in an error.

To allow management of the DB instance, the standard kill and kill_query commands have
been restricted. The Amazon RDS commands mysql.rds_kill, mysql.rds_kill_query, and
mysql.rds_kill_query_id are provided for use in MariaDB and also MySQL so that you can end user
sessions or queries on DB instances.
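
For example, you might end a session from the MySQL command-line client, as in the following sketch. The endpoint, user, and thread ID are placeholders.

# List sessions to find the thread ID of the session to end (placeholder endpoint and user).
mysql -h mydb.123456789012.us-east-1.rds.amazonaws.com -u admin -p -e "SHOW FULL PROCESSLIST;"

# End the session with the chosen thread ID (1234 is an example value).
mysql -h mydb.123456789012.us-east-1.rds.amazonaws.com -u admin -p -e "CALL mysql.rds_kill(1234);"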

Encrypting client connections to MariaDB DB instances with SSL/TLS

Secure Sockets Layer (SSL) is an industry-standard protocol for securing network connections between
client and server. After SSL version 3.0, the name was changed to Transport Layer Security (TLS).
Amazon RDS supports SSL/TLS encryption for MariaDB DB instances. Using SSL/TLS, you can encrypt a
connection between your application client and your MariaDB DB instance. SSL/TLS support is available
in all AWS Regions.

Topics
• Using SSL/TLS with a MariaDB DB instance (p. 1275)
• Requiring SSL/TLS for all connections to a MariaDB DB instance (p. 1276)
• Connecting from the MySQL command-line client with SSL/TLS (encrypted) (p. 1276)

Using SSL/TLS with a MariaDB DB instance


Amazon RDS creates an SSL/TLS certificate and installs the certificate on the DB instance when Amazon
RDS provisions the instance. These certificates are signed by a certificate authority. The SSL/TLS
certificate includes the DB instance endpoint as the Common Name (CN) for the SSL/TLS certificate to
guard against spoofing attacks.

An SSL/TLS certificate created by Amazon RDS is the trusted root entity and should work in most cases
but might fail if your application does not accept certificate chains. If your application does not accept
certificate chains, you might need to use an intermediate certificate to connect to your AWS Region. For
example, you must use an intermediate certificate to connect to the AWS GovCloud (US) Regions using
SSL/TLS.

For information about downloading certificates, see Using SSL/TLS to encrypt a connection to a DB
instance (p. 2591). For more information about using SSL/TLS with MySQL, see Updating applications to
connect to MariaDB instances using new SSL/TLS certificates (p. 1277).

Amazon RDS for MariaDB supports Transport Layer Security (TLS) versions 1.0, 1.1, 1.2, and 1.3 for all
MariaDB versions.

You can require SSL/TLS connections for specific user accounts. For example, you can use the following
statement to require SSL/TLS connections on the user account encrypted_user.

ALTER USER 'encrypted_user'@'%' REQUIRE SSL;

For more information on SSL/TLS connections with MariaDB, see Securing Connections for Client and
Server in the MariaDB documentation.

Requiring SSL/TLS for all connections to a MariaDB DB instance


Use the require_secure_transport parameter to require that all user connections to your MariaDB
DB instance use SSL/TLS. By default, the require_secure_transport parameter is set to OFF. You
can set the require_secure_transport parameter to ON to require SSL/TLS for connections to your
DB instance.
Note
The require_secure_transport parameter is only supported for MariaDB version 10.5 and
higher.

You can set the require_secure_transport parameter value by updating the DB parameter group
for your DB instance. You don't need to reboot your DB instance for the change to take effect.
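
For example, the following AWS CLI sketch turns the parameter on in a custom DB parameter group. The group name is a placeholder, and the immediate apply method is used because the parameter doesn't require a reboot.

# Require SSL/TLS for all connections to DB instances that use this parameter group (placeholder name).
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mariadb-params \
    --parameters "ParameterName=require_secure_transport,ParameterValue=ON,ApplyMethod=immediate"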

When the require_secure_transport parameter is set to ON for a DB instance, a database client
can connect to it only if it can establish an encrypted connection. Otherwise, an error message similar to the
following is returned to the client:

ERROR 1045 (28000): Access denied for user 'USER'@'localhost' (using password: YES | NO)

For information about setting parameters, see Modifying parameters in a DB parameter group (p. 352).

For more information about the require_secure_transport parameter, see the MariaDB
documentation.

Connecting from the MySQL command-line client with SSL/TLS (encrypted)

The mysql client program parameters are slightly different if you are using the MySQL 5.7 version, the
MySQL 8.0 version, or the MariaDB version.

To find out which version you have, run the mysql command with the --version option. In the
following example, the output shows that the client program is from MariaDB.

$ mysql --version
mysql Ver 15.1 Distrib 10.5.15-MariaDB, for osx10.15 (x86_64) using readline 5.1


Most Linux distributions, such as Amazon Linux, CentOS, SUSE, and Debian have replaced MySQL with
MariaDB, and the mysql version in them is from MariaDB.

To connect to your DB instance using SSL/TLS, follow these steps:

To connect to a DB instance with SSL/TLS using the MySQL command-line client

1. Download a root certificate that works for all AWS Regions.

For information about downloading certificates, see Using SSL/TLS to encrypt a connection to a DB
instance (p. 2591).
2. Use a MySQL command-line client to connect to a DB instance with SSL/TLS encryption. For the -h
parameter, substitute the DNS name (endpoint) for your DB instance. For the --ssl-ca parameter,
substitute the SSL/TLS certificate file name. For the -P parameter, substitute the port for your DB
instance. For the -u parameter, substitute the user name of a valid database user, such as the master
user. Enter the master user password when prompted.

The following example shows how to launch the client using the --ssl-ca parameter using the
MariaDB client:

mysql -h mysql-instance1.123456789012.us-east-1.rds.amazonaws.com --ssl-ca=global-bundle.pem --ssl -P 3306 -u myadmin -p

To require that the SSL/TLS connection verifies the DB instance endpoint against the endpoint in the
SSL/TLS certificate, enter the following command:

mysql -h mysql-instance1.123456789012.us-east-1.rds.amazonaws.com --ssl-ca=global-bundle.pem --ssl-verify-server-cert -P 3306 -u myadmin -p

The following example shows how to launch the client using the --ssl-ca parameter using the
MySQL 5.7 client or later:

mysql -h mysql-instance1.123456789012.us-east-1.rds.amazonaws.com --ssl-ca=global-bundle.pem --ssl-mode=REQUIRED -P 3306 -u myadmin -p

3. Enter the master user password when prompted.

You should see output similar to the following.

Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 31
Server version: 10.6.10-MariaDB-log Source distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>

Updating applications to connect to MariaDB instances using new SSL/TLS certificates

As of January 13, 2023, Amazon RDS has published new Certificate Authority (CA) certificates for
connecting to your RDS DB instances using Secure Socket Layer or Transport Layer Security (SSL/TLS).
Following, you can find information about updating your applications to use the new certificates.


This topic can help you to determine whether your applications require certificate verification to connect
to your DB instances.
Note
Some applications are configured to connect to MariaDB only if they can successfully verify the
certificate on the server. For such applications, you must update your client application trust
stores to include the new CA certificates.
You can specify the following SSL modes: disabled, preferred, and required. When
you use the preferred SSL mode and the CA certificate doesn't exist or isn't up to date, the
connection falls back to not using SSL and still connects successfully.
We recommend avoiding preferred mode. In preferred mode, if the connection encounters
an invalid certificate, it stops using encryption and proceeds unencrypted.

After you update your CA certificates in the client application trust stores, you can rotate the certificates
on your DB instances. We strongly recommend testing these procedures in a development or staging
environment before implementing them in your production environments.

For more information about certificate rotation, see Rotating your SSL/TLS certificate (p. 2596). For
more information about downloading certificates, see Using SSL/TLS to encrypt a connection to a DB
instance (p. 2591). For information about using SSL/TLS with MariaDB DB instances, see Using SSL/TLS
with a MariaDB DB instance (p. 1275).

Topics
• Determining whether a client requires certificate verification in order to connect (p. 1278)
• Updating your application trust store (p. 1279)
• Example Java code for establishing SSL connections (p. 1280)

Determining whether a client requires certificate verification in order to connect

You can check whether JDBC clients and MySQL clients require certificate verification to connect.

JDBC
The following example with MySQL Connector/J 8.0 shows one way to check an application's JDBC
connection properties to determine whether successful connections require a valid certificate. For more
information on all of the JDBC connection options for MySQL, see Configuration properties in the
MySQL documentation.

When using the MySQL Connector/J 8.0, an SSL connection requires verification against the server CA
certificate if your connection properties have sslMode set to VERIFY_CA or VERIFY_IDENTITY, as in
the following example.

Properties properties = new Properties();
properties.setProperty("sslMode", "VERIFY_IDENTITY");
properties.put("user", DB_USER);
properties.put("password", DB_PASSWORD);

Note
If you use either the MySQL Java Connector v5.1.38 or later, or the MySQL Java Connector
v8.0.9 or later to connect to your databases, even if you haven't explicitly configured your
applications to use SSL/TLS when connecting to your databases, these client drivers default to
using SSL/TLS. In addition, when using SSL/TLS, they perform partial certificate verification and
fail to connect if the database server certificate is expired.
Specify a password other than the prompt shown here as a security best practice.

MySQL
The following examples with the MySQL Client show two ways to check a script's MySQL connection to
determine whether successful connections require a valid certificate. For more information on all of the
connection options with the MySQL Client, see Client-side configuration for encrypted connections in
the MySQL documentation.

When using the MySQL 5.7 or MySQL 8.0 Client, an SSL connection requires verification against the
server CA certificate if you specify VERIFY_CA or VERIFY_IDENTITY for the --ssl-mode option, as in
the following example.

mysql -h mysql-database.rds.amazonaws.com -uadmin -ppassword --ssl-ca=/tmp/ssl-cert.pem --ssl-mode=VERIFY_CA

When using the MySQL 5.6 Client, an SSL connection requires verification against the server CA
certificate if you specify the --ssl-verify-server-cert option, as in the following example.

mysql -h mysql-database.rds.amazonaws.com -uadmin -ppassword --ssl-ca=/tmp/ssl-cert.pem --ssl-verify-server-cert

Updating your application trust store


For information about updating the trust store for MySQL applications, see Using TLS/SSL with MariaDB
Connector/J in the MariaDB documentation.

For information about downloading the root certificate, see Using SSL/TLS to encrypt a connection to a
DB instance (p. 2591).

For sample scripts that import certificates, see Sample script for importing certificates into your trust
store (p. 2603).
Note
When you update the trust store, you can retain older certificates in addition to adding the new
certificates.

If you are using the MariaDB Connector/J JDBC driver in an application, set the following properties in
the application.

System.setProperty("javax.net.ssl.trustStore", certs);
System.setProperty("javax.net.ssl.trustStorePassword", "password");

When you start the application, set the following properties.

java -Djavax.net.ssl.trustStore=/path_to_truststore/MyTruststore.jks -Djavax.net.ssl.trustStorePassword=my_truststore_password com.companyName.MyApplication


Note
Specify passwords other than the prompts shown here as a security best practice.

Example Java code for establishing SSL connections


The following code example shows how to set up the SSL connection using JDBC.

private static final String DB_USER = "user name";
private static final String DB_PASSWORD = "password";
// This key store has only the prod root ca.
private static final String KEY_STORE_FILE_PATH = "file-path-to-keystore";
private static final String KEY_STORE_PASS = "keystore-password";

public static void main(String[] args) throws Exception {
    Class.forName("org.mariadb.jdbc.Driver");

    System.setProperty("javax.net.ssl.trustStore", KEY_STORE_FILE_PATH);
    System.setProperty("javax.net.ssl.trustStorePassword", KEY_STORE_PASS);

    Properties properties = new Properties();
    properties.put("user", DB_USER);
    properties.put("password", DB_PASSWORD);

    Connection connection = DriverManager.getConnection(
        "jdbc:mysql://ssl-mariadb-public.cni62e2e7kwh.us-east-1.rds.amazonaws.com:3306?useSSL=true",
        properties);
    Statement stmt = connection.createStatement();

    ResultSet rs = stmt.executeQuery("SELECT 1 from dual");

    return;
}

Important
After you have determined that your database connections use SSL/TLS and have updated
your application trust store, you can update your database to use the rds-ca-rsa2048-g1
certificates. For instructions, see step 3 in Updating your CA certificate by modifying your DB
instance (p. 2597).
Specify a password other than the prompt shown here as a security best practice.


Improving query performance for RDS for MariaDB with Amazon RDS Optimized Reads

You can achieve faster query processing for RDS for MariaDB with Amazon RDS Optimized Reads. An RDS
for MariaDB DB instance that uses RDS Optimized Reads can achieve up to 2x faster query processing
compared to a DB instance that doesn't use it.

Topics
• Overview of RDS Optimized Reads (p. 1281)
• Use cases for RDS Optimized Reads (p. 1281)
• Best practices for RDS Optimized Reads (p. 1282)
• Using RDS Optimized Reads (p. 1282)
• Monitoring DB instances that use RDS Optimized Reads (p. 1283)
• Limitations for RDS Optimized Reads (p. 1283)

Overview of RDS Optimized Reads


When you use an RDS for MariaDB DB instance that has RDS Optimized Reads turned on, your DB
instance achieves faster query performance through the use of an instance store. An instance store
provides temporary block-level storage for your DB instance. The storage is located on Non-Volatile
Memory Express (NVMe) solid state drives (SSDs) that are physically attached to the host server.
This storage is optimized for low latency, high random I/O performance, and high sequential read
throughput.

RDS Optimized Reads is turned on by default when a DB instance uses a DB instance class with an
instance store, such as db.m5d or db.m6gd. With RDS Optimized Reads, some temporary objects are
stored on the instance store. These temporary objects include internal temporary files, internal on-disk
temp tables, memory map files, and binary log (binlog) cache files. For more information about the
instance store, see Amazon EC2 instance store in the Amazon Elastic Compute Cloud User Guide for Linux
Instances.

The workloads that generate temporary objects in MariaDB for query processing can take advantage
of the instance store for faster query processing. This type of workload includes queries involving
sorts, hash aggregations, high-load joins, Common Table Expressions (CTEs), and queries on unindexed
columns. These instance store volumes provide higher IOPS and performance, regardless of the storage
configurations used for persistent Amazon EBS storage. Because RDS Optimized Reads offloads
operations on temporary objects to the instance store, the input/output operations per second (IOPS)
or throughput of the persistent storage (Amazon EBS) can now be used for operations on persistent
objects. These operations include regular data file reads and writes and background engine operations,
such as flushing and insert buffer merges.
Note
Both manual and automated RDS snapshots contain only the engine files for persistent objects.
The temporary objects created in the instance store aren't included in RDS snapshots.

Use cases for RDS Optimized Reads


If you have workloads that rely heavily on temporary objects, such as internal tables or files, for their
query execution, then you can benefit from turning on RDS Optimized Reads. The following use cases are
candidates for RDS Optimized Reads:

• Applications that run analytical queries with complex common table expressions (CTEs), derived tables,
and grouping operations
• Read replicas that serve heavy read traffic with unoptimized queries
• Applications that run on-demand or dynamic reporting queries that involve complex operations, such
as queries with GROUP BY and ORDER BY clauses
• Workloads that use internal temporary tables for query processing

You can monitor the engine status variable created_tmp_disk_tables to determine the number of
disk-based temporary tables created on your DB instance (see the example after this list).
• Applications that create large temporary tables, either directly or in procedures, to store intermediate
results
• Database queries that perform grouping or ordering on non-indexed columns
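
For example, you can check the counter mentioned in the list above from the MySQL command-line client; the endpoint and user name in this sketch are placeholders.

# Show how many internal temporary tables were created on disk since the last restart.
mysql -h mydb.123456789012.us-east-1.rds.amazonaws.com -u admin -p \
    -e "SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';"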

Best practices for RDS Optimized Reads


Use the following best practices for RDS Optimized Reads:

• Add retry logic for read-only queries in case they fail because the instance store is full during the
execution.
• Monitor the storage space available on the instance store with the CloudWatch metric
FreeLocalStorage. If the instance store is reaching its limit because of workload on the DB instance,
modify the DB instance to use a larger DB instance class.
• When your DB instance has sufficient memory but is still reaching the storage limit on the instance
store, increase the binlog_cache_size value to maintain the session-specific binlog entries in
memory. This configuration prevents writing the binlog entries to temporary binlog cache files on disk.

The binlog_cache_size parameter is session-specific. You can change the value for each new
session. The setting for this parameter can increase the memory utilization on the DB instance during
peak workload. Therefore, consider increasing the parameter value based on the workload pattern of
your application and available memory on the DB instance.
• Use the default value of MIXED for the binlog_format. Depending on the size of the transactions,
setting binlog_format to ROW can result in large binlog cache files on the instance store.
• Avoid performing bulk changes in a single transaction. These types of transactions can generate large
binlog cache files on the instance store and can cause issues when the instance store is full. Consider
splitting writes into multiple small transactions to minimize storage use for binlog cache files.

Using RDS Optimized Reads


When you provision an RDS for MariaDB DB instance with a DB instance class that has local NVMe SSD
instance store storage (such as db.m5d or db.m6gd) in a Single-AZ DB instance deployment or Multi-AZ
DB instance deployment, the DB instance automatically uses RDS Optimized Reads.

To turn on RDS Optimized Reads, do one of the following:

• Create an RDS for MariaDB DB instance using one of these DB instance classes. For more information,
see Creating an Amazon RDS DB instance (p. 300).
• Modify an existing RDS for MariaDB DB instance to use one of these DB instance classes. For more
information, see Modifying an Amazon RDS DB instance (p. 401).

RDS Optimized Reads is available in all AWS Regions where one or more of the DB instance classes with
local NVMe SSD storage are supported. For information about DB instance classes, see the section called
“DB instance classes” (p. 11).


DB instance class availability differs for AWS Regions. To determine whether a DB instance class is
supported in a specific AWS Region, see the section called “Determining DB instance class support in
AWS Regions” (p. 68).

If you don't want to use RDS Optimized Reads, modify your DB instance so that it doesn't use a DB
instance class that supports the feature.

Monitoring DB instances that use RDS Optimized Reads

You can monitor DB instances that use RDS Optimized Reads with the following CloudWatch metrics:

• FreeLocalStorage
• ReadIOPSLocalStorage
• ReadLatencyLocalStorage
• ReadThroughputLocalStorage
• WriteIOPSLocalStorage
• WriteLatencyLocalStorage
• WriteThroughputLocalStorage

These metrics provide data about available instance store storage, IOPS, and throughput. For
more information about these metrics, see Amazon CloudWatch instance-level metrics for Amazon
RDS (p. 806).
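
For example, the following AWS CLI sketch retrieves recent FreeLocalStorage data points; the DB instance identifier and time range are placeholders.

# Get the minimum free instance store space over five-minute periods (placeholder values).
aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name FreeLocalStorage \
    --dimensions Name=DBInstanceIdentifier,Value=mydbinstance \
    --start-time 2023-09-01T00:00:00Z \
    --end-time 2023-09-01T06:00:00Z \
    --period 300 \
    --statistics Minimum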

Limitations for RDS Optimized Reads


The following limitations apply to RDS Optimized Reads:

• RDS Optimized Reads is supported for the following RDS for MariaDB versions:
• 10.11.4 and higher 10.11 versions
• 10.6.7 and higher 10.6 versions
• 10.5.16 and higher 10.5 versions
• 10.4.25 and higher 10.4 versions

For information about RDS for MariaDB versions, see MariaDB on Amazon RDS versions (p. 1265).
• You can't change the location of temporary objects to persistent storage (Amazon EBS) on the DB
instance classes that support RDS Optimized Reads.
• When binary logging is enabled on a DB instance, the maximum transaction size is limited by the
size of the instance store. In MariaDB, any session that requires more storage than the value of
binlog_cache_size writes transaction changes to temporary binlog cache files, which are created
on the instance store.
• Transactions can fail when the instance store is full.


Improving write performance with Amazon RDS Optimized Writes for MariaDB

You can improve the performance of write transactions with Amazon RDS Optimized Writes for MariaDB.
When your RDS for MariaDB database uses RDS Optimized Writes, it can achieve up to two times higher
write transaction throughput.

Topics
• Overview of RDS Optimized Writes (p. 1284)
• Using RDS Optimized Writes (p. 1285)
• Limitations for RDS Optimized Writes (p. 1288)

Overview of RDS Optimized Writes


When you turn on Amazon RDS Optimized Writes, your RDS for MariaDB databases write only once when
flushing data to durable storage without the need for the doublewrite buffer. The databases continue to
provide ACID property protections for reliable database transactions, along with improved performance.

Relational databases, like MariaDB, provide the ACID properties of atomicity, consistency, isolation, and
durability for reliable database transactions. To help provide these properties, MariaDB uses a data
storage area called the doublewrite buffer that prevents partial page write errors. These errors occur
when there is a hardware failure while the database is updating a page, such as in the case of a power
outage. A MariaDB database can detect partial page writes and recover with a copy of the page in the
doublewrite buffer. While this technique provides protection, it also results in extra write operations. For
more information about the MariaDB doublewrite buffer, see InnoDB Doublewrite Buffer in the MariaDB
documentation.

With RDS Optimized Writes turned on, RDS for MariaDB databases write only once when flushing data
to durable storage without using the doublewrite buffer. RDS Optimized Writes is useful if you run write-
heavy workloads on your RDS for MariaDB databases. Examples of databases with write-heavy workloads
include ones that support digital payments, financial trading, and gaming applications.

These databases run on DB instance classes that use the AWS Nitro System. Because of the hardware
configuration in these systems, the database can write 16-KiB pages directly to data files reliably and
durably in one step. The AWS Nitro System makes RDS Optimized Writes possible.

You can set the new database parameter rds.optimized_writes to control the RDS Optimized Writes
feature for RDS for MariaDB databases. Access this parameter in the DB parameter groups of RDS for
MariaDB for the following versions:

• 10.11.4 and higher 10.11 versions
• 10.6.10 and higher 10.6 versions

Set the parameter using the following values:

• AUTO – Turn on RDS Optimized Writes if the database supports it. Turn off RDS Optimized Writes if the
database doesn't support it. This setting is the default.
• OFF – Turn off RDS Optimized Writes even if the database supports it.

If you migrate an RDS for MariaDB database that is configured to use RDS Optimized Writes to a DB
instance class that doesn't support the feature, RDS automatically turns off RDS Optimized Writes for the
database.


When RDS Optimized Writes is turned off, the database uses the MariaDB doublewrite buffer.

To determine whether an RDS for MariaDB database is using RDS Optimized Writes, view the current
value of the innodb_doublewrite parameter for the database. If the database is using RDS Optimized
Writes, this parameter is set to FALSE (0).
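
For example, you can check the value from the MySQL command-line client, as in this sketch; the endpoint and user name are placeholders.

# A value of OFF (0) for innodb_doublewrite indicates that RDS Optimized Writes is in use.
mysql -h mydb.123456789012.us-east-1.rds.amazonaws.com -u admin -p \
    -e "SHOW GLOBAL VARIABLES LIKE 'innodb_doublewrite';"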

Using RDS Optimized Writes


You can turn on RDS Optimized Writes when you create an RDS for MariaDB database with the RDS
console, the AWS CLI, or the RDS API. RDS Optimized Writes is turned on automatically when both of the
following conditions apply during database creation:

• You specify a DB engine version and DB instance class that support RDS Optimized Writes.
• RDS Optimized Writes is supported for the following RDS for MariaDB versions:
• 10.11.4 and higher 10.11 versions
• 10.6.10 and higher 10.6 versions

For information about RDS for MariaDB versions, see MariaDB on Amazon RDS versions (p. 1265).
• RDS Optimized Writes is supported for RDS for MariaDB databases that use the following DB
instance classes:
• db.m7g
• db.m6g
• db.m6gd
• db.m6i
• db.m5d
• db.r7g
• db.r6g
• db.r6gd
• db.r6i
• db.r5
• db.r5b
• db.r5d
• db.x2idn
• db.x2iedn

For information about DB instance classes, see the section called “DB instance classes” (p. 11).

DB instance class availability differs for AWS Regions. To determine whether a DB instance class is
supported in a specific AWS Region, see the section called “Determining DB instance class support in
AWS Regions” (p. 68).
• In the parameter group associated with the database, the rds.optimized_writes parameter is set
to AUTO. In default parameter groups, this parameter is always set to AUTO.

If you want to use a DB engine version and DB instance class that support RDS Optimized Writes, but you
don't want to use this feature, then specify a custom parameter group when you create the database. In
this parameter group, set the rds.optimized_writes parameter to OFF. If you want the database to
use RDS Optimized Writes later, you can set the parameter to AUTO to turn it on. For information about
creating custom parameter groups and setting parameters, see Working with parameter groups (p. 347).
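
For example, the following AWS CLI sketch creates a custom parameter group and turns the feature off. The group name is a placeholder, and pending-reboot is used because it is accepted for any parameter.

# Create a custom parameter group for MariaDB 10.6 and turn off RDS Optimized Writes (placeholder name).
aws rds create-db-parameter-group \
    --db-parameter-group-name my-mariadb-106-params \
    --db-parameter-group-family mariadb10.6 \
    --description "MariaDB 10.6 parameters with RDS Optimized Writes turned off"

aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mariadb-106-params \
    --parameters "ParameterName=rds.optimized_writes,ParameterValue=OFF,ApplyMethod=pending-reboot"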

For information about creating a DB instance, see Creating an Amazon RDS DB instance (p. 300).

Console
When you use the RDS console to create an RDS for MariaDB database, you can filter for the DB engine
versions and DB instance classes that support RDS Optimized Writes. After you turn on the filters, you
can choose from the available DB engine versions and DB instance classes.

To choose a DB engine version that supports RDS Optimized Writes, filter for the RDS for MariaDB DB
engine versions that support it in Engine version, and then choose a version.

In the Instance configuration section, filter for the DB instance classes that support RDS Optimized
Writes, and then choose a DB instance class.


After you make these selections, you can choose other settings that meet your requirements and finish
creating the RDS for MariaDB database with the console.

AWS CLI
To create a DB instance by using the AWS CLI, use the create-db-instance command. Make sure the
--engine-version and --db-instance-class values support RDS Optimized Writes. In addition, make
sure the parameter group associated with the DB instance has the rds.optimized_writes parameter
set to AUTO. This example associates the default parameter group with the DB instance.

Example Creating a DB instance that uses RDS Optimized Writes

For Linux, macOS, or Unix:

aws rds create-db-instance \
    --db-instance-identifier mydbinstance \
    --engine mariadb \
    --engine-version 10.6.10 \
    --db-instance-class db.r5b.large \
    --manage-master-user-password \
    --master-username admin \
    --allocated-storage 200

For Windows:

aws rds create-db-instance ^
    --db-instance-identifier mydbinstance ^
    --engine mariadb ^
    --engine-version 10.6.10 ^
    --db-instance-class db.r5b.large ^
    --manage-master-user-password ^
    --master-username admin ^
    --allocated-storage 200

RDS API
You can create a DB instance using the CreateDBInstance operation. When you use this operation, make
sure the EngineVersion and DBInstanceClass values support RDS Optimized Writes. In addition,
make sure the parameter group associated with the DB instance has the rds.optimized_writes
parameter set to AUTO.


Limitations for RDS Optimized Writes


The following limitations apply to RDS Optimized Writes:

• You can only modify a database to turn on RDS Optimized Writes if the database was created with a
DB engine version and DB instance class that support the feature. In this case, if RDS Optimized Writes
is turned off for the database, you can turn it on by setting the rds.optimized_writes parameter
to AUTO. For more information, see Using RDS Optimized Writes (p. 1285).
• You can only modify a database to turn on RDS Optimized Writes if the database was created after
the feature was released. The underlying file system format and organization that RDS Optimized
Writes needs is incompatible with the file system format of databases created before the feature was
released. By extension, you can't use any snapshots of previously created instances with this feature
because the snapshots use the older, incompatible file system.
Important
To convert from the old format to the new format, you need to perform a full database
migration. If you want to use this feature on DB instances that were created before the feature
was released, create a new empty DB instance and manually migrate your older DB instance to
the newer DB instance. You can migrate your older DB instance using the native mysqldump
tool, replication, or AWS Database Migration Service. For more information, see mariadb-
dump/mysqldump in the MariaDB documentation, Working with MariaDB replication in
Amazon RDS (p. 1318), and the AWS Database Migration Service User Guide. For help with
migrating using AWS tools, contact support.
• When you are restoring an RDS for MariaDB database from a snapshot, you can only turn on RDS
Optimized Writes for the database if all of the following conditions apply:
• The snapshot was created from a database that supports RDS Optimized Writes.
• The snapshot was created from a database that was created after RDS Optimized Writes was
released.
• The snapshot is restored to a database that supports RDS Optimized Writes.
• The restored database is associated with a parameter group that has the rds.optimized_writes
parameter set to AUTO.
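
To check how the rds.optimized_writes parameter is currently set for a parameter group, you can query the group with the AWS CLI. The following is a minimal sketch; the parameter group name is an example.

aws rds describe-db-parameters \
    --db-parameter-group-name my-mariadb-parameter-group \
    --query "Parameters[?ParameterName=='rds.optimized_writes'].[ParameterName,ParameterValue]" \
    --output text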


Upgrading the MariaDB DB engine


When Amazon RDS supports a new version of a database engine, you can upgrade your DB instances to
the new version. There are two kinds of upgrades for MariaDB DB instances: major version upgrades and
minor version upgrades.

Major version upgrades can contain database changes that are not backward-compatible with existing
applications. As a result, you must manually perform major version upgrades of your DB instances. You
can initiate a major version upgrade by modifying your DB instance. However, before you perform a
major version upgrade, we recommend that you follow the instructions in Major version upgrades for
MariaDB (p. 1290).

In contrast, minor version upgrades include only changes that are backward-compatible with existing
applications. You can initiate a minor version upgrade manually by modifying your DB instance. Or
you can enable the Auto minor version upgrade option when creating or modifying a DB instance.
Doing so means that your DB instance is automatically upgraded after Amazon RDS tests and approves
the new version. For information about performing an upgrade, see Upgrading a DB instance engine
version (p. 429).

If your MariaDB DB instance is using read replicas, you must upgrade all of the read replicas before
upgrading the source instance. If your DB instance is in a Multi-AZ deployment, both the writer and
standby replicas are upgraded. Your DB instance might not be available until the upgrade is complete.

For more information about MariaDB supported versions and version management, see MariaDB on
Amazon RDS versions (p. 1265).

Database engine upgrades require downtime. The duration of the downtime varies based on the size of
your DB instance.
Tip
You can minimize the downtime required for DB instance upgrade by using a blue/green
deployment. For more information, see Using Amazon RDS Blue/Green Deployments for
database updates (p. 566).

Topics
• Overview of upgrading (p. 1289)
• Major version upgrades for MariaDB (p. 1290)
• Upgrading a MariaDB DB instance (p. 1291)
• Automatic minor version upgrades for MariaDB (p. 1291)
• Using a read replica to reduce downtime when upgrading a MariaDB database (p. 1293)

Overview of upgrading
When you use the AWS Management Console to upgrade a DB instance, it shows the valid upgrade
targets for the DB instance. You can also use the following AWS CLI command to identify the valid
upgrade targets for a DB instance:

For Linux, macOS, or Unix:

aws rds describe-db-engine-versions \
--engine mariadb \
--engine-version version-number \
--query "DBEngineVersions[*].ValidUpgradeTarget[*].{EngineVersion:EngineVersion}" \
--output text

For Windows:


aws rds describe-db-engine-versions ^
--engine mariadb ^
--engine-version version-number ^
--query "DBEngineVersions[*].ValidUpgradeTarget[*].{EngineVersion:EngineVersion}" ^
--output text

For example, to identify the valid upgrade targets for a MariaDB version 10.5.17 DB instance, run the
following AWS CLI command:

For Linux, macOS, or Unix:

aws rds describe-db-engine-versions \
--engine mariadb \
--engine-version 10.5.17 \
--query "DBEngineVersions[*].ValidUpgradeTarget[*].{EngineVersion:EngineVersion}" \
--output text

For Windows:

aws rds describe-db-engine-versions ^
--engine mariadb ^
--engine-version 10.5.17 ^
--query "DBEngineVersions[*].ValidUpgradeTarget[*].{EngineVersion:EngineVersion}" ^
--output text

Amazon RDS takes two or more DB snapshots during the upgrade process. Amazon RDS takes up to
two snapshots of the DB instance before making any upgrade changes. If the upgrade doesn't work for
your databases, you can restore one of these snapshots to create a DB instance running the old version.
Amazon RDS takes another snapshot of the DB instance when the upgrade completes. Amazon RDS
takes these snapshots regardless of whether AWS Backup manages the backups for the DB instance.
Note
Amazon RDS only takes DB snapshots if you have set the backup retention period for your DB
instance to a number greater than 0. To change your backup retention period, see Modifying an
Amazon RDS DB instance (p. 401).

After the upgrade is complete, you can't revert to the previous version of the database engine. If you
want to return to the previous version, restore the first DB snapshot taken to create a new DB instance.
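
For example, the following AWS CLI sketch lists the snapshots for an instance and then restores one of them to a new DB instance. The instance identifiers are examples, and the snapshot identifier is a placeholder for the pre-upgrade snapshot that you identify with the first command.

aws rds describe-db-snapshots \
    --db-instance-identifier mydbinstance \
    --query "DBSnapshots[*].[DBSnapshotIdentifier,SnapshotCreateTime]" \
    --output text

aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier mydbinstance-rollback \
    --db-snapshot-identifier pre-upgrade-snapshot-id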

You control when to upgrade your DB instance to a new version supported by Amazon RDS. This level of
control helps you maintain compatibility with specific database versions and test new versions with your
application before deploying in production. When you are ready, you can perform version upgrades at
the times that best fit your schedule.

If your DB instance is using read replication, you must upgrade all of the Read Replicas before upgrading
the source instance.

If your DB instance is in a Multi-AZ deployment, both the primary and standby DB instances are
upgraded. The primary and standby DB instances are upgraded at the same time and you will experience
an outage until the upgrade is complete. The time for the outage varies based on your database engine,
engine version, and the size of your DB instance.

Major version upgrades for MariaDB


Major version upgrades can contain database changes that are not backward-compatible with existing
applications. As a result, Amazon RDS doesn't apply major version upgrades automatically. You must
manually modify your DB instance. We recommend that you thoroughly test any upgrade before
applying it to your production instances.


Amazon RDS supports the following in-place upgrades for major versions of the MariaDB database
engine:

• Any MariaDB version to MariaDB 10.11


• Any MariaDB version to MariaDB 10.6
• MariaDB 10.4 to MariaDB 10.5
• MariaDB 10.3 to MariaDB 10.4

To perform a major version upgrade to a MariaDB version lower than 10.6, upgrade to each major
version in order. For example, to upgrade from version 10.3 to version 10.5, upgrade in the following
order: 10.3 to 10.4 and then 10.4 to 10.5.
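
For example, the following AWS CLI sketch shows the two upgrade hops from version 10.3 to version 10.5. The target version values are placeholders; choose valid upgrade targets returned by describe-db-engine-versions, and wait for the first upgrade to finish before starting the second.

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --engine-version 10.4-target-version \
    --allow-major-version-upgrade \
    --apply-immediately

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --engine-version 10.5-target-version \
    --allow-major-version-upgrade \
    --apply-immediately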

If you are using a custom parameter group, and you perform a major version upgrade, you must specify
either a default parameter group for the new DB engine version or create your own custom parameter
group for the new DB engine version. Associating the new parameter group with the DB instance requires
a customer-initiated database reboot after the upgrade completes. The instance's parameter group
status will show pending-reboot if the instance needs to be rebooted to apply the parameter group
changes. An instance's parameter group status can be viewed in the AWS Management Console or by
using a "describe" call such as describe-db-instances.
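
For example, the following AWS CLI sketch returns the parameter group name and apply status for a DB instance; the instance identifier is an example.

aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query "DBInstances[*].DBParameterGroups[*].[DBParameterGroupName,ParameterApplyStatus]" \
    --output text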

Upgrading a MariaDB DB instance


For information about manually or automatically upgrading a MariaDB DB instance, see Upgrading a DB
instance engine version (p. 429).

Automatic minor version upgrades for MariaDB


If you specify the following settings when creating or modifying a DB instance, you can have your DB
instance automatically upgraded.

• The Auto minor version upgrade setting is enabled.


• The Backup retention period setting is greater than 0.

In the AWS Management Console, these settings are under Additional configuration.


For more information about these settings, see Settings for DB instances (p. 402).
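
You can also turn on both settings with the AWS CLI. The following is a minimal sketch; the instance identifier and backup retention period are examples.

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --auto-minor-version-upgrade \
    --backup-retention-period 7 \
    --apply-immediately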

For some RDS for MariaDB major versions in some AWS Regions, one minor version is designated
by RDS as the automatic upgrade version. After a minor version has been tested and approved by
Amazon RDS, the minor version upgrade occurs automatically during your maintenance window. RDS
doesn't automatically set newer released minor versions as the automatic upgrade version. Before RDS
designates a newer automatic upgrade version, several criteria are considered, such as the following:

• Known security issues


• Bugs in the MariaDB community version
• Overall fleet stability since the minor version was released

You can use the following AWS CLI command to determine the current automatic minor upgrade target
version for a specified MariaDB minor version in a specific AWS Region.

For Linux, macOS, or Unix:

aws rds describe-db-engine-versions \
--engine mariadb \
--engine-version minor-version \
--region region \
--query "DBEngineVersions[*].ValidUpgradeTarget[*].{AutoUpgrade:AutoUpgrade,EngineVersion:EngineVersion}" \
--output text

For Windows:

aws rds describe-db-engine-versions ^
--engine mariadb ^
--engine-version minor-version ^
--region region ^
--query "DBEngineVersions[*].ValidUpgradeTarget[*].{AutoUpgrade:AutoUpgrade,EngineVersion:EngineVersion}" ^
--output text

For example, the following AWS CLI command determines the automatic minor upgrade target for
MariaDB minor version 10.5.16 in the US East (Ohio) AWS Region (us-east-2).

For Linux, macOS, or Unix:

aws rds describe-db-engine-versions \
--engine mariadb \
--engine-version 10.5.16 \
--region us-east-2 \
--query "DBEngineVersions[*].ValidUpgradeTarget[*].{AutoUpgrade:AutoUpgrade,EngineVersion:EngineVersion}" \
--output table

For Windows:

aws rds describe-db-engine-versions ^
--engine mariadb ^
--engine-version 10.5.16 ^
--region us-east-2 ^
--query "DBEngineVersions[*].ValidUpgradeTarget[*].{AutoUpgrade:AutoUpgrade,EngineVersion:EngineVersion}" ^
--output table


Your output is similar to the following.

----------------------------------
| DescribeDBEngineVersions |
+--------------+-----------------+
| AutoUpgrade | EngineVersion |
+--------------+-----------------+
| True | 10.5.17 |
| False | 10.5.18 |
| False | 10.5.19 |
| False | 10.6.5 |
| False | 10.6.7 |
| False | 10.6.8 |
| False | 10.6.10 |
| False | 10.6.11 |
| False | 10.6.12 |
+--------------+-----------------+

In this example, the AutoUpgrade value is True for MariaDB version 10.5.17. So, the automatic minor
upgrade target is MariaDB version 10.5.17.

A MariaDB DB instance is automatically upgraded during your maintenance window if the following
criteria are met:

• The Auto minor version upgrade setting is enabled.


• The Backup retention period setting is greater than 0.
• The DB instance is running a minor DB engine version that is less than the current automatic upgrade
minor version.

For more information, see Automatically upgrading the minor engine version (p. 431).

Using a read replica to reduce downtime when upgrading a MariaDB database

In most cases, a blue/green deployment is the best option to reduce downtime when upgrading a
MariaDB DB instance. For more information, see Using Amazon RDS Blue/Green Deployments for
database updates (p. 566).

If you can't use a blue/green deployment and your MariaDB DB instance is currently in use with a
production application, you can use the following procedure to upgrade the database version for your DB
instance. This procedure can reduce the amount of downtime for your application.

By using a read replica, you can perform most of the maintenance steps ahead of time and minimize the
necessary changes during the actual outage. With this technique, you can test and prepare the new DB
instance without making any changes to your existing DB instance.

The following procedure shows an example of upgrading from MariaDB version 10.5 to MariaDB version
10.6. You can use the same general steps for upgrades to other major versions.

To upgrade a MariaDB database while a DB instance is in use

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Create a read replica of your MariaDB 10.5 DB instance. This process creates an upgradable copy of
your database. Other read replicas of the DB instance might also exist.

a. In the console, choose Databases, and then choose the DB instance that you want to upgrade.


b. For Actions, choose Create read replica.


c. Provide a value for DB instance identifier for your read replica and ensure that the DB instance
class and other settings match your MariaDB 10.5 DB instance.
d. Choose Create read replica.
3. (Optional) When the read replica has been created and Status shows Available, convert the read
replica into a Multi-AZ deployment and enable backups.

By default, a read replica is created as a Single-AZ deployment with backups disabled. Because the
read replica ultimately becomes the production DB instance, it is a best practice to configure a Multi-
AZ deployment and enable backups now.

a. In the console, choose Databases, and then choose the read replica that you just created.
b. Choose Modify.
c. For Multi-AZ deployment, choose Create a standby instance.
d. For Backup Retention Period, choose a positive nonzero value, such as 3 days, and then choose
Continue.
e. For Scheduling of modifications, choose Apply immediately.
f. Choose Modify DB instance.
4. When the read replica Status shows Available, upgrade the read replica to MariaDB 10.6.

a. In the console, choose Databases, and then choose the read replica that you just created.
b. Choose Modify.
c. For DB engine version, choose the MariaDB 10.6 version to upgrade to, and then choose
Continue.
d. For Scheduling of modifications, choose Apply immediately.
e. Choose Modify DB instance to start the upgrade.
5. When the upgrade is complete and Status shows Available, verify that the upgraded read replica is
up-to-date with the source MariaDB 10.5 DB instance. To verify, connect to the read replica and run
the SHOW REPLICA STATUS command. If the Seconds_Behind_Master field is 0, then replication
is up-to-date.
Note
Previous versions of MariaDB used SHOW SLAVE STATUS instead of SHOW REPLICA
STATUS. If you are using a MariaDB version before 10.6, then use SHOW SLAVE STATUS.
6. (Optional) Create a read replica of your read replica.

If you want the DB instance to have a read replica after it is promoted to a standalone DB instance,
you can create the read replica now.

a. In the console, choose Databases, and then choose the read replica that you just upgraded.
b. For Actions, choose Create read replica.
c. Provide a value for DB instance identifier for your read replica and ensure that the DB instance
class and other settings match your MariaDB 10.5 DB instance.
d. Choose Create read replica.
7. (Optional) Configure a custom DB parameter group for the read replica.

If you want the DB instance to use a custom parameter group after it is promoted to a standalone
DB instance, you can create the DB parameter group now and associate it with the read replica.

a. Create a custom DB parameter group for MariaDB 10.6. For instructions, see Creating a DB
parameter group (p. 350).
b. Modify the parameters that you want to change in the DB parameter group you just created. For
instructions, see Modifying parameters in a DB parameter group (p. 352).

c. In the console, choose Databases, and then choose the read replica.
d. Choose Modify.
e. For DB parameter group, choose the MariaDB 10.6 DB parameter group you just created, and
then choose Continue.
f. For Scheduling of modifications, choose Apply immediately.
g. Choose Modify DB instance to start the upgrade.
8. Make your MariaDB 10.6 read replica a standalone DB instance.
Important
When you promote your MariaDB 10.6 read replica to a standalone DB instance, it is no
longer a replica of your MariaDB 10.5 DB instance. We recommend that you promote
your MariaDB 10.6 read replica during a maintenance window when your source MariaDB
10.5 DB instance is in read-only mode and all write operations are suspended. When the
promotion is completed, you can direct your write operations to the upgraded MariaDB 10.6
DB instance to ensure that no write operations are lost.
In addition, we recommend that, before promoting your MariaDB 10.6 read replica, you
perform all necessary data definition language (DDL) operations on your MariaDB 10.6
read replica. An example is creating indexes. This approach avoids negative effects on the
performance of the MariaDB 10.6 read replica after it has been promoted. To promote a
read replica, use the following procedure.

a. In the console, choose Databases, and then choose the read replica that you just upgraded.
b. For Actions, choose Promote.
c. Choose Yes to enable automated backups for the read replica instance. For more information,
see Working with backups (p. 591).
d. Choose Continue.
e. Choose Promote Read Replica.
9. You now have an upgraded version of your MariaDB database. At this point, you can direct your
applications to the new MariaDB 10.6 DB instance.
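
If you prefer to script the main steps of this procedure, the following AWS CLI sketch mirrors the create, upgrade, and promote steps described above. The identifiers, target engine version, and backup retention period are examples, and the sketch omits the Multi-AZ, verification, and parameter group steps.

# Create a read replica of the source DB instance (step 2).
aws rds create-db-instance-read-replica \
    --db-instance-identifier mydbinstance-replica \
    --source-db-instance-identifier mydbinstance

# Upgrade the read replica to the target major version (step 4).
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance-replica \
    --engine-version 10.6-target-version \
    --allow-major-version-upgrade \
    --apply-immediately

# Promote the read replica to a standalone DB instance with backups enabled (step 8).
aws rds promote-read-replica \
    --db-instance-identifier mydbinstance-replica \
    --backup-retention-period 3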


Importing data into a MariaDB DB instance


You can use several different techniques to import data into an RDS for MariaDB DB instance. The best
approach depends on the source of the data, the amount of data, and whether the import is done one
time or is ongoing. If you are migrating an application along with the data, also consider the amount of
downtime that you are willing to experience.

Find techniques to import data into an RDS for MariaDB DB instance in the following list.

• Source: Existing MariaDB DB instance
  Amount of data: Any
  One time or ongoing: One time or ongoing
  Application downtime: Minimal
  Technique: Create a read replica for ongoing replication. Promote the read replica for one-time
  creation of a new DB instance.
  More information: Working with DB instance read replicas (p. 438)

• Source: Existing MariaDB or MySQL database
  Amount of data: Small
  One time or ongoing: One time
  Application downtime: Some
  Technique: Copy the data directly to your MySQL DB instance using a command-line utility.
  More information: Importing data from a MariaDB or MySQL database to a MariaDB or MySQL DB
  instance (p. 1297)

• Source: Data not stored in an existing database
  Amount of data: Medium
  One time or ongoing: One time
  Application downtime: Some
  Technique: Create flat files and import them using the mysqlimport utility.
  More information: Importing data from any source to a MariaDB or MySQL DB instance (p. 1313)

• Source: Existing MariaDB or MySQL database on premises or on Amazon EC2
  Amount of data: Any
  One time or ongoing: Ongoing
  Application downtime: Minimal
  Technique: Configure replication with an existing MariaDB or MySQL database as the replication
  source. You can configure replication into a MariaDB DB instance using MariaDB global transaction
  identifiers (GTIDs) when the external instance is MariaDB version 10.0.24 or higher, or using binary
  log coordinates for MySQL instances or MariaDB instances on earlier versions than 10.0.24. MariaDB
  GTIDs are implemented differently than MySQL GTIDs, which aren't supported by Amazon RDS.
  More information: Configuring binary log file position replication with an external source
  instance (p. 1331) and Importing data to an Amazon RDS MariaDB or MySQL DB instance with
  reduced downtime (p. 1299)

• Source: Any existing database
  Amount of data: Any
  One time or ongoing: One time or ongoing
  Application downtime: Minimal
  Technique: Use AWS Database Migration Service to migrate the database with minimal downtime
  and, for many database engines, continue ongoing replication.
  More information: What is AWS Database Migration Service and Using a MySQL-compatible database
  as a target for AWS DMS in the AWS Database Migration Service User Guide

Note
The mysql system database contains authentication and authorization information required
to log into your DB instance and access your data. Dropping, altering, renaming, or truncating
tables, data, or other contents of the mysql database in your DB instance can result in errors and
might render the DB instance and your data inaccessible. If this occurs, the DB instance can be
restored from a snapshot using the AWS CLI restore-db-instance-from-db-snapshot or
recovered using restore-db-instance-to-point-in-time commands.

Importing data from a MariaDB or MySQL database to a MariaDB or MySQL DB instance

You can also import data from an existing MariaDB or MySQL database to a MySQL or MariaDB DB
instance. You do so by copying the database with mysqldump and piping it directly into the MariaDB
or MySQL DB instance. The mysqldump command line utility is commonly used to make backups and
transfer data from one MariaDB or MySQL server to another. It's included with MySQL and MariaDB client
software.


Note
If you are using a MySQL DB instance and your scenario supports it, it's easier to move data
in and out of Amazon RDS by using backup files and Amazon S3. For more information, see
Restoring a backup into a MySQL DB instance (p. 1680).

A typical mysqldump command to move data from an external database to an Amazon RDS DB instance
looks similar to the following.

mysqldump -u local_user \
--databases database_name \
--single-transaction \
--compress \
--order-by-primary \
-plocal_password | mysql -u RDS_user \
--port=port_number \
--host=host_name \
-pRDS_password

Important
Make sure not to leave a space between the -p option and the entered password.
Specify credentials other than the prompts shown here as a security best practice.

Make sure that you're aware of the following recommendations and considerations:

• Exclude the following schemas from the dump file: sys, performance_schema, and
information_schema. The mysqldump utility excludes these schemas by default.
• If you need to migrate users and privileges, consider using a tool that generates the data control
language (DCL) for recreating them, such as the pt-show-grants utility.
• To perform the import, make sure the user doing so has access to the DB instance. For more
information, see Controlling access with security groups (p. 2680).

The parameters used are as follows:

• -u local_user – Use to specify a user name. In the first usage of this parameter, you specify the
name of a user account on the local MariaDB or MySQL database identified by the --databases
parameter.
• --databases database_name – Use to specify the name of the database on the local MariaDB or
MySQL instance that you want to import into Amazon RDS.
• --single-transaction – Use to ensure that all of the data loaded from the local database is
consistent with a single point in time. If there are other processes changing the data while mysqldump
is reading it, using this parameter helps maintain data integrity.
• --compress – Use to reduce network bandwidth consumption by compressing the data from the local
database before sending it to Amazon RDS.
• --order-by-primary – Use to reduce load time by sorting each table's data by its primary key.
• -plocal_password – Use to specify a password. In the first usage of this parameter, you specify the
password for the user account identified by the first -u parameter.
• -u RDS_user – Use to specify a user name. In the second usage of this parameter, you specify the
name of a user account on the default database for the MariaDB or MySQL DB instance identified by
the --host parameter.
• --port port_number – Use to specify the port for your MariaDB or MySQL DB instance. By default,
this is 3306 unless you changed the value when creating the instance.
• --host host_name – Use to specify the Domain Name System (DNS) name from the Amazon RDS DB
instance endpoint, for example, myinstance.123456789012.us-east-1.rds.amazonaws.com.
You can find the endpoint value in the instance details in the Amazon RDS Management Console.


• -pRDS_password – Use to specify a password. In the second usage of this parameter, you specify the
password for the user account identified by the second -u parameter.

Make sure to create any stored procedures, triggers, functions, or events manually in your Amazon RDS
database. If you have any of these objects in the database that you are copying, then exclude them when
you run mysqldump. To do so, include the following parameters with your mysqldump command: --
routines=0 --triggers=0 --events=0.

The following example copies the world sample database on the local host to a MySQL DB instance.

For Linux, macOS, or Unix:

sudo mysqldump -u localuser \
--databases world \
--single-transaction \
--compress \
--order-by-primary \
--routines=0 \
--triggers=0 \
--events=0 \
-plocalpassword | mysql -u rdsuser \
--port=3306 \
--host=myinstance.123456789012.us-east-1.rds.amazonaws.com \
-prdspassword

For Windows, run the following command in a command prompt that has been opened by right-clicking
Command Prompt on the Windows programs menu and choosing Run as administrator:

mysqldump -u localuser ^
--databases world ^
--single-transaction ^
--compress ^
--order-by-primary ^
--routines=0 ^
--triggers=0 ^
--events=0 ^
-plocalpassword | mysql -u rdsuser ^
--port=3306 ^
--host=myinstance.123456789012.us-east-1.rds.amazonaws.com ^
-prdspassword

Note
Specify credentials other than the prompts shown here as a security best practice.

Importing data to an Amazon RDS MariaDB or MySQL DB instance with reduced downtime

In some cases, you might need to import data from an external MariaDB or MySQL database that
supports a live application to a MariaDB DB instance, a MySQL DB instance, or a MySQL Multi-AZ
DB cluster. Use the following procedure to minimize the impact on availability of applications. This
procedure can also help if you are working with a very large database. Using this procedure, you can
reduce the cost of the import by reducing the amount of data that is passed across the network to AWS.

In this procedure, you transfer a copy of your database data to an Amazon EC2 instance and import the
data into a new Amazon RDS database. You then use replication to bring the Amazon RDS database
up-to-date with your live external instance, before redirecting your application to the Amazon RDS


database. Configure MariaDB replication based on global transaction identifiers (GTIDs) if the external
instance is MariaDB 10.0.24 or higher and the target instance is RDS for MariaDB. Otherwise, configure
replication based on binary log coordinates. We recommend GTID-based replication if your external
database supports it because GTID-based replication is a more reliable method. For more information,
see Global transaction ID in the MariaDB documentation.
Note
If you want to import data into a MySQL DB instance and your scenario supports it, we
recommend moving data in and out of Amazon RDS by using backup files and Amazon S3. For
more information, see Restoring a backup into a MySQL DB instance (p. 1680).

Note
We don't recommend that you use this procedure with source MySQL databases from MySQL
versions earlier than version 5.5 because of potential replication issues. For more information,
see Replication compatibility between MySQL versions in the MySQL documentation.

Create a copy of your existing database


The first step in the process of migrating a large amount of data to an RDS for MariaDB or RDS for
MySQL database with minimal downtime is to create a copy of the source data.


You can use the mysqldump utility to create a database backup in either SQL or delimited-text format.
We recommend that you do a test run with each format in a non-production environment to see which
method minimizes the amount of time that mysqldump runs.

We also recommend that you weigh mysqldump performance against the benefit offered by using the
delimited-text format for loading. A backup using delimited-text format creates a tab-separated text
file for each table being dumped. To reduce the amount of time required to import your database, you
can load these files in parallel using the LOAD DATA LOCAL INFILE command. For more information
about choosing a mysqldump format and then loading the data, see Using mysqldump for backups in
the MySQL documentation.
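
For example, a minimal sketch of loading two of the resulting delimited-text files in parallel from a client host might look like the following. The host, credentials, database, and table names are placeholders, and you might need to add FIELDS and LINES clauses that match the options you used with mysqldump. Specify credentials other than the prompts shown here as a security best practice.

mysql --local-infile=1 -h myinstance.123456789012.us-east-1.rds.amazonaws.com \
    -u RDS_user -pRDS_password database_name \
    -e "LOAD DATA LOCAL INFILE 'table1.txt' INTO TABLE table1" &
mysql --local-infile=1 -h myinstance.123456789012.us-east-1.rds.amazonaws.com \
    -u RDS_user -pRDS_password database_name \
    -e "LOAD DATA LOCAL INFILE 'table2.txt' INTO TABLE table2" &
wait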

Before you start the backup operation, make sure to set the replication options on the MariaDB or
MySQL database that you are copying to Amazon RDS. The replication options include turning on
binary logging and setting a unique server ID. Setting these options causes your server to start logging
database transactions and prepares it to be a source replication instance later in this process.
Note
Use the --single-transaction option with mysqldump because it dumps a consistent
state of the database. To ensure a valid dump file, don't run data definition language (DDL)
statements while mysqldump is running. You can schedule a maintenance window for these
operations.
Exclude the following schemas from the dump file: sys, performance_schema, and
information_schema. The mysqldump utility excludes these schemas by default.
To migrate users and privileges, consider using a tool that generates the data control language
(DCL) for recreating them, such as the pt-show-grants utility.
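
For example, a minimal pt-show-grants sketch (with placeholder connection values) that writes the DCL statements to a file for review might look like the following.

pt-show-grants --host source-db-host --user admin --ask-pass > grants.sql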

To set replication options


1. Edit the my.cnf file (this file is usually under /etc).

sudo vi /etc/my.cnf


Add the log_bin and server_id options to the [mysqld] section. The log_bin option provides
a file name identifier for binary log files. The server_id option provides a unique identifier for the
server in source-replica relationships.

The following example shows the updated [mysqld] section of a my.cnf file.

[mysqld]
log-bin=mysql-bin
server-id=1

For more information, see the MySQL documentation.


2. For replication with a Multi-AZ DB cluster, set the ENFORCE_GTID_CONSISTENCY and GTID_MODE
parameters to ON.

mysql> SET @@GLOBAL.ENFORCE_GTID_CONSISTENCY = ON;

mysql> SET @@GLOBAL.GTID_MODE = ON;

These settings aren't required for replication with a DB instance.


3. Restart the mysql service.

sudo service mysqld restart

To create a backup copy of your existing database


1. Create a backup of your data using the mysqldump utility, specifying either SQL or delimited-text
format.

Specify --master-data=2 to create a backup file that can be used to start replication between
servers. For more information, see the mysqldump documentation.

To improve performance and ensure data integrity, use the --order-by-primary and --single-
transaction options of mysqldump.

To avoid including the MySQL system database in the backup, do not use the --all-databases
option with mysqldump. For more information, see Creating a data snapshot using mysqldump in the
MySQL documentation.

Use chmod if necessary to make sure that the directory where the backup file is being created is
writeable.
Important
On Windows, run the command window as an administrator.
• To produce SQL output, use the following command.

For Linux, macOS, or Unix:

sudo mysqldump \
--databases database_name \
--master-data=2 \
--single-transaction \
--order-by-primary \
-r backup.sql \
-u local_user \
-p password

Note
Specify credentials other than the prompts shown here as a security best practice.

For Windows:

mysqldump ^
--databases database_name ^
--master-data=2 ^
--single-transaction ^
--order-by-primary ^
-r backup.sql ^
-u local_user ^
-p password

Note
Specify credentials other than the prompts shown here as a security best practice.
• To produce delimited-text output, use the following command.

For Linux, macOS, or Unix:

sudo mysqldump \
--tab=target_directory \
--fields-terminated-by ',' \
--fields-enclosed-by '"' \
--lines-terminated-by 0x0d0a \
database_name \
--master-data=2 \
--single-transaction \
--order-by-primary \
-p password

For Windows:

mysqldump ^
--tab=target_directory ^
--fields-terminated-by "," ^
--fields-enclosed-by """ ^
--lines-terminated-by 0x0d0a ^
database_name ^
--master-data=2 ^
--single-transaction ^
--order-by-primary ^
-p password

Note
Specify credentials other than the prompts shown here as a security best practice.
Make sure to create any stored procedures, triggers, functions, or events manually in
your Amazon RDS database. If you have any of these objects in the database that you
are copying, exclude them when you run mysqldump. To do so, include the following
arguments with your mysqldump command: --routines=0 --triggers=0 --
events=0.

When using the delimited-text format, a CHANGE MASTER TO comment is returned when you
run mysqldump. This comment contains the master log file name and position. If the external
instance is other than MariaDB version 10.0.24 or higher, note the values for MASTER_LOG_FILE
and MASTER_LOG_POS. You need these values when setting up replication.


-- Position to start replication or point-in-time recovery from
--
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin-changelog.000031', MASTER_LOG_POS=107;

If you are using SQL format, you can get the master log file name and position in the CHANGE
MASTER TO comment in the backup file. If the external instance is MariaDB version 10.0.24 or
higher, you can get the GTID in the next step.
2. If the external instance you are using is MariaDB version 10.0.24 or higher, you use GTID-based
replication. Run SHOW MASTER STATUS on the external MariaDB instance to get the binary log file
name and position, then convert them to a GTID by running BINLOG_GTID_POS on the external
MariaDB instance.

SELECT BINLOG_GTID_POS('binary log file name', binary log file position);

Note the GTID returned; you need it to configure replication.


3. Compress the copied data to reduce the amount of network resources needed to copy your data to the
Amazon RDS database. Note the size of the backup file. You need this information when determining
how large an Amazon EC2 instance to create. When you are done, compress the backup file using GZIP
or your preferred compression utility.
• To compress SQL output, use the following command.

gzip backup.sql

• To compress delimited-text output, use the following command.

tar -zcvf backup.tar.gz target_directory

Create an Amazon EC2 instance and copy the compressed database

Copying your compressed database backup file to an Amazon EC2 instance takes fewer network
resources than doing a direct copy of uncompressed data between database instances. After your data is
in Amazon EC2, you can copy it from there directly to your MariaDB or MySQL database. For you to save
on the cost of network resources, your Amazon EC2 instance must be in the same AWS Region as your
Amazon RDS DB instance. Having the Amazon EC2 instance in the same AWS Region as your Amazon
RDS database also reduces network latency during the import.


To create an Amazon EC2 instance and copy your data


1. In the AWS Region where you plan to create the RDS database, create a virtual private cloud (VPC),
a VPC security group, and a VPC subnet. Ensure that the inbound rules for your VPC security group
allow the IP addresses required for your application to connect to AWS. You can specify a range
of IP addresses (for example, 203.0.113.0/24), or another VPC security group. You can use the
Amazon VPC Management Console to create and manage VPCs, subnets, and security groups. For
more information, see Getting started with Amazon VPC in the Amazon Virtual Private Cloud Getting
Started Guide.
2. Open the Amazon EC2 Management Console and choose the AWS Region to contain both your
Amazon EC2 instance and your Amazon RDS database. Launch an Amazon EC2 instance using the VPC,
subnet, and security group that you created in Step 1. Ensure that you select an instance type with
enough storage for your database backup file when it is uncompressed. For details on Amazon EC2
instances, see Getting started with Amazon EC2 Linux instances in the Amazon Elastic Compute Cloud
User Guide for Linux.
3. To connect to your Amazon RDS database from your Amazon EC2 instance, edit your VPC security
group. Add an inbound rule specifying the private IP address of your EC2 instance. You can find the
private IP address on the Details tab of the Instance pane in the EC2 console window. To edit the VPC
security group and add an inbound rule, choose Security Groups in the EC2 console navigation pane,
choose your security group, and then add an inbound rule for MySQL or Aurora specifying the private
IP address of your EC2 instance. To learn how to add an inbound rule to a VPC security group, see
Adding and removing rules in the Amazon VPC User Guide.
4. Copy your compressed database backup file from your local system to your Amazon EC2 instance.
Use chmod if necessary to make sure that you have write permission for the target directory of the
Amazon EC2 instance. You can use scp or a Secure Shell (SSH) client to copy the file. The following is
an example.

$ scp -r -i key_pair.pem backup.sql.gz ec2-user@EC2_DNS:/target_directory/backup.sql.gz

Important
Be sure to copy sensitive data using a secure network transfer protocol.


5. Connect to your Amazon EC2 instance and install the latest updates and the MySQL client tools using
the following commands.

sudo yum update -y


sudo yum install mysql -y

For more information, see Connect to your instance in the Amazon Elastic Compute Cloud User Guide
for Linux.
Important
This example installs the MySQL client on an Amazon Machine Image (AMI) for an Amazon
Linux distribution. To install the MySQL client on a different distribution, such as Ubuntu or
Red Hat Enterprise Linux, this example doesn't work. For information about installing MySQL,
see Installing and Upgrading MySQL in the MySQL documentation.
6. While connected to your Amazon EC2 instance, decompress your database backup file. The following
are examples.
• To decompress SQL output, use the following command.

gzip backup.sql.gz -d

• To decompress delimited-text output, use the following command.

tar xzvf backup.tar.gz

Create a MySQL or MariaDB database and import data from your Amazon EC2 instance

By creating a MariaDB DB instance, a MySQL DB instance, or a MySQL Multi-AZ DB cluster in the same
AWS Region as your Amazon EC2 instance, you can import the database backup file from EC2 faster than
over the internet.


To create a MariaDB or MySQL database and import your data


1. Determine which DB instance class and what amount of storage space is required to support the
expected workload for this Amazon RDS database. As part of this process, decide what is sufficient
space and processing capacity for your data load procedures. Also decide what is required to handle
the production workload. You can estimate this based on the size and resources of the source
MariaDB or MySQL database. For more information, see DB instance classes (p. 11).
2. Create a DB instance or Multi-AZ DB cluster in the AWS Region that contains your Amazon EC2
instance.

To create a MySQL Multi-AZ DB cluster, follow the instructions in Creating a Multi-AZ DB cluster (p. 508).

To create a MariaDB or MySQL DB instance, follow the instructions in Creating an Amazon RDS DB
instance (p. 300) and use the following guidelines:

• Specify a DB engine version that is compatible with your source DB instance, as follows:
• If your source instance is MySQL 5.5.x, the Amazon RDS DB instance must be MySQL.
• If your source instance is MySQL 5.6.x or 5.7.x, the Amazon RDS DB instance must be MySQL or
MariaDB.
• If your source instance is MySQL 8.0.x, the Amazon RDS DB instance must be MySQL 8.0.x.
• If your source instance is MariaDB 5.5 or higher, the Amazon RDS DB instance must be MariaDB.
• Specify the same virtual private cloud (VPC) and VPC security group as for your Amazon EC2
instance. This approach ensures that your Amazon EC2 instance and your Amazon RDS instance
are visible to each other over the network. Make sure your DB instance is publicly accessible. To
set up replication with your source database as described later, your DB instance must be publicly
accessible.
• Don't configure multiple Availability Zones, backup retention, or read replicas until after you have
imported the database backup. When that import is completed, you can configure Multi-AZ and
backup retention for the production instance.
3. Review the default configuration options for the Amazon RDS database. If the default parameter
group for the database doesn't have the configuration options that you want, find a different one
that does or create a new parameter group. For more information on creating a parameter group,
see Working with parameter groups (p. 347).
4. Connect to the new Amazon RDS database as the master user. Create the users required to support
the administrators, applications, and services that need to access the instance. The hostname for the
Amazon RDS database is the Endpoint value for this instance without including the port number.
An example is mysampledb.123456789012.us-west-2.rds.amazonaws.com. You can find the
endpoint value in the database details in the Amazon RDS Management Console.
5. Connect to your Amazon EC2 instance. For more information, see Connect to your instance in the
Amazon Elastic Compute Cloud User Guide for Linux.
6. Connect to your Amazon RDS database as a remote host from your Amazon EC2 instance using the
mysql command. The following is an example.

mysql -h host_name -P 3306 -u db_master_user -p

The hostname is the Amazon RDS database endpoint.


7. At the mysql prompt, run the source command and pass it the name of your database dump file to
load the data into the Amazon RDS DB instance:

• For SQL format, use the following command.

mysql> source backup.sql;


• For delimited-text format, first create the database, if it isn't the default database you created
when setting up the Amazon RDS database.

mysql> create database database_name;
mysql> use database_name;

Then create the tables.

mysql> source table1.sql
mysql> source table2.sql
etc...

Then import the data.

mysql> LOAD DATA LOCAL INFILE 'table1.txt' INTO TABLE table1 FIELDS TERMINATED BY ','
ENCLOSED BY '"' LINES TERMINATED BY '0x0d0a';
mysql> LOAD DATA LOCAL INFILE 'table2.txt' INTO TABLE table2 FIELDS TERMINATED BY ','
ENCLOSED BY '"' LINES TERMINATED BY '0x0d0a';
etc...

To improve performance, you can perform these operations in parallel from multiple connections
so that all of your tables are created and then loaded at the same time.
Note
If you used any data-formatting options with mysqldump when you initially dumped
the table, make sure to use the same options with mysqlimport or LOAD DATA LOCAL
INFILE to ensure proper interpretation of the data file contents.
8. Run a simple SELECT query against one or two of the tables in the imported database to verify that
the import was successful.

If you no longer need the Amazon EC2 instance used in this procedure, terminate the EC2 instance
to reduce your AWS resource usage. To terminate an EC2 instance, see Terminating an instance in the
Amazon EC2 User Guide.

Replicate between your external database and new Amazon RDS database

Your source database was likely updated during the time that it took to copy and transfer the data to the
MariaDB or MySQL database. Thus, you can use replication to bring the copied database up-to-date with
the source database.


The permissions required to start replication on an Amazon RDS database are restricted and not
available to your Amazon RDS master user. Because of this, make sure to use either the Amazon RDS
mysql.rds_set_external_master (p. 1769) command or the mysql.rds_set_external_master_gtid (p. 1345)
command to configure replication, and the mysql.rds_start_replication (p. 1780) command to start
replication between your live database and your Amazon RDS database.

To start replication
Earlier, you turned on binary logging and set a unique server ID for your source database. Now you can
set up your Amazon RDS database as a replica with your live database as the source replication instance.

1. In the Amazon RDS Management Console, add the IP address of the server that hosts the source
database to the VPC security group for the Amazon RDS database. For more information on modifying
a VPC security group, see Security groups for your VPC in the Amazon Virtual Private Cloud User Guide.

You might also need to configure your local network to permit connections from the IP address of
your Amazon RDS database, so that it can communicate with your source instance. To find the IP
address of the Amazon RDS database, use the host command.

host rds_db_endpoint

The hostname is the DNS name from the Amazon RDS database endpoint, for example
myinstance.123456789012.us-east-1.rds.amazonaws.com. You can find the endpoint value
in the instance details in the Amazon RDS Management Console.
2. Using the client of your choice, connect to the source instance and create a user to be used for
replication. This account is used solely for replication and must be restricted to your domain to
improve security. The following is an example.

MySQL 5.5, 5.6, and 5.7

CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED BY 'password';

MySQL 8.0


CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED WITH mysql_native_password BY 'password';

Note
Specify credentials other than the prompts shown here as a security best practice.
3. For the source instance, grant REPLICATION CLIENT and REPLICATION SLAVE privileges to
your replication user. For example, to grant the REPLICATION CLIENT and REPLICATION SLAVE
privileges on all databases for the 'repl_user' user for your domain, issue the following command.

MySQL 5.5, 5.6, and 5.7

GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com'
IDENTIFIED BY 'password';

MySQL 8.0

GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com';

Note
Specify credentials other than the prompts shown here as a security best practice.
4. If you used SQL format to create your backup file and the external instance is not MariaDB 10.0.24 or
higher, look at the contents of that file.

cat backup.sql

The file includes a CHANGE MASTER TO comment that contains the master log file name and
position. This comment is included in the backup file when you use the --master-data option with
mysqldump. Note the values for MASTER_LOG_FILE and MASTER_LOG_POS.

--
-- Position to start replication or point-in-time recovery from
--
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin-changelog.000031', MASTER_LOG_POS=107;

If you used delimited text format to create your backup file and the external instance isn't MariaDB
10.0.24 or higher, you should already have binary log coordinates from step 1 of the procedure at "To
create a backup copy of your existing database" in this topic.

If the external instance is MariaDB 10.0.24 or higher, you should already have the GTID from which to
start replication from step 2 of the procedure at "To create a backup copy of your existing database" in
this topic.
5. Make the Amazon RDS database the replica. If the external instance isn't MariaDB 10.0.24 or higher,
connect to the Amazon RDS database as the master user and identify the source database as the
source replication instance by using the mysql.rds_set_external_master (p. 1769) command. Use the
master log file name and master log position that you determined in the previous step if you have a
SQL format backup file. Or use the name and position that you determined when creating the backup
files if you used delimited-text format. The following is an example.

CALL mysql.rds_set_external_master ('myserver.mydomain.com', 3306,
'repl_user', 'password', 'mysql-bin-changelog.000031', 107, 0);


Note
Specify credentials other than the prompts shown here as a security best practice.

If the external instance is MariaDB 10.0.24 or higher, connect to the Amazon RDS database as
the master user and identify the source database as the source replication instance by using the
mysql.rds_set_external_master_gtid (p. 1345) command. Use the GTID that you determined in step 2
of the procedure at "To create a backup copy of your existing database" in this topic. The following is
an example.

CALL mysql.rds_set_external_master_gtid ('source_server_ip_address', 3306,
'ReplicationUser', 'password', 'GTID', 0);

The source_server_ip_address is the IP address of the source replication instance. An EC2 private
DNS address is currently not supported.
Note
Specify credentials other than the prompts shown here as a security best practice.
6. On the Amazon RDS database, issue the mysql.rds_start_replication (p. 1780) command to start
replication.

CALL mysql.rds_start_replication;

7. On the Amazon RDS database, run the SHOW REPLICA STATUS command to determine when the
replica is up-to-date with the source replication instance. The results of the SHOW REPLICA STATUS
command include the Seconds_Behind_Master field. When the Seconds_Behind_Master field
returns 0, then the replica is up-to-date with the source replication instance.
Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA STATUS.
If you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.

For a MariaDB 10.5, 10.6, or 10.11 DB instance, run the mysql.rds_replica_status (p. 1344) procedure
instead of the MySQL command.
8. After the Amazon RDS database is up-to-date, turn on automated backups so you can restore
that database if needed. You can turn on or modify automated backups for your Amazon RDS
database using the Amazon RDS Management Console. For more information, see Working with
backups (p. 591).

Redirect your live application to your Amazon RDS instance


After the MariaDB or MySQL database is up-to-date with the source replication instance, you can now
update your live application to use the Amazon RDS instance.


To redirect your live application to your MariaDB or MySQL database and stop
replication
1. In the VPC security group for the Amazon RDS database, add the IP address of the server that
hosts the application. For more information on modifying a VPC security group, see Security groups
for your VPC in the Amazon Virtual Private Cloud User Guide.
2. Verify that the Seconds_Behind_Master field in the SHOW REPLICA STATUS command results is 0,
which indicates that the replica is up-to-date with the source replication instance.

SHOW REPLICA STATUS;

Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA STATUS.
If you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.

For a MariaDB 10.5, 10.6, or 10.11 DB instance, run the mysql.rds_replica_status (p. 1344) procedure
instead of the MySQL command.
3. Close all connections to the source when their transactions complete.
4. Update your application to use the Amazon RDS database. This update typically involves changing the
connection settings to identify the hostname and port of the Amazon RDS database, the user account
and password to connect with, and the database to use.
5. Connect to the DB instance.

For a Multi-AZ DB cluster, connect to the writer DB instance.


6. Stop replication for the Amazon RDS instance using the mysql.rds_stop_replication (p. 1782)
command.

CALL mysql.rds_stop_replication;

7. Run the mysql.rds_reset_external_master (p. 1769) command on your Amazon RDS database to reset
the replication configuration so this instance is no longer identified as a replica.


CALL mysql.rds_reset_external_master;

8. Turn on additional Amazon RDS features such as Multi-AZ support and read replicas. For more
information, see Configuring and managing a Multi-AZ deployment (p. 492) and Working with DB
instance read replicas (p. 438).

Importing data from any source to a MariaDB or MySQL DB instance

If you have more than 1 GiB of data to load, or if your data is coming from somewhere other than a
MariaDB or MySQL database, we recommend creating flat files and loading them with mysqlimport.
The mysqlimport utility is another command line utility bundled with the MySQL and MariaDB client
software. Its purpose is to load flat files into MySQL or MariaDB. For information about mysqlimport, see
mysqlimport - a data import program in the MySQL documentation.
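
For example, a minimal mysqlimport sketch for loading a comma-separated file named sales.csv into a table named sales might look like the following. The host, credentials, database name, and delimiter options are placeholders; match the delimiters to the format of your flat files. Specify credentials other than the prompts shown here as a security best practice.

mysqlimport --local \
    --compress \
    --fields-terminated-by=',' \
    --fields-optionally-enclosed-by='"' \
    --host=myinstance.123456789012.us-east-1.rds.amazonaws.com \
    --port=3306 \
    --user=RDS_user \
    -pRDS_password \
    database_name sales.csv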

We also recommend creating DB snapshots of the target Amazon RDS DB instance before and after the
data load. Amazon RDS DB snapshots are complete backups of your DB instance that can be used to
restore your DB instance to a known state. When you initiate a DB snapshot, I/O operations to your DB
instance are momentarily suspended while your database is backed up.

Creating a DB snapshot immediately before the load makes it possible for you to restore the database
to its state before the load, if you need to. A DB snapshot taken immediately after the load protects
you from having to load the data again in case of a mishap and can also be used to seed new database
instances.

The following list shows the steps to take. Each step is discussed in more detail following.

1. Create flat files containing the data to be loaded.


2. Stop any applications accessing the target DB instance.
3. Create a DB snapshot.
4. Consider turning off Amazon RDS automated backups.
5. Load the data using mysqlimport.
6. Enable automated backups again.

Step 1: Create flat files containing the data to be loaded


Use a common format, such as comma-separated values (CSV), to store the data to be loaded. Each table
must have its own file; you can't combine data for multiple tables in the same file. Give each file the
same name as the table it corresponds to. The file extension can be anything you like. For example, if the
table name is sales, the file name might be sales.csv or sales.txt, but not sales_01.csv.

Whenever possible, order the data by the primary key of the table being loaded. Doing this drastically
improves load times and minimizes disk storage requirements.
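If your source data already lives in a MySQL-compatible database, one way to produce such a file is with the mysql client in batch mode, exporting one table per file and ordering by the primary key. The following is a minimal sketch only; the host name, credentials, database name (Acme), table name (sales), and key column (id) are placeholders. Batch mode emits tab-separated values, which mysqlimport accepts by default, so if you keep this format you can omit the --fields-terminated-by=',' option shown in Step 5.

mysql --host=source-host \
      --user=local_user \
      --password \
      --batch --skip-column-names \
      -e "SELECT * FROM Acme.sales ORDER BY id" > sales.txt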

The speed and efficiency of this procedure depends on keeping the size of the files small. If the
uncompressed size of any individual file is larger than 1 GiB, split it into multiple files and load each one
separately.

On Unix-like systems (including Linux), use the split command. For example, the following command
splits the sales.csv file into multiple files of less than 1 GiB, splitting only at line breaks (-C 1024m).
The new files are named sales.part_00, sales.part_01, and so on.


split -C 1024m -d sales.csv sales.part_

Similar utilities are available for other operating systems.

Step 2: Stop any applications accessing the target DB instance


Before starting a large load, stop all application activity accessing the target DB instance that you plan
to load to. We recommend this particularly if other sessions will be modifying the tables being loaded
or tables that they reference. Doing this reduces the risk of constraint violations occurring during the
load and improves load performance. It also makes it possible to restore the DB instance to the point just
before the load without losing changes made by processes not involved in the load.

Of course, this might not be possible or practical. If you can't stop applications from accessing the DB
instance before the load, take steps to ensure the availability and integrity of your data. The specific
steps required vary greatly depending upon specific use cases and site requirements.

Step 3: Create a DB snapshot


If you plan to load data into a new DB instance that contains no data, you can skip this step. Otherwise,
creating a DB snapshot of your DB instance makes it possible for you to restore the DB instance to the
point just before the load, if it becomes necessary. As previously mentioned, when you initiate a DB
snapshot, I/O operations to your DB instance are suspended for a few minutes while the database is
backed up.

The example following uses the AWS CLI create-db-snapshot command to create a DB snapshot of
the AcmeRDS instance and give the DB snapshot the identifier "preload".

For Linux, macOS, or Unix:

aws rds create-db-snapshot \


--db-instance-identifier AcmeRDS \
--db-snapshot-identifier preload

For Windows:

aws rds create-db-snapshot ^


--db-instance-identifier AcmeRDS ^
--db-snapshot-identifier preload

You can also use the restore from DB snapshot functionality to create test DB instances for dry runs or to
undo changes made during the load.

Keep in mind that restoring a database from a DB snapshot creates a new DB instance that, like all
DB instances, has a unique identifier and endpoint. To restore the DB instance without changing the
endpoint, first delete the DB instance so that you can reuse the endpoint.

For example, to create a DB instance for dry runs or other testing, you give the DB instance its own
identifier. In the example, AcmeRDS-2" is the identifier. The example connects to the DB instance using
the endpoint associated with AcmeRDS-2.

For Linux, macOS, or Unix:

aws rds restore-db-instance-from-db-snapshot \


--db-instance-identifier AcmeRDS-2 \
--db-snapshot-identifier preload


For Windows:

aws rds restore-db-instance-from-db-snapshot ^


--db-instance-identifier AcmeRDS-2 ^
--db-snapshot-identifier preload

To reuse the existing endpoint, first delete the DB instance and then give the restored database the same
identifier.

For Linux, macOS, or Unix:

aws rds delete-db-instance \


--db-instance-identifier AcmeRDS \
--final-db-snapshot-identifier AcmeRDS-Final

aws rds restore-db-instance-from-db-snapshot \


--db-instance-identifier AcmeRDS \
--db-snapshot-identifier preload

For Windows:

aws rds delete-db-instance ^


--db-instance-identifier AcmeRDS ^
--final-db-snapshot-identifier AcmeRDS-Final

aws rds restore-db-instance-from-db-snapshot ^


--db-instance-identifier AcmeRDS ^
--db-snapshot-identifier preload

The preceding example takes a final DB snapshot of the DB instance before deleting it. This is optional
but recommended.

Step 4: Consider turning off Amazon RDS automated backups


Warning
Do not turn off automated backups if you need to perform point-in-time recovery.

Turning off automated backups erases all existing backups, so point-in-time recovery isn't possible after
automated backups have been turned off. Disabling automated backups is a performance optimization
and isn't required for data loads. Manual DB snapshots aren't affected by turning off automated backups.
All existing manual DB snapshots are still available for restore.

Turning off automated backups reduces load time by about 25 percent and reduces the amount of
storage space required during the load. If you plan to load data into a new DB instance that contains
no data, turning off backups is an easy way to speed up the load and avoid using the additional storage
needed for backups. However, in some cases you might plan to load into a DB instance that already
contains data. If so, weigh the benefits of turning off backups against the impact of losing the ability to
perform point-in-time-recovery.

DB instances have automated backups turned on by default (with a one-day retention period). To turn off
automated backups, set the backup retention period to zero. After the load, you can turn backups back
on by setting the backup retention period to a nonzero value. To turn backups on or off, Amazon RDS
shuts down and restarts the DB instance so that MariaDB or MySQL logging can be turned on or off.

Use the AWS CLI modify-db-instance command to set the backup retention to zero and apply the
change immediately. Setting the retention period to zero requires a DB instance restart, so wait until the
restart has completed before proceeding.

For Linux, macOS, or Unix:


aws rds modify-db-instance \


--db-instance-identifier AcmeRDS \
--apply-immediately \
--backup-retention-period 0

For Windows:

aws rds modify-db-instance ^


--db-instance-identifier AcmeRDS ^
--apply-immediately ^
--backup-retention-period 0

You can check the status of your DB instance with the AWS CLI describe-db-instances command.
The following example displays the DB instance status of the AcmeRDS DB instance.

aws rds describe-db-instances --db-instance-identifier AcmeRDS --query "*[].{DBInstanceStatus:DBInstanceStatus}"

When the DB instance status is available, you're ready to proceed.
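If you prefer not to poll, the AWS CLI also provides a waiter that blocks until the instance reaches the available state. This is optional and not required by the procedure; the identifier matches the earlier examples.

aws rds wait db-instance-available --db-instance-identifier AcmeRDS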

Step 5: Load the data


Use the mysqlimport utility to load the flat files into Amazon RDS. The following example tells
mysqlimport to load all of the files named "sales" with an extension starting with "part_". This is a
convenient way to load all of the files created in the "split" example.

Use the --compress option to minimize network traffic. The --fields-terminated-by=',' option is used for
CSV files, and the --local option specifies that the incoming data is located on the client. Without the --
local option, the Amazon RDS DB instance looks for the data on the database host, so always specify the
--local option. For the --host option, specify the DB instance endpoint of the RDS for MySQL DB instance.

In the following examples, replace master_user with the master username for your DB instance.

Replace hostname with the endpoint for your DB instance. An example of a DB instance endpoint is my-
db-instance.123456789012.us-west-2.rds.amazonaws.com.

For RDS for MySQL version 8.0.15 and higher, run the following statement before using the mysqlimport
utility.

GRANT SESSION_VARIABLES_ADMIN ON *.* TO master_user;

For Linux, macOS, or Unix:

mysqlimport --local \
--compress \
--user=master_user \
--password \
--host=hostname \
--fields-terminated-by=',' Acme sales.part_*

For Windows:

mysqlimport --local ^
--compress ^
--user=master_user ^
--password ^


--host=hostname ^
--fields-terminated-by="," Acme sales.part_*

For very large data loads, take additional DB snapshots periodically between loading files and note
which files have been loaded. If a problem occurs, you can easily resume from the point of the last DB
snapshot, avoiding lengthy reloads.
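For example, after loading a batch of files you might take an interim snapshot such as the following. This is only a sketch; the snapshot identifier is a placeholder that you would vary per batch.

aws rds create-db-snapshot \
    --db-instance-identifier AcmeRDS \
    --db-snapshot-identifier midload-batch-01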

Step 6: Turn Amazon RDS automated backups back on


After the load is finished, turn Amazon RDS automated backups on by setting the backup retention
period back to its preload value. As noted earlier, Amazon RDS restarts the DB instance, so be prepared
for a brief outage.

The following example uses the AWS CLI modify-db-instance command to turn on automated
backups for the AcmeRDS DB instance and set the retention period to one day.

For Linux, macOS, or Unix:

aws rds modify-db-instance \


--db-instance-identifier AcmeRDS \
--backup-retention-period 1 \
--apply-immediately

For Windows:

aws rds modify-db-instance ^


--db-instance-identifier AcmeRDS ^
--backup-retention-period 1 ^
--apply-immediately


Working with MariaDB replication in Amazon RDS


You usually use read replicas to configure replication between Amazon RDS DB instances. For general
information about read replicas, see Working with DB instance read replicas (p. 438). For specific
information about working with read replicas on Amazon RDS for MariaDB, see Working with MariaDB
read replicas (p. 1318).

You can also configure replication based on binary log coordinates for a MariaDB DB instance. For
MariaDB instances, you can also configure replication based on global transaction IDs (GTIDs), which
provides better crash safety. For more information, see Configuring GTID-based replication with an
external source instance (p. 1328).

The following are other replication options available with RDS for MariaDB:

• You can set up replication between an RDS for MariaDB DB instance and a MySQL or MariaDB instance
that is external to Amazon RDS. For information about configuring replication with an external source,
see Configuring binary log file position replication with an external source instance (p. 1331).
• You can configure replication to import databases from a MySQL or MariaDB instance that is external
to Amazon RDS, or to export databases to such instances. For more information, see Importing data to
an Amazon RDS MariaDB or MySQL DB instance with reduced downtime (p. 1299) and Exporting data
from a MySQL DB instance by using replication (p. 1728).

For any of these replication options, you can use row-based, statement-based, or mixed replication.
Row-based replication only replicates the changed rows that result from a SQL statement.
Statement-based replication replicates the entire SQL statement. Mixed replication uses statement-
based replication when possible, but switches to row-based replication when SQL statements that are
unsafe for statement-based replication are run. In most cases, mixed replication is recommended. The
binary log format of the DB instance determines whether replication is row-based, statement-based, or
mixed. For information about setting the binary log format, see Binary logging format (p. 907).

Topics
• Working with MariaDB read replicas (p. 1318)
• Configuring GTID-based replication with an external source instance (p. 1328)
• Configuring binary log file position replication with an external source instance (p. 1331)

Working with MariaDB read replicas


Following, you can find specific information about working with read replicas on Amazon RDS for
MariaDB. For general information about read replicas and instructions for using them, see Working with
DB instance read replicas (p. 438).

Topics
• Configuring read replicas with MariaDB (p. 1319)
• Configuring replication filters with MariaDB (p. 1319)
• Configuring delayed replication with MariaDB (p. 1324)
• Updating read replicas with MariaDB (p. 1325)
• Working with Multi-AZ read replica deployments with MariaDB (p. 1325)
• Using cascading read replicas with RDS for MariaDB (p. 1326)
• Monitoring MariaDB read replicas (p. 1326)
• Starting and stopping replication with MariaDB read replicas (p. 1327)
• Troubleshooting a MariaDB read replica problem (p. 1327)


Configuring read replicas with MariaDB


Before a MariaDB DB instance can serve as a replication source, make sure to turn on automatic
backups on the source DB instance by setting the backup retention period to a value other than 0. This
requirement also applies to a read replica that is the source DB instance for another read replica.

You can create up to 15 read replicas from one DB instance within the same Region. For replication to
operate effectively, each read replica should have the same amount of compute and storage resources
as the source DB instance. If you scale the source DB instance, also scale the read replicas.

RDS for MariaDB supports cascading read replicas. To learn how to configure cascading read replicas, see
Using cascading read replicas with RDS for MariaDB (p. 1326).

You can run multiple read replica create and delete actions at the same time that reference the same
source DB instance. When you perform these actions, stay within the limit of 15 read replicas for each
source instance.
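For reference, a minimal sketch of creating a read replica with the AWS CLI follows. The DB instance identifiers are placeholders; the console and RDS API offer the same operation.

aws rds create-db-instance-read-replica \
    --db-instance-identifier mariadb-replica-1 \
    --source-db-instance-identifier mariadb-main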

Configuring replication filters with MariaDB


You can use replication filters to specify which databases and tables are replicated with a read replica.
Replication filters can include databases and tables in replication or exclude them from replication.

The following are some use cases for replication filters:

• To reduce the size of a read replica. With replication filtering, you can exclude the databases and tables
that aren't needed on the read replica.
• To exclude databases and tables from read replicas for security reasons.
• To replicate different databases and tables for specific use cases at different read replicas. For example,
you might use specific read replicas for analytics or sharding.
• For a DB instance that has read replicas in different AWS Regions, to replicate different databases or
tables in different AWS Regions.

Note
You can also use replication filters to specify which databases and tables are replicated
with a primary MariaDB DB instance that is configured as a replica in an inbound replication
topology. For more information about this configuration, see Configuring binary log file position
replication with an external source instance (p. 1724).

Topics
• Setting replication filtering parameters for RDS for MariaDB (p. 1319)
• Replication filtering limitations for RDS for MariaDB (p. 1320)
• Replication filtering examples for RDS for MariaDB (p. 1320)
• Viewing the replication filters for a read replica (p. 1323)

Setting replication filtering parameters for RDS for MariaDB


To configure replication filters, set the following replication filtering parameters on the read replica:

• replicate-do-db – Replicate changes to the specified databases. When you set this parameter for a
read replica, only the databases specified in the parameter are replicated.
• replicate-ignore-db – Don't replicate changes to the specified databases. When the replicate-
do-db parameter is set for a read replica, this parameter isn't evaluated.
• replicate-do-table – Replicate changes to the specified tables. When you set this parameter for a
read replica, only the tables specified in the parameter are replicated. Also, when the replicate-do-


db or replicate-ignore-db parameter is set, the database that includes the specified tables must
be included in replication with the read replica.
• replicate-ignore-table – Don't replicate changes to the specified tables. When the replicate-
do-table parameter is set for a read replica, this parameter isn't evaluated.
• replicate-wild-do-table – Replicate tables based on the specified database and table
name patterns. The % and _ wildcard characters are supported. When the replicate-do-db or
replicate-ignore-db parameter is set, make sure to include the database that includes the
specified tables in replication with the read replica.
• replicate-wild-ignore-table – Don't replicate tables based on the specified database and table
name patterns. The % and _ wildcard characters are supported. When the replicate-do-table or
replicate-wild-do-table parameter is set for a read replica, this parameter isn't evaluated.

The parameters are evaluated in the order that they are listed. For more information about how these
parameters work, see the MariaDB documentation.

By default, each of these parameters has an empty value. On each read replica, you can use these
parameters to set, change, and delete replication filters. When you set one of these parameters, separate
each filter from others with a comma.

You can use the % and _ wildcard characters in the replicate-wild-do-table and replicate-
wild-ignore-table parameters. The % wildcard matches any number of characters, and the _
wildcard matches only one character.

The binary logging format of the source DB instance is important for replication because it determines
the record of data changes. The setting of the binlog_format parameter determines whether the
replication is row-based or statement-based. For more information, see Binary logging format (p. 907).
Note
All data definition language (DDL) statements are replicated as statements, regardless of the
binlog_format setting on the source DB instance.

Replication filtering limitations for RDS for MariaDB


The following limitations apply to replication filtering for RDS for MariaDB:

• Each replication filtering parameter has a 2,000-character limit.


• Commas aren't supported in replication filters.
• The MariaDB binlog_do_db and binlog_ignore_db options for binary log filtering aren't
supported.
• Replication filtering doesn't support XA transactions.

For more information, see Restrictions on XA Transactions in the MySQL documentation.


• Replication filtering isn't supported for RDS for MariaDB version 10.2.

Replication filtering examples for RDS for MariaDB


To configure replication filtering for a read replica, modify the replication filtering parameters in the
parameter group associated with the read replica.
Note
You can't modify a default parameter group. If the read replica is using a default parameter
group, create a new parameter group and associate it with the read replica. For more
information on DB parameter groups, see Working with parameter groups (p. 347).

You can set parameters in a parameter group using the AWS Management Console, AWS CLI, or RDS API.
For information about setting parameters, see Modifying parameters in a DB parameter group (p. 352).


When you set parameters in a parameter group, all of the DB instances associated with the parameter
group use the parameter settings. If you set the replication filtering parameters in a parameter group,
make sure that the parameter group is associated only with read replicas. Leave the replication filtering
parameters empty for source DB instances.

The following examples set the parameters using the AWS CLI. These examples set ApplyMethod to
immediate so that the parameter changes occur immediately after the CLI command completes. If you
want a pending change to be applied after the read replica is rebooted, set ApplyMethod to pending-
reboot.

The following examples set replication filters:

• Including databases in replication


• Including tables in replication
• Including tables in replication with wildcard characters
• Escaping wildcard characters in names
• Excluding databases from replication
• Excluding tables from replication
• Excluding tables from replication using wildcard characters

Example Including databases in replication

The following example includes the mydb1 and mydb2 databases in replication. When you set
replicate-do-db for a read replica, only the databases specified in the parameter are replicated.

For Linux, macOS, or Unix:

aws rds modify-db-parameter-group \


--db-parameter-group-name myparametergroup \
--parameters "[{"ParameterName": "replicate-do-db", "ParameterValue": "mydb1,mydb2",
"ApplyMethod":"immediate"}]"

For Windows:

aws rds modify-db-parameter-group ^


--db-parameter-group-name myparametergroup ^
--parameters "[{"ParameterName": "replicate-do-db", "ParameterValue": "mydb1,mydb2",
"ApplyMethod":"immediate"}]"

Example Including tables in replication

The following example includes the table1 and table2 tables in database mydb1 in replication.

For Linux, macOS, or Unix:

aws rds modify-db-parameter-group \


--db-parameter-group-name myparametergroup \
--parameters "[{"ParameterName": "replicate-do-table", "ParameterValue":
"mydb1.table1,mydb1.table2", "ApplyMethod":"immediate"}]"

For Windows:

aws rds modify-db-parameter-group ^


--db-parameter-group-name myparametergroup ^
--parameters "[{"ParameterName": "replicate-do-table", "ParameterValue":
"mydb1.table1,mydb1.table2", "ApplyMethod":"immediate"}]"

Example Including tables in replication using wildcard characters

The following example includes tables with names that begin with orders and returns in database
mydb in replication.

For Linux, macOS, or Unix:

aws rds modify-db-parameter-group \


--db-parameter-group-name myparametergroup \
--parameters "[{"ParameterName": "replicate-wild-do-table", "ParameterValue":
"mydb.orders%,mydb.returns%", "ApplyMethod":"immediate"}]"

For Windows:

aws rds modify-db-parameter-group ^


--db-parameter-group-name myparametergroup ^
--parameters "[{"ParameterName": "replicate-wild-do-table", "ParameterValue":
"mydb.orders%,mydb.returns%", "ApplyMethod":"immediate"}]"

Example Escaping wildcard characters in names

The following example shows you how to use the escape character \ to escape a wildcard character that
is part of a name.

Assume that you have several table names in database mydb1 that start with my_table, and you want
to include these tables in replication. The table names include an underscore, which is also a wildcard
character, so the example escapes the underscore in the table names.

For Linux, macOS, or Unix:

aws rds modify-db-parameter-group \


--db-parameter-group-name myparametergroup \
--parameters "[{"ParameterName": "replicate-wild-do-table", "ParameterValue": "my\_table
%", "ApplyMethod":"immediate"}]"

For Windows:

aws rds modify-db-parameter-group ^


--db-parameter-group-name myparametergroup ^
--parameters "[{"ParameterName": "replicate-wild-do-table", "ParameterValue": "my\_table
%", "ApplyMethod":"immediate"}]"

Example Excluding databases from replication

The following example excludes the mydb1 and mydb2 databases from replication.

For Linux, macOS, or Unix:

aws rds modify-db-parameter-group \


--db-parameter-group-name myparametergroup \


--parameters "[{"ParameterName": "replicate-ignore-db", "ParameterValue": "mydb1,mydb2",


"ApplyMethod":"immediate"}]"

For Windows:

aws rds modify-db-parameter-group ^


--db-parameter-group-name myparametergroup ^
--parameters "[{"ParameterName": "replicate-ignore-db", "ParameterValue": "mydb1,mydb2",
"ApplyMethod":"immediate"}]"

Example Excluding tables from replication

The following example excludes tables table1 and table2 in database mydb1 from replication.

For Linux, macOS, or Unix:

aws rds modify-db-parameter-group \


--db-parameter-group-name myparametergroup \
--parameters "[{"ParameterName": "replicate-ignore-table", "ParameterValue":
"mydb1.table1,mydb1.table2", "ApplyMethod":"immediate"}]"

For Windows:

aws rds modify-db-parameter-group ^


--db-parameter-group-name myparametergroup ^
--parameters "[{"ParameterName": "replicate-ignore-table", "ParameterValue":
"mydb1.table1,mydb1.table2", "ApplyMethod":"immediate"}]"

Example Excluding tables from replication using wildcard characters

The following example excludes tables with names that begin with orders and returns in database
mydb from replication.

For Linux, macOS, or Unix:

aws rds modify-db-parameter-group \


--db-parameter-group-name myparametergroup \
--parameters "[{"ParameterName": "replicate-wild-ignore-table", "ParameterValue":
"mydb.orders%,mydb.returns%", "ApplyMethod":"immediate"}]"

For Windows:

aws rds modify-db-parameter-group ^


--db-parameter-group-name myparametergroup ^
--parameters "[{"ParameterName": "replicate-wild-ignore-table", "ParameterValue":
"mydb.orders%,mydb.returns%", "ApplyMethod":"immediate"}]"

Viewing the replication filters for a read replica


You can view the replication filters for a read replica in the following ways:

• Check the settings of the replication filtering parameters in the parameter group associated with the
read replica.

For instructions, see Viewing parameter values for a DB parameter group (p. 359).


• In a MariaDB client, connect to the read replica and run the SHOW REPLICA STATUS statement.

In the output, the following fields show the replication filters for the read replica:
• Replicate_Do_DB
• Replicate_Ignore_DB
• Replicate_Do_Table
• Replicate_Ignore_Table
• Replicate_Wild_Do_Table
• Replicate_Wild_Ignore_Table

For more information about these fields, see Checking Replication Status in the MySQL
documentation.
Note
Previous versions of MariaDB used SHOW SLAVE STATUS instead of SHOW REPLICA STATUS.
If you are using a MariaDB version before 10.5, then use SHOW SLAVE STATUS.
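For example, from a client host you might run the statement against the replica endpoint and filter for the relevant fields. This is a sketch only; the endpoint and user name are placeholders.

mysql --host=mariadb-replica-1.123456789012.us-east-1.rds.amazonaws.com \
      --user=admin \
      --password \
      -e "SHOW REPLICA STATUS\G" | grep -i "Replicate_"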

Configuring delayed replication with MariaDB


You can use delayed replication as a strategy for disaster recovery. With delayed replication, you specify
the minimum amount of time, in seconds, to delay replication from the source to the read replica. In the
event of a disaster, such as a table deleted unintentionally, you complete the following steps to recover
from the disaster quickly:

• Stop replication to the read replica before the change that caused the disaster is sent to it.

To stop replication, use the mysql.rds_stop_replication (p. 1782) stored procedure.


• Promote the read replica to be the new source DB instance by using the instructions in Promoting a
read replica to be a standalone DB instance (p. 447).

Note

• Delayed replication is supported for MariaDB 10.6 and higher.


• Use stored procedures to configure delayed replication. You can't configure delayed
replication with the AWS Management Console, the AWS CLI, or the Amazon RDS API.
• You can use replication based on global transaction identifiers (GTIDs) in a delayed replication
configuration.

Topics
• Configuring delayed replication during read replica creation (p. 1324)
• Modifying delayed replication for an existing read replica (p. 1325)
• Promoting a read replica (p. 1325)

Configuring delayed replication during read replica creation


To configure delayed replication for any future read replica created from a DB instance, run the
mysql.rds_set_configuration (p. 1758) stored procedure with the target delay parameter.

To configure delayed replication during read replica creation

1. Using a MariaDB client, connect to the MariaDB DB instance to be the source for read replicas as the
master user.


2. Run the mysql.rds_set_configuration (p. 1758) stored procedure with the target delay
parameter.

For example, run the following stored procedure to specify that replication is delayed by at least one
hour (3,600 seconds) for any read replica created from the current DB instance.

call mysql.rds_set_configuration('target delay', 3600);

Note
After running this stored procedure, any read replica you create using the AWS CLI or
Amazon RDS API is configured with replication delayed by the specified number of seconds.

Modifying delayed replication for an existing read replica


To modify delayed replication for an existing read replica, run the mysql.rds_set_source_delay (p. 1777)
stored procedure.

To modify delayed replication for an existing read replica

1. Using a MariaDB client, connect to the read replica as the master user.
2. Use the mysql.rds_stop_replication (p. 1782) stored procedure to stop replication.
3. Run the mysql.rds_set_source_delay (p. 1777) stored procedure.

For example, run the following stored procedure to specify that replication to the read replica is
delayed by at least one hour (3600 seconds).

call mysql.rds_set_source_delay(3600);

4. Use the mysql.rds_start_replication (p. 1780) stored procedure to start replication.

Promoting a read replica


After replication is stopped, in a disaster recovery scenario, you can promote a read replica to be the new
source DB instance. For information about promoting a read replica, see Promoting a read replica to be a
standalone DB instance (p. 447).

Updating read replicas with MariaDB


Read replicas are designed to support read queries, but you might need occasional updates. For example,
you might need to add an index to speed up specific types of queries accessing the replica. You can
enable updates by setting the read_only parameter to 0 in the DB parameter group for the read
replica.
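A hedged sketch of that change with the AWS CLI follows, using the same modify-db-parameter-group operation as the replication filter examples earlier in this section (Linux-style quoting shown). The parameter group name is a placeholder, and it must be a custom group associated with the read replica, not a default parameter group.

aws rds modify-db-parameter-group \
    --db-parameter-group-name myreplicaparametergroup \
    --parameters '[{"ParameterName":"read_only","ParameterValue":"0","ApplyMethod":"immediate"}]'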

Working with Multi-AZ read replica deployments with MariaDB


You can create a read replica from either single-AZ or Multi-AZ DB instance deployments. You use Multi-
AZ deployments to improve the durability and availability of critical data, but you can't use the Multi-AZ
secondary to serve read-only queries. Instead, you can create read replicas from high-traffic Multi-AZ DB
instances to offload read-only queries. If the source instance of a Multi-AZ deployment fails over to the
secondary, any associated read replicas automatically switch to use the secondary (now primary) as their
replication source. For more information, see Configuring and managing a Multi-AZ deployment (p. 492).

You can create a read replica as a Multi-AZ DB instance. Amazon RDS creates a standby of your replica in
another Availability Zone for failover support for the replica. Creating your read replica as a Multi-AZ DB
instance is independent of whether the source database is a Multi-AZ DB instance.
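For example, with the AWS CLI you can request a Multi-AZ read replica by adding the --multi-az flag to the create-db-instance-read-replica operation; the identifiers below are placeholders.

aws rds create-db-instance-read-replica \
    --db-instance-identifier mariadb-replica-maz \
    --source-db-instance-identifier mariadb-main \
    --multi-az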


Using cascading read replicas with RDS for MariaDB


RDS for MariaDB supports cascading read replicas. With cascading read replicas, you can scale reads
without adding overhead to your source RDS for MariaDB DB instance.

With cascading read replicas, your RDS for MariaDB DB instance sends data to the first read replica in the
chain. That read replica then sends data to the second replica in the chain, and so on. The end result is
that all read replicas in the chain have the changes from the RDS for MariaDB DB instance, but without
the replication overhead falling solely on the source DB instance.

You can create a series of up to three read replicas in a chain from a source RDS for MariaDB DB instance.
For example, suppose that you have an RDS for MariaDB DB instance, mariadb-main. You can do the
following:

• Starting with mariadb-main, create the first read replica in the chain, read-replica-1.
• Next, from read-replica-1, create the next read replica in the chain, read-replica-2.
• Finally, from read-replica-2, create the third read replica in the chain, read-replica-3.

You can't create another read replica beyond this third cascading read replica in the series for mariadb-
main. A complete series of instances from an RDS for MariaDB source DB instance through to the end of
a series of cascading read replicas can consist of at most four DB instances.

For cascading read replicas to work, each source RDS for MariaDB DB instance must have automated
backups turned on. To turn on automatic backups on a read replica, first create the read replica, and
then modify the read replica to turn on automatic backups. For more information, see Creating a read
replica (p. 445).
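For example, after creating read-replica-1 you might turn on its automated backups as follows. This is a sketch that mirrors the retention-period examples elsewhere in this guide; the identifier comes from the example above.

aws rds modify-db-instance \
    --db-instance-identifier read-replica-1 \
    --backup-retention-period 1 \
    --apply-immediately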

As with any read replica, you can promote a read replica that's part of a cascade. Promoting a read
replica from within a chain of read replicas removes that replica from the chain. For example, suppose
that you want to move some of the workload from your mariadb-main DB instance to a new instance
for use by the accounting department only. Assuming the chain of three read replicas from the example,
you decide to promote read-replica-2. The chain is affected as follows:

• Promoting read-replica-2 removes it from the replication chain.


• It is now a full read/write DB instance.
• It continues replicating to read-replica-3, just as it was doing before promotion.
• Your mariadb-main continues replicating to read-replica-1.

For more information about promoting read replicas, see Promoting a read replica to be a standalone DB
instance (p. 447).

Monitoring MariaDB read replicas


For MariaDB read replicas, you can monitor replication lag in Amazon CloudWatch by viewing
the Amazon RDS ReplicaLag metric. The ReplicaLag metric reports the value of the
Seconds_Behind_Master field of the SHOW REPLICA STATUS command.
Note
Previous versions of MariaDB used SHOW SLAVE STATUS instead of SHOW REPLICA STATUS. If
you are using a MariaDB version before 10.5, then use SHOW SLAVE STATUS.
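To pull the ReplicaLag metric outside the console, you could use the CloudWatch CLI, roughly as follows. This is a sketch; the instance identifier and time window are placeholders.

aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name ReplicaLag \
    --dimensions Name=DBInstanceIdentifier,Value=mariadb-replica-1 \
    --statistics Average \
    --period 60 \
    --start-time 2023-06-01T00:00:00Z \
    --end-time 2023-06-01T01:00:00Z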

Common causes for replication lag for MariaDB are the following:

• A network outage.
• Writing to tables that have indexes on a read replica. If the read_only parameter is set to 0 on the
read replica, writing to these tables can break replication.


• Using a nontransactional storage engine such as MyISAM. Replication is only supported for the InnoDB
storage engine on MariaDB.

When the ReplicaLag metric reaches 0, the replica has caught up to the source DB instance. If the
ReplicaLag metric returns -1, then replication is currently not active. ReplicaLag = -1 is equivalent to
Seconds_Behind_Master = NULL.

Starting and stopping replication with MariaDB read replicas


You can stop and restart the replication process on an Amazon RDS DB instance by calling the system
stored procedures mysql.rds_stop_replication (p. 1782) and mysql.rds_start_replication (p. 1780).
You can do this when replicating between two Amazon RDS instances for long-running operations
such as creating large indexes. You also need to stop and start replication when importing or
exporting databases. For more information, see Importing data to an Amazon RDS MariaDB or MySQL
database with reduced downtime (p. 1690) and Exporting data from a MySQL DB instance by using
replication (p. 1728).

If replication is stopped for more than 30 consecutive days, either manually or due to a replication error,
Amazon RDS ends replication between the source DB instance and all read replicas. It does so to prevent
increased storage requirements on the source DB instance and long failover times. The read replica DB
instance is still available. However, replication can't be resumed because the binary logs required by the
read replica are deleted from the source DB instance after replication is ended. You can create a new read
replica for the source DB instance to reestablish replication.

Troubleshooting a MariaDB read replica problem


The replication technologies for MariaDB are asynchronous. Because they are asynchronous, occasional
BinLogDiskUsage increases on the source DB instance and ReplicaLag on the read replica are to be
expected. For example, a high volume of write operations to the source DB instance can occur in parallel.
In contrast, write operations to the read replica are serialized using a single I/O thread, which can lead to
a lag between the source instance and read replica. For more information about replicas, see
Replication overview in the MariaDB documentation.

You can do several things to reduce the lag between updates to a source DB instance and the subsequent
updates to the read replica, such as the following:

• Sizing a read replica to have a storage size and DB instance class comparable to the source DB
instance.
• Ensuring that parameter settings in the DB parameter groups used by the source DB instance and
the read replica are compatible. For more information and an example, see the discussion of the
max_allowed_packet parameter later in this section.

Amazon RDS monitors the replication status of your read replicas and updates the Replication State
field of the read replica instance to Error if replication stops for any reason. An example might be if
DML queries run on your read replica conflict with the updates made on the source DB instance.

You can review the details of the associated error thrown by the MariaDB engine by viewing the
Replication Error field. Events that indicate the status of the read replica are also generated,
including RDS-EVENT-0045 (p. 887), RDS-EVENT-0046 (p. 888), and RDS-EVENT-0047 (p. 883). For
more information about events and subscribing to events, see Working with Amazon RDS event
notification (p. 855). If a MariaDB error message is returned, review the error in the MariaDB error
message documentation.

One common issue that can cause replication errors is when the value for the max_allowed_packet
parameter for a read replica is less than the max_allowed_packet parameter for the source DB


instance. The max_allowed_packet parameter is a custom parameter that you can set in a DB
parameter group that is used to specify the maximum size of DML code that can be run on the database.
In some cases, the max_allowed_packet parameter value in the DB parameter group associated with
a source DB instance is smaller than the max_allowed_packet parameter value in the DB parameter
group associated with the source's read replica. In these cases, the replication process can throw an error
(Packet bigger than 'max_allowed_packet' bytes) and stop replication. You can fix the error by having
the source and read replica use DB parameter groups with the same max_allowed_packet parameter
values.

Other common situations that can cause replication errors include the following:

• Writing to tables on a read replica. If you are creating indexes on a read replica, you need to have the
read_only parameter set to 0 to create the indexes. If you are writing to tables on the read replica, it
might break replication.
• Using a non-transactional storage engine such as MyISAM. Read replicas require a transactional storage
engine. Replication is only supported for the InnoDB storage engine on MariaDB.
• Using unsafe nondeterministic queries such as SYSDATE(). For more information, see Determination
of safe and unsafe statements in binary logging.

If you decide that you can safely skip an error, you can follow the steps described in Skipping the current
replication error (p. 1744). Otherwise, you can delete the read replica and create an instance using the
same DB instance identifier so that the endpoint remains the same as that of your old read replica. If a
replication error is fixed, the Replication State changes to replicating.

For MariaDB DB instances, in some cases read replicas can't be switched to the secondary if some
binary log (binlog) events aren't flushed during the failure. In these cases, manually delete and recreate
the read replicas. You can reduce the chance of this happening by setting the following parameter
values: sync_binlog=1 and innodb_flush_log_at_trx_commit=1. These settings might reduce
performance, so test their impact before implementing the changes in a production environment.
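As a sketch, you could apply those two settings through a custom DB parameter group with the AWS CLI, as follows (Linux-style quoting shown; the parameter group name is a placeholder).

aws rds modify-db-parameter-group \
    --db-parameter-group-name mymariadbparametergroup \
    --parameters '[{"ParameterName":"sync_binlog","ParameterValue":"1","ApplyMethod":"immediate"},
                   {"ParameterName":"innodb_flush_log_at_trx_commit","ParameterValue":"1","ApplyMethod":"immediate"}]'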

Configuring GTID-based replication with an external source instance
You can set up replication based on global transaction identifiers (GTIDs) from an external MariaDB
instance of version 10.0.24 or higher into an RDS for MariaDB DB instance. Follow these guidelines when
you set up an external source instance and a replica on Amazon RDS:

• Monitor failover events for the RDS for MariaDB DB instance that is your replica. If a failover occurs,
then the DB instance that is your replica might be recreated on a new host with a different network
address. For information on how to monitor failover events, see Working with Amazon RDS event
notification (p. 855).
• Maintain the binary logs (binlogs) on your source instance until you have verified that they have been
applied to the replica. This maintenance ensures that you can restore your source instance in the event
of a failure.
• Turn on automated backups on your MariaDB DB instance on Amazon RDS. Turning on automated
backups ensures that you can restore your replica to a particular point in time if you need to
resynchronize your source instance and replica. For information on backups and Point-In-Time Restore,
see Backing up and restoring (p. 590).

Note
The permissions required to start replication on a MariaDB DB instance are restricted and
not available to your Amazon RDS master user. Because of this, you must use the Amazon
RDS mysql.rds_set_external_master_gtid (p. 1345) and mysql.rds_start_replication (p. 1780)
commands to set up replication between your live database and your RDS for MariaDB database.


To start replication between an external source instance and a MariaDB DB instance on Amazon RDS, use
the following procedure.

To start replication

1. Make the source MariaDB instance read-only:

mysql> FLUSH TABLES WITH READ LOCK;


mysql> SET GLOBAL read_only = ON;

2. Get the current GTID of the external MariaDB instance. You can do this by using mysql or the query
editor of your choice to run SELECT @@gtid_current_pos;.

The GTID is formatted as <domain-id>-<server-id>-<sequence-id>. A typical GTID looks


something like 0-1234510749-1728. For more information about GTIDs and their component
parts, see Global transaction ID in the MariaDB documentation.
3. Copy the database from the external MariaDB instance to the MariaDB DB instance using
mysqldump. For very large databases, you might want to use the procedure in Importing data to an
Amazon RDS MariaDB or MySQL database with reduced downtime (p. 1690).

For Linux, macOS, or Unix:

mysqldump \
--databases database_name \
--single-transaction \
--compress \
--order-by-primary \
-u local_user \
-plocal_password | mysql \
--host=hostname \
--port=3306 \
-u RDS_user_name \
-pRDS_password

For Windows:

mysqldump ^
--databases database_name ^
--single-transaction ^
--compress ^
--order-by-primary ^
-u local_user ^
-plocal_password | mysql ^
--host=hostname ^
--port=3306 ^
-u RDS_user_name ^
-pRDS_password

Note
Make sure that there isn't a space between the -p option and the entered password.
Specify a password other than the placeholder shown here as a security best practice.

Use the --host, --user (-u), --port and -p options in the mysql command to specify the host
name, user name, port, and password to connect to your MariaDB DB instance. The host name is the
DNS name from the MariaDB DB instance endpoint, for example myinstance.123456789012.us-
east-1.rds.amazonaws.com. You can find the endpoint value in the instance details in the
Amazon RDS Management Console.
4. Make the source MariaDB instance writeable again.


mysql> SET GLOBAL read_only = OFF;


mysql> UNLOCK TABLES;

5. In the Amazon RDS Management Console, add the IP address of the server that hosts the external
MariaDB database to the VPC security group for the MariaDB DB instance. For more information on
modifying a VPC security group, go to Security groups for your VPC in the Amazon Virtual Private
Cloud User Guide.

The IP address can change when the following conditions are met:

• You are using a public IP address for communication between the external source instance and the
DB instance.
• The external source instance was stopped and restarted.

If these conditions are met, verify the IP address before adding it.

You might also need to configure your local network to permit connections from the IP address of
your MariaDB DB instance, so that it can communicate with your external MariaDB instance. To find
the IP address of the MariaDB DB instance, use the host command.

host db_instance_endpoint

The host name is the DNS name from the MariaDB DB instance endpoint.
6. Using the client of your choice, connect to the external MariaDB instance and create a MariaDB user
to be used for replication. This account is used solely for replication and must be restricted to your
domain to improve security. The following is an example.

CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED BY 'password';

Note
Specify a password other than the placeholder shown here as a security best practice.
7. For the external MariaDB instance, grant REPLICATION CLIENT and REPLICATION SLAVE
privileges to your replication user. For example, to grant the REPLICATION CLIENT and
REPLICATION SLAVE privileges on all databases for the 'repl_user' user for your domain, issue
the following command.

GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com';

8. Make the MariaDB DB instance the replica. Connect to the MariaDB DB instance as the master
user and identify the external MariaDB database as the replication source instance by using the
mysql.rds_set_external_master_gtid (p. 1345) command. Use the GTID that you determined in Step
2. The following is an example.

CALL mysql.rds_set_external_master_gtid ('mymasterserver.mydomain.com', 3306,


'repl_user', 'password', 'GTID', 0);

Note
Specify a password other than the placeholder shown here as a security best practice.
9. On the MariaDB DB instance, issue the mysql.rds_start_replication (p. 1780) command to start
replication.

CALL mysql.rds_start_replication;


Configuring binary log file position replication with an external source instance
You can set up replication between an RDS for MySQL or MariaDB DB instance and a MySQL or MariaDB
instance that is external to Amazon RDS using binary log file replication.

Topics
• Before you begin (p. 1331)
• Configuring binary log file position replication with an external source instance (p. 1331)

Before you begin


You can configure replication using the binary log file position of replicated transactions.

The permissions required to start replication on an Amazon RDS DB instance are restricted and not
available to your Amazon RDS master user. Because of this, make sure that you use the Amazon RDS
mysql.rds_set_external_master (p. 1769) and mysql.rds_start_replication (p. 1780) commands to set up
replication between your live database and your Amazon RDS database.

To set the binary logging format for a MySQL or MariaDB database, update the binlog_format
parameter. If your DB instance uses the default DB instance parameter group, create a new DB parameter
group to modify binlog_format settings. We recommend that you use the default setting for
binlog_format, which is MIXED. However, you can also set binlog_format to ROW or STATEMENT if
you need a specific binary log (binlog) format. Reboot your DB instance for the change to take effect.

For information about setting the binlog_format parameter, see Configuring MySQL binary
logging (p. 921). For information about the implications of different MySQL replication types,
see Advantages and disadvantages of statement-based and row-based replication in the MySQL
documentation.
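As a hedged sketch, setting binlog_format in a custom DB parameter group with the AWS CLI might look like the following (Linux-style quoting; the parameter group name is a placeholder, and ROW is used only as an illustration). As noted above, reboot the DB instance afterward for the change to take effect.

aws rds modify-db-parameter-group \
    --db-parameter-group-name mybinlogparametergroup \
    --parameters '[{"ParameterName":"binlog_format","ParameterValue":"ROW","ApplyMethod":"pending-reboot"}]'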

Configuring binary log file position replication with an external source instance
Follow these guidelines when you set up an external source instance and a replica on Amazon RDS:

• Monitor failover events for the Amazon RDS DB instance that is your replica. If a failover occurs,
then the DB instance that is your replica might be recreated on a new host with a different network
address. For information on how to monitor failover events, see Working with Amazon RDS event
notification (p. 855).
• Maintain the binlogs on your source instance until you have verified that they have been applied to
the replica. This maintenance makes sure that you can restore your source instance in the event of a
failure.
• Turn on automated backups on your Amazon RDS DB instance. Turning on automated backups makes
sure that you can restore your replica to a particular point in time if you need to re-synchronize your
source instance and replica. For information on backups and point-in-time restore, see Backing up and
restoring (p. 590).

To configure binary log file replication with an external source instance

1. Make the source MySQL or MariaDB instance read-only.

mysql> FLUSH TABLES WITH READ LOCK;


mysql> SET GLOBAL read_only = ON;


2. Run the SHOW MASTER STATUS command on the source MySQL or MariaDB instance to determine
the binlog location.

You receive output similar to the following example.

File                          Position
----------------------------  --------
mysql-bin-changelog.000031    107

3. Copy the database from the external instance to the Amazon RDS DB instance using mysqldump.
For very large databases, you might want to use the procedure in Importing data to an Amazon RDS
MariaDB or MySQL database with reduced downtime (p. 1690).

For Linux, macOS, or Unix:

mysqldump --databases database_name \


--single-transaction \
--compress \
--order-by-primary \
-u local_user \
-plocal_password | mysql \
--host=hostname \
--port=3306 \
-u RDS_user_name \
-pRDS_password

For Windows:

mysqldump --databases database_name ^


--single-transaction ^
--compress ^
--order-by-primary ^
-u local_user ^
-plocal_password | mysql ^
--host=hostname ^
--port=3306 ^
-u RDS_user_name ^
-pRDS_password

Note
Make sure that there isn't a space between the -p option and the entered password.

To specify the host name, user name, port, and password to connect to your Amazon RDS DB
instance, use the --host, --user (-u), --port and -p options in the mysql command. The
host name is the Domain Name Service (DNS) name from the Amazon RDS DB instance endpoint,
for example myinstance.123456789012.us-east-1.rds.amazonaws.com. You can find the
endpoint value in the instance details in the AWS Management Console.
4. Make the source MySQL or MariaDB instance writeable again.

mysql> SET GLOBAL read_only = OFF;


mysql> UNLOCK TABLES;

For more information on making backups for use with replication, see the MySQL documentation.
5. In the AWS Management Console, add the IP address of the server that hosts the external database
to the virtual private cloud (VPC) security group for the Amazon RDS DB instance. For more
information on modifying a VPC security group, see Security groups for your VPC in the Amazon
Virtual Private Cloud User Guide.


The IP address can change when the following conditions are met:

• You are using a public IP address for communication between the external source instance and the
DB instance.
• The external source instance was stopped and restarted.

If these conditions are met, verify the IP address before adding it.

You might also need to configure your local network to permit connections from the IP address of
your Amazon RDS DB instance. You do this so that your local network can communicate with your
external MySQL or MariaDB instance. To find the IP address of the Amazon RDS DB instance, use the
host command.

host db_instance_endpoint

The host name is the DNS name from the Amazon RDS DB instance endpoint.
6. Using the client of your choice, connect to the external instance and create a user to use for
replication. Use this account solely for replication and restrict it to your domain to improve security.
The following is an example.

CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED BY 'password';

Note
Specify a password other than the placeholder shown here as a security best practice.
7. For the external instance, grant REPLICATION CLIENT and REPLICATION SLAVE privileges to
your replication user. For example, to grant the REPLICATION CLIENT and REPLICATION SLAVE
privileges on all databases for the 'repl_user' user for your domain, issue the following command.

GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com';

8. Make the Amazon RDS DB instance the replica. To do so, first connect to the Amazon RDS DB
instance as the master user. Then identify the external MySQL or MariaDB database as the source
instance by using the mysql.rds_set_external_master (p. 1769) command. Use the master log file
name and master log position that you determined in step 2. The following is an example.

CALL mysql.rds_set_external_master ('mymasterserver.mydomain.com', 3306, 'repl_user',


'password', 'mysql-bin-changelog.000031', 107, 0);

Note
On RDS for MySQL, you can choose to use delayed replication by running the
mysql.rds_set_external_master_with_delay (p. 1774) stored procedure instead.
On RDS for MySQL, one reason to use delayed replication is to turn on disaster
recovery with the mysql.rds_start_replication_until (p. 1780) stored procedure.
Currently, RDS for MariaDB supports delayed replication but doesn't support the
mysql.rds_start_replication_until procedure.
9. On the Amazon RDS DB instance, issue the mysql.rds_start_replication (p. 1780) command to start
replication.

CALL mysql.rds_start_replication;


Options for MariaDB database engine


Following, you can find descriptions for options, or additional features, that are available for Amazon
RDS instances running the MariaDB DB engine. To turn on these options, you add them to a custom
option group, and then associate the option group with your DB instance. For more information about
working with option groups, see Working with option groups (p. 331).

Amazon RDS supports the following options for MariaDB:

Option ID               Engine versions
MARIADB_AUDIT_PLUGIN    MariaDB 10.3 and higher

MariaDB Audit Plugin support


Amazon RDS supports using the MariaDB Audit Plugin on MariaDB database instances. The MariaDB
Audit Plugin records database activity such as users logging on to the database, queries run against the
database, and more. The record of database activity is stored in a log file.

Audit Plugin option settings


Amazon RDS supports the following settings for the MariaDB Audit Plugin option.
Note
If you don't configure an option setting in the RDS console, RDS uses the default setting.

SERVER_AUDIT_FILE_PATH
Valid values: /rdsdbdata/log/audit/
Default value: /rdsdbdata/log/audit/
Description: The location of the log file. The log file contains the record of the activity specified in
SERVER_AUDIT_EVENTS. For more information, see Viewing and listing database log files (p. 895) and
MariaDB database log files (p. 902).

SERVER_AUDIT_FILE_ROTATE_SIZE
Valid values: 1–1000000000
Default value: 1000000
Description: The size in bytes that, when reached, causes the file to rotate. For more information, see
Log file size (p. 906).

SERVER_AUDIT_FILE_ROTATIONS
Valid values: 0–100
Default value: 9
Description: The number of log rotations to save when server_audit_output_type=file. If set to 0,
then the log file never rotates. For more information, see Log file size (p. 906) and Downloading a
database log file (p. 896).

SERVER_AUDIT_EVENTS
Valid values: CONNECT, QUERY, TABLE, QUERY_DDL, QUERY_DML, QUERY_DML_NO_SELECT, QUERY_DCL
Default value: CONNECT, QUERY
Description: The types of activity to record in the log. Installing the MariaDB Audit Plugin is itself
logged.

• CONNECT: Log successful and unsuccessful connections to the database, and disconnections from the
database.
• QUERY: Log the text of all queries run against the database.
• TABLE: Log tables affected by queries when the queries are run against the database.
• QUERY_DDL: Similar to the QUERY event, but returns only data definition language (DDL) queries
(CREATE, ALTER, and so on).
• QUERY_DML: Similar to the QUERY event, but returns only data manipulation language (DML) queries
(INSERT, UPDATE, and so on, and also SELECT).
• QUERY_DML_NO_SELECT: Similar to the QUERY_DML event, but doesn't log SELECT queries.
• QUERY_DCL: Similar to the QUERY event, but returns only data control language (DCL) queries (GRANT,
REVOKE, and so on).

SERVER_AUDIT_INCL_USERS
Valid values: Multiple comma-separated values
Default value: None
Description: Include only activity from the specified users. By default, activity is recorded for all
users. SERVER_AUDIT_INCL_USERS and SERVER_AUDIT_EXCL_USERS are mutually exclusive. If you add values to
SERVER_AUDIT_INCL_USERS, make sure no values are added to SERVER_AUDIT_EXCL_USERS.

SERVER_AUDIT_EXCL_USERS
Valid values: Multiple comma-separated values
Default value: None
Description: Exclude activity from the specified users. By default, activity is recorded for all users.
SERVER_AUDIT_INCL_USERS and SERVER_AUDIT_EXCL_USERS are mutually exclusive. If you add values to
SERVER_AUDIT_EXCL_USERS, make sure no values are added to SERVER_AUDIT_INCL_USERS.

The rdsadmin user queries the database every second to check the health of the database. Depending on
your other settings, this activity can possibly cause the size of your log file to grow very large, very
quickly. If you don't need to record this activity, add the rdsadmin user to the
SERVER_AUDIT_EXCL_USERS list.
Note
CONNECT activity is always recorded for all users, even if the user is specified for this option setting.

SERVER_AUDIT_LOGGING
Valid values: ON
Default value: ON
Description: Logging is active. The only valid value is ON. Amazon RDS does not support deactivating
logging. If you want to deactivate logging, remove the MariaDB Audit Plugin. For more information, see
Removing the MariaDB Audit Plugin (p. 1336).

SERVER_AUDIT_QUERY_LOG_LIMIT
Valid values: 0–2147483647
Default value: 1024
Description: The limit on the length of the query string in a record.


Adding the MariaDB Audit Plugin


The general process for adding the MariaDB Audit Plugin to a DB instance is the following:

1. Create a new option group, or copy or modify an existing option group.


2. Add the option to the option group.
3. Associate the option group with the DB instance.

After you add the MariaDB Audit Plugin, you don't need to restart your DB instance. As soon as the
option group is active, auditing begins immediately.

To add the MariaDB Audit Plugin

1. Determine the option group you want to use. You can create a new option group or use an existing
option group. If you want to use an existing option group, skip to the next step. Otherwise, create a
custom DB option group. Choose mariadb for Engine, and choose 10.3 or higher for Major engine
version. For more information, see Creating an option group (p. 332).
2. Add the MARIADB_AUDIT_PLUGIN option to the option group, and configure the option settings.
For more information about adding options, see Adding an option to an option group (p. 335). For
more information about each setting, see Audit Plugin option settings (p. 1334).
3. Apply the option group to a new or existing DB instance.

• For a new DB instance, you apply the option group when you launch the instance. For more
information, see Creating an Amazon RDS DB instance (p. 300).
• For an existing DB instance, you apply the option group by modifying the DB instance and
attaching the new option group. For more information, see Modifying an Amazon RDS DB
instance (p. 401).
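
For example, you can add the option and configure its settings with the AWS CLI. The following is a minimal sketch, not the only way to do it; the option group name is a placeholder, and the option settings shown are examples only.

# my-mariadb-audit-group is a placeholder; use your own option group name
aws rds add-option-to-option-group \
    --option-group-name my-mariadb-audit-group \
    --options '[{"OptionName":"MARIADB_AUDIT_PLUGIN","OptionSettings":[{"Name":"SERVER_AUDIT_EVENTS","Value":"CONNECT,QUERY"}]}]' \
    --apply-immediately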

Viewing and downloading the MariaDB Audit Plugin log


After you enable the MariaDB Audit Plugin, you access the results in the log files the same way you
access any other text-based log files. The audit log files are located at /rdsdbdata/log/audit/. For
information about viewing the log file in the console, see Viewing and listing database log files (p. 895).
For information about downloading the log file, see Downloading a database log file (p. 896).
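
For example, you might list and download the audit log files with the AWS CLI. The following sketch assumes a DB instance named mydbinstance; the log file name shown is only an example, so list the actual file names first.

# List the audit log files for the DB instance (mydbinstance is a placeholder)
aws rds describe-db-log-files --db-instance-identifier mydbinstance --filename-contains audit

# Download one of the returned files (the file name here is an example)
aws rds download-db-log-file-portion \
    --db-instance-identifier mydbinstance \
    --log-file-name audit/server_audit.log \
    --starting-token 0 --output text > server_audit.log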

Modifying MariaDB Audit Plugin settings


After you enable the MariaDB Audit Plugin, you can modify settings for the plugin. For more information
about how to modify option settings, see Modifying an option setting (p. 340). For more information
about each setting, see Audit Plugin option settings (p. 1334).

Removing the MariaDB Audit Plugin


Amazon RDS doesn't support turning off logging in the MariaDB Audit Plugin. However, you can remove
the plugin from a DB instance. When you remove the MariaDB Audit Plugin, the DB instance is restarted
automatically to stop auditing.

To remove the MariaDB Audit Plugin from a DB instance, do one of the following:

• Remove the MariaDB Audit Plugin option from the option group it belongs to. This change affects all
DB instances that use the option group. For more information, see Removing an option from an option
group (p. 343).
• Modify the DB instance and specify a different option group that doesn't include the plugin. This
change affects a single DB instance. You can specify the default (empty) option group, or a different
custom option group. For more information, see Modifying an Amazon RDS DB instance (p. 401).
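
For example, a minimal AWS CLI sketch of the first approach (removing the option from its option group); the option group name is a placeholder.

# Removing the option restarts the DB instances that use this option group
aws rds remove-option-from-option-group \
    --option-group-name my-mariadb-audit-group \
    --options MARIADB_AUDIT_PLUGIN \
    --apply-immediately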


Parameters for MariaDB


By default, a MariaDB DB instance uses a DB parameter group that is specific to a MariaDB database.
This parameter group contains some but not all of the parameters contained in the Amazon RDS DB
parameter groups for the MySQL database engine. It also contains a number of new, MariaDB-specific
parameters. For information about working with parameter groups and setting parameters, see Working
with parameter groups (p. 347).

Viewing MariaDB parameters


RDS for MariaDB parameters are set to the default values of the storage engine that you have selected.
For more information about MariaDB parameters, see the MariaDB documentation. For more information
about MariaDB storage engines, see Supported storage engines for MariaDB on Amazon RDS (p. 1261).

You can view the parameters available for a specific RDS for MariaDB version using the RDS console or
the AWS CLI. For information about viewing the parameters in a MariaDB parameter group in the RDS
console, see Viewing parameter values for a DB parameter group (p. 359).

Using the AWS CLI, you can view the parameters for an RDS for MariaDB version by running the
describe-engine-default-parameters command. Specify one of the following values for the
--db-parameter-group-family option:

• mariadb10.11
• mariadb10.6
• mariadb10.5
• mariadb10.4
• mariadb10.3

For example, to view the parameters for RDS for MariaDB version 10.6, run the following command.

aws rds describe-engine-default-parameters --db-parameter-group-family mariadb10.6

Your output looks similar to the following.

{
    "EngineDefaults": {
        "Parameters": [
            {
                "ParameterName": "alter_algorithm",
                "Description": "Specify the alter table algorithm.",
                "Source": "engine-default",
                "ApplyType": "dynamic",
                "DataType": "string",
                "AllowedValues": "DEFAULT,COPY,INPLACE,NOCOPY,INSTANT",
                "IsModifiable": true
            },
            {
                "ParameterName": "analyze_sample_percentage",
                "Description": "Percentage of rows from the table ANALYZE TABLE will sample to collect table statistics.",
                "Source": "engine-default",
                "ApplyType": "dynamic",
                "DataType": "float",
                "AllowedValues": "0-100",
                "IsModifiable": true
            },
            {
                "ParameterName": "aria_block_size",
                "Description": "Block size to be used for Aria index pages.",
                "Source": "engine-default",
                "ApplyType": "static",
                "DataType": "integer",
                "AllowedValues": "1024-32768",
                "IsModifiable": false
            },
            {
                "ParameterName": "aria_checkpoint_interval",
                "Description": "Interval in seconds between automatic checkpoints.",
                "Source": "engine-default",
                "ApplyType": "dynamic",
                "DataType": "integer",
                "AllowedValues": "0-4294967295",
                "IsModifiable": true
            },
            ...

To list only the modifiable parameters for RDS for MariaDB version 10.6, run the following command.

For Linux, macOS, or Unix:

aws rds describe-engine-default-parameters --db-parameter-group-family mariadb10.6 \
    --query 'EngineDefaults.Parameters[?IsModifiable==`true`]'

For Windows:

aws rds describe-engine-default-parameters --db-parameter-group-family mariadb10.6 ^
    --query "EngineDefaults.Parameters[?IsModifiable==`true`]"

MySQL parameters that aren't available


The following MySQL parameters are not available in MariaDB-specific DB parameter groups:

• bind_address
• binlog_error_action
• binlog_gtid_simple_recovery
• binlog_max_flush_queue_time
• binlog_order_commits
• binlog_row_image
• binlog_rows_query_log_events
• binlogging_impossible_mode
• block_encryption_mode
• core_file
• default_tmp_storage_engine
• div_precision_increment
• end_markers_in_json
• enforce_gtid_consistency
• eq_range_index_dive_limit
• explicit_defaults_for_timestamp
• gtid_executed
• gtid-mode


• gtid_next
• gtid_owned
• gtid_purged
• log_bin_basename
• log_bin_index
• log_bin_use_v1_row_events
• log_slow_admin_statements
• log_slow_slave_statements
• log_throttle_queries_not_using_indexes
• master-info-repository
• optimizer_trace
• optimizer_trace_features
• optimizer_trace_limit
• optimizer_trace_max_mem_size
• optimizer_trace_offset
• relay_log_info_repository
• rpl_stop_slave_timeout
• slave_parallel_workers
• slave_pending_jobs_size_max
• slave_rows_search_algorithms
• storage_engine
• table_open_cache_instances
• timed_mutexes
• transaction_allow_batching
• validate-password
• validate_password_dictionary_file
• validate_password_length
• validate_password_mixed_case_count
• validate_password_number_count
• validate_password_policy
• validate_password_special_char_count

For more information on MySQL parameters, see the MySQL documentation.


Migrating data from a MySQL DB snapshot to a MariaDB DB instance
You can migrate an RDS for MySQL DB snapshot to a new DB instance running MariaDB using the AWS
Management Console, the AWS CLI, or the Amazon RDS API. You must use a DB snapshot that was created
from an Amazon RDS DB instance running MySQL 5.6 or 5.7. To learn how to create an RDS for MySQL
DB snapshot, see Creating a DB snapshot (p. 613).

Migrating the snapshot doesn't affect the original DB instance from which the snapshot was taken. You
can test and validate the new DB instance before diverting traffic to it as a replacement for the original
DB instance.

After you migrate from MySQL to MariaDB, the MariaDB DB instance is associated with the default DB
parameter group and option group. After you restore the DB snapshot, you can associate a custom DB
parameter group with the new DB instance. However, a MariaDB parameter group has a different set
of configurable system variables. For information about the differences between MySQL and MariaDB
system variables, see System Variable Differences between MariaDB and MySQL. To learn about DB
parameter groups, see Working with parameter groups (p. 347). To learn about option groups, see
Working with option groups (p. 331).

Performing the migration


You can migrate an RDS for MySQL DB snapshot to a new MariaDB DB instance using the AWS
Management Console, the AWS CLI, or the RDS API.

Console

To migrate a MySQL DB snapshot to a MariaDB DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots, and then select the MySQL DB snapshot you want to
migrate.
3. For Actions, choose Migrate snapshot. The Migrate database page appears.
4. For Migrate to DB Engine, choose mariadb.

Amazon RDS selects the DB engine version automatically. You can't change the DB engine version.


5. For the remaining sections, specify your DB instance settings. For information about each setting,
see Settings for DB instances (p. 308).
6. Choose Migrate.

AWS CLI
To migrate data from a MySQL DB snapshot to a MariaDB DB instance, use the AWS CLI restore-db-
instance-from-db-snapshot command with the following parameters:

• --db-instance-identifier – Name of the DB instance to create from the DB snapshot.


• --db-snapshot-identifier – The identifier for the DB snapshot to restore from.
• --engine – The database engine to use for the new instance.

Example

For Linux, macOS, or Unix:

aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier newmariadbinstance \
    --db-snapshot-identifier mysqlsnapshot \
    --engine mariadb

For Windows:

aws rds restore-db-instance-from-db-snapshot ^
    --db-instance-identifier newmariadbinstance ^
    --db-snapshot-identifier mysqlsnapshot ^
    --engine mariadb


API
To migrate data from a MySQL DB snapshot to a MariaDB DB instance, call the Amazon RDS API
operation RestoreDBInstanceFromDBSnapshot.

Incompatibilities between MariaDB and MySQL


Incompatibilities between MySQL and MariaDB include the following:

• You can't migrate a DB snapshot created with MySQL 8.0 to MariaDB.


• If the source MySQL database uses a SHA256 password hash, make sure to reset user passwords that
are SHA256 hashed before you connect to the MariaDB database. The following code shows how to
reset a password that is SHA256 hashed.

SET old_passwords = 0;
UPDATE mysql.user SET plugin = 'mysql_native_password',
    Password = PASSWORD('new_password')
WHERE (User, Host) = ('master_user_name', '%');
FLUSH PRIVILEGES;

• If your RDS master user account uses the SHA-256 password hash, make sure to reset the password
using the AWS Management Console, the modify-db-instance AWS CLI command, or the
ModifyDBInstance RDS API operation (an example CLI command follows this list). For information about
modifying a DB instance, see Modifying an Amazon RDS DB instance (p. 401).
• MariaDB doesn't support the Memcached plugin. However, the data used by the Memcached plugin
is stored as InnoDB tables. After you migrate a MySQL DB snapshot, you can access the data used by
the Memcached plugin using SQL. For more information about the innodb_memcache database, see
InnoDB memcached Plugin Internals.
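
The following is a minimal AWS CLI sketch of resetting the master user password; the instance identifier and password are placeholders.

aws rds modify-db-instance \
    --db-instance-identifier newmariadbinstance \
    --master-user-password 'new_password' \
    --apply-immediately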


MariaDB on Amazon RDS SQL reference


Following, you can find descriptions of system stored procedures that are available for Amazon RDS
instances running the MariaDB DB engine.

You can use the system stored procedures that are available for MySQL DB instances and MariaDB
DB instances. These stored procedures are documented at RDS for MySQL stored procedure
reference (p. 1757). MariaDB DB instances support all of the stored procedures, except for
mysql.rds_start_replication_until and mysql.rds_start_replication_until_gtid.

Additionally, the following system stored procedures are supported only for Amazon RDS DB instances
running MariaDB:

• mysql.rds_replica_status (p. 1344)


• mysql.rds_set_external_master_gtid (p. 1345)
• mysql.rds_kill_query_id (p. 1347)

mysql.rds_replica_status
Shows the replication status of a MariaDB read replica.

Call this procedure on the read replica to show status information on essential parameters of the replica
threads.

Syntax
CALL mysql.rds_replica_status;

Usage notes
This procedure is only supported for MariaDB DB instances running MariaDB version 10.5 and higher.

This procedure is the equivalent of the SHOW REPLICA STATUS command. This command isn't
supported for MariaDB version 10.5 and higher DB instances.

In prior versions of MariaDB, the equivalent SHOW SLAVE STATUS command required the REPLICATION
SLAVE privilege. In MariaDB version 10.5 and higher, it requires the REPLICATION REPLICA ADMIN
privilege. To protect the RDS management of MariaDB 10.5 and higher DB instances, this new privilege
isn't granted to the RDS master user.

Examples
The following example shows the status of a MariaDB read replica:

call mysql.rds_replica_status;

The response is similar to the following:

*************************** 1. row ***************************
Replica_IO_State: Waiting for master to send event
Source_Host: XX.XX.XX.XXX
Source_User: rdsrepladmin
Source_Port: 3306


Connect_Retry: 60
Source_Log_File: mysql-bin-changelog.003988
Read_Source_Log_Pos: 405
Relay_Log_File: relaylog.011024
Relay_Log_Pos: 657
Relay_Source_Log_File: mysql-bin-changelog.003988
Replica_IO_Running: Yes
Replica_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
mysql.rds_sysinfo,mysql.rds_history,mysql.rds_replication_status
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Source_Log_Pos: 405
Relay_Log_Space: 1016
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Source_SSL_Allowed: No
Source_SSL_CA_File:
Source_SSL_CA_Path:
Source_SSL_Cert:
Source_SSL_Cipher:
Source_SSL_Key:
Seconds_Behind_Master: 0
Source_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Source_Server_Id: 807509301
Source_SSL_Crl:
Source_SSL_Crlpath:
Using_Gtid: Slave_Pos
Gtid_IO_Pos: 0-807509301-3980
Replicate_Do_Domain_Ids:
Replicate_Ignore_Domain_Ids:
Parallel_Mode: optimistic
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Replica_SQL_Running_State: Reading event from the relay log
Replica_DDL_Groups: 15
Replica_Non_Transactional_Groups: 0
Replica_Transactional_Groups: 3658
1 row in set (0.000 sec)

Query OK, 0 rows affected (0.000 sec)

mysql.rds_set_external_master_gtid
Configures GTID-based replication from a MariaDB instance running external to Amazon RDS to a
MariaDB DB instance. This stored procedure is supported only where the external MariaDB instance
is version 10.0.24 or higher. When setting up replication where one or both instances do not support
MariaDB global transaction identifiers (GTIDs), use mysql.rds_set_external_master (p. 1769).

Using GTIDs for replication provides crash-safety features not offered by binary log replication, so we
recommend it in cases where the replicating instances support it.


Syntax

CALL mysql.rds_set_external_master_gtid(
host_name
, host_port
, replication_user_name
, replication_user_password
, gtid
, ssl_encryption
);

Parameters
host_name

String. The host name or IP address of the MariaDB instance running external to Amazon RDS that
will become the source instance.
host_port

Integer. The port used by the MariaDB instance running external to Amazon RDS to be configured
as the source instance. If your network configuration includes SSH port replication that converts the
port number, specify the port number that is exposed by SSH.
replication_user_name

String. The ID of a user with REPLICATION SLAVE permissions in the MariaDB DB instance to be
configured as the read replica.
replication_user_password

String. The password of the user ID specified in replication_user_name.


gtid

String. The global transaction ID on the source instance that replication should start from.

You can use @@gtid_current_pos to get the current GTID if the source instance has been locked
while you are configuring replication, so the binary log doesn't change between the points when you
get the GTID and when replication starts.

Otherwise, if you are using mysqldump version 10.0.13 or greater to populate the replica instance
prior to starting replication, you can get the GTID position in the output by using the --master-
data or --dump-slave options. If you are not using mysqldump version 10.0.13 or greater, you
can run the SHOW MASTER STATUS or use those same mysqldump options to get the binary log
file name and position, then convert them to a GTID by running BINLOG_GTID_POS on the external
MariaDB instance:

SELECT BINLOG_GTID_POS('<binary log file name>', <binary log file position>);

For more information about the MariaDB implementation of GTIDs, go to Global transaction ID in
the MariaDB documentation.
ssl_encryption

A value that specifies whether Secure Socket Layer (SSL) encryption is used on the replication
connection. 1 specifies to use SSL encryption, 0 specifies to not use encryption. The default is 0.
Note
The MASTER_SSL_VERIFY_SERVER_CERT option isn't supported. This option is set to 0,
which means that the connection is encrypted, but the certificates aren't verified.


Usage notes
The mysql.rds_set_external_master_gtid procedure must be run by the master user. It must be
run on the MariaDB DB instance that you are configuring as the replica of a MariaDB instance running
external to Amazon RDS. Before running mysql.rds_set_external_master_gtid, you must have
configured the instance of MariaDB running external to Amazon RDS as a source instance. For more
information, see Importing data into a MariaDB DB instance (p. 1296).
Warning
Do not use mysql.rds_set_external_master_gtid to manage replication between
two Amazon RDS DB instances. Use it only when replicating with a MariaDB instance running
external to RDS. For information about managing replication between Amazon RDS DB
instances, see Working with DB instance read replicas (p. 438).

After calling mysql.rds_set_external_master_gtid to configure an Amazon RDS DB instance
as a read replica, you can call mysql.rds_start_replication (p. 1780) on the replica to start the
replication process. You can call mysql.rds_reset_external_master (p. 1769) to remove the read replica
configuration.

When mysql.rds_set_external_master_gtid is called, Amazon RDS records the time, user, and an
action of "set master" in the mysql.rds_history and mysql.rds_replication_status tables.

Examples
When run on a MariaDB DB instance, the following example configures it as the replica of an instance of
MariaDB running external to Amazon RDS.

call mysql.rds_set_external_master_gtid
('Sourcedb.some.com',3306,'ReplicationUser','SomePassW0rd','0-123-456',0);

mysql.rds_kill_query_id
Ends a query running against the MariaDB server.

Syntax
CALL mysql.rds_kill_query_id(queryID);

Parameters
queryID

Integer. The identity of the query to be ended.

Usage notes
To stop a query running against the MariaDB server, use the mysql.rds_kill_query_id procedure
and pass in the ID of that query. To obtain the query ID, query the MariaDB Information schema
PROCESSLIST table, as shown following:

SELECT USER, HOST, COMMAND, TIME, STATE, INFO, QUERY_ID
FROM INFORMATION_SCHEMA.PROCESSLIST WHERE USER = '<user name>';

The connection to the MariaDB server is retained.


Examples
The following example ends a query with a query ID of 230040:

call mysql.rds_kill_query_id(230040);


Local time zone for MariaDB DB instances


By default, the time zone for a MariaDB DB instance is Universal Time Coordinated (UTC). You can set the
time zone for your DB instance to the local time zone for your application instead.

To set the local time zone for a DB instance, set the time_zone parameter in the parameter group for
your DB instance to one of the supported values listed later in this section. When you set the time_zone
parameter for a parameter group, all DB instances and read replicas that are using that parameter group
change to use the new local time zone. For information on setting parameters in a parameter group, see
Working with parameter groups (p. 347).
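
For example, you might set the parameter with the AWS CLI. The parameter group name below is a placeholder, and US/Pacific is just one of the supported values listed later in this section.

aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mariadb-parameters \
    --parameters "ParameterName=time_zone,ParameterValue=US/Pacific,ApplyMethod=immediate"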

After you set the local time zone, all new connections to the database reflect the change. If you have any
open connections to your database when you change the local time zone, you won't see the local time
zone update until after you close the connection and open a new connection.

You can set a different local time zone for a DB instance and one or more of its read replicas. To do this,
use a different parameter group for the DB instance and the replica or replicas and set the time_zone
parameter in each parameter group to a different local time zone.

If you are replicating across AWS Regions, then the source DB instance and the read replica use different
parameter groups (parameter groups are unique to an AWS Region). To use the same local time zone
for each instance, you must set the time_zone parameter in the instance's and read replica's parameter
groups.

When you restore a DB instance from a DB snapshot, the local time zone is set to UTC. You can update
the time zone to your local time zone after the restore is complete. If you restore a DB instance to a
point in time, then the local time zone for the restored DB instance is the time zone setting from the
parameter group of the restored DB instance.

The Internet Assigned Numbers Authority (IANA) publishes new time zones at https://www.iana.org/
time-zones several times a year. Every time RDS releases a new minor maintenance release of MariaDB, it
ships with the latest time zone data at the time of the release. When you use the latest RDS for MariaDB
versions, you have recent time zone data from RDS. To ensure that your DB instance has recent time
zone data, we recommend upgrading to a higher DB engine version. Alternatively, you can modify the
time zone tables in MariaDB DB instances manually. To do so, you can use SQL commands or run the
mysql_tzinfo_to_sql tool in a SQL client. After updating the time zone data manually, reboot your DB
instance so that the changes take effect. RDS doesn't modify or reset the time zone data of running DB
instances. New time zone data is installed only when you perform a database engine version upgrade.
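
For example, on a client host that has the MariaDB client tools and a local zoneinfo directory, a manual update might look like the following sketch; the endpoint and user name are placeholders, and the master user needs sufficient privileges on the mysql schema time zone tables.

# Placeholders: DB instance endpoint and user name
mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql \
    --host=mydbinstance.123456789012.us-east-1.rds.amazonaws.com \
    --user=admin --password mysql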

You can set your local time zone to one of the following values.

Africa/Cairo, Africa/Casablanca, Africa/Harare, Africa/Monrovia, Africa/Nairobi, Africa/Tripoli, Africa/Windhoek

America/Araguaina, America/Asuncion, America/Bogota, America/Buenos_Aires, America/Caracas, America/Chihuahua, America/Cuiaba, America/Denver, America/Fortaleza, America/Guatemala, America/Halifax, America/Manaus, America/Matamoros, America/Monterrey, America/Montevideo, America/Phoenix, America/Santiago, America/Tijuana

Asia/Amman, Asia/Ashgabat, Asia/Baghdad, Asia/Baku, Asia/Bangkok, Asia/Beirut, Asia/Calcutta, Asia/Damascus, Asia/Dhaka, Asia/Irkutsk, Asia/Jerusalem, Asia/Kabul, Asia/Karachi, Asia/Kathmandu, Asia/Krasnoyarsk, Asia/Magadan, Asia/Muscat, Asia/Novosibirsk, Asia/Riyadh, Asia/Seoul, Asia/Shanghai, Asia/Singapore, Asia/Taipei, Asia/Tehran, Asia/Tokyo, Asia/Ulaanbaatar, Asia/Vladivostok, Asia/Yakutsk, Asia/Yerevan

Atlantic/Azores

Australia/Adelaide, Australia/Brisbane, Australia/Darwin, Australia/Hobart, Australia/Perth, Australia/Sydney

Brazil/East

Canada/Newfoundland, Canada/Saskatchewan, Canada/Yukon

Europe/Amsterdam, Europe/Athens, Europe/Dublin, Europe/Helsinki, Europe/Istanbul, Europe/Kaliningrad, Europe/Moscow, Europe/Paris, Europe/Prague, Europe/Sarajevo

Pacific/Auckland, Pacific/Fiji, Pacific/Guam, Pacific/Honolulu, Pacific/Samoa

US/Alaska, US/Central, US/Eastern, US/East-Indiana, US/Pacific

UTC


Known issues and limitations for RDS for MariaDB


The following items are known issues and limitations when using RDS for MariaDB.
Note
This list is not exhaustive.

Topics
• MariaDB file size limits in Amazon RDS (p. 1352)
• InnoDB reserved word (p. 1353)
• Custom ports (p. 1353)
• Performance Insights (p. 1353)

MariaDB file size limits in Amazon RDS


For MariaDB DB instances, the maximum size of a table is 16 TB when using InnoDB file-per-table
tablespaces. This limit also constrains the system tablespace to a maximum size of 16 TB. InnoDB file-
per-table tablespaces (with tables each in their own tablespace) are set by default for MariaDB DB
instances. This limit isn't related to the maximum storage limit for MariaDB DB instances. For more
information about the storage limit, see Amazon RDS DB instance storage (p. 101).

There are advantages and disadvantages to using InnoDB file-per-table tablespaces, depending on your
application. To determine the best approach for your application, see File-per-table tablespaces in the
MySQL documentation.

We don't recommend allowing tables to grow to the maximum file size. In general, a better practice is to
partition data into smaller tables, which can improve performance and recovery times.

One option that you can use for breaking up a large table into smaller tables is partitioning. Partitioning
distributes portions of your large table into separate files based on rules that you specify. For example,
if you store transactions by date, you can create partitioning rules that distribute older transactions into
separate files using partitioning. Then periodically, you can archive the historical transaction data that
doesn't need to be readily available to your application. For more information, see Partitioning in the
MySQL documentation.
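
For example, the following sketch range-partitions a hypothetical transactions table by year; the table and column names are illustrative only.

-- Hypothetical example: each year's rows land in their own partition file
CREATE TABLE transactions (
    id BIGINT NOT NULL,
    tx_date DATE NOT NULL,
    amount DECIMAL(10,2),
    PRIMARY KEY (id, tx_date)   -- the partitioning column must be part of every unique key
) ENGINE=InnoDB
PARTITION BY RANGE (YEAR(tx_date)) (
    PARTITION p2022 VALUES LESS THAN (2023),
    PARTITION p2023 VALUES LESS THAN (2024),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);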

To determine the size of all InnoDB tablespaces

• Use the following SQL command to determine if any of your tables are too large and are candidates
for partitioning.
Note
For MariaDB 10.6 and higher, this query also returns the size of the InnoDB system
tablespace.
For MariaDB versions earlier than 10.6, you can't determine the size of the InnoDB system
tablespace by querying the system tables. We recommend that you upgrade to a later
version.

SELECT SPACE,NAME,ROUND((ALLOCATED_SIZE/1024/1024/1024), 2)
as "Tablespace Size (GB)"
FROM information_schema.INNODB_SYS_TABLESPACES ORDER BY 3 DESC;

To determine the size of non-InnoDB user tables

• Use the following SQL command to determine if any of your non-InnoDB user tables are too large.


SELECT TABLE_SCHEMA, TABLE_NAME,
    ROUND(((DATA_LENGTH + INDEX_LENGTH + DATA_FREE) / 1024 / 1024 / 1024), 2) AS "Approximate size (GB)"
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema')
    AND ENGINE <> 'InnoDB';

To enable InnoDB file-per-table tablespaces

• Set the innodb_file_per_table parameter to 1 in the parameter group for the DB instance.

To disable InnoDB file-per-table tablespaces

• Set the innodb_file_per_table parameter to 0 in the parameter group for the DB instance.

For information on updating a parameter group, see Working with parameter groups (p. 347).

When you have enabled or disabled InnoDB file-per-table tablespaces, you can issue an ALTER TABLE
command. You can use this command to move a table from the global tablespace to its own tablespace.
Or you can move a table from its own tablespace to the global tablespace. Following is an example.

ALTER TABLE table_name ENGINE=InnoDB, ALGORITHM=COPY;

InnoDB reserved word


InnoDB is a reserved word for RDS for MariaDB. You can't use this name for a MariaDB database.

Custom ports
Amazon RDS blocks connections to custom port 33060 for the MariaDB engine. Choose a different port
for your MariaDB engine.

Performance Insights
InnoDB counters are not visible in Performance Insights for RDS for MariaDB version 10.11 because the
MariaDB community no longer supports them.


Amazon RDS for Microsoft SQL Server
Amazon RDS supports several versions and editions of Microsoft SQL Server. The following table shows
the most recent supported minor version of each major version. For the full list of supported versions,
editions, and RDS engine versions, see Microsoft SQL Server versions on Amazon RDS (p. 1362).

Major version     Service Pack / GDR   Cumulative Update   Minor version   Knowledge Base Article   Release Date
SQL Server 2019   –                    CU21                15.0.4316.3     KB5025808                June 15, 2023
SQL Server 2017   GDR                  CU31                14.0.3460.9     KB5021126                February 14, 2023
SQL Server 2016   SP3 GDR              –                   13.0.6430.49    KB5021129                February 14, 2023
SQL Server 2014   SP3 GDR              CU4                 12.0.6444.4     KB5021045                February 14, 2023

For information about licensing for SQL Server, see Licensing Microsoft SQL Server on Amazon
RDS (p. 1379). For information about SQL Server builds, see this Microsoft support article about the
latest SQL Server builds.

With Amazon RDS, you can create DB instances and DB snapshots, point-in-time restores, and automated
or manual backups. DB instances running SQL Server can be used inside a VPC. You can also use Secure
Sockets Layer (SSL) to connect to a DB instance running SQL Server, and you can use transparent data
encryption (TDE) to encrypt data at rest. Amazon RDS currently supports Multi-AZ deployments for SQL
Server using SQL Server Database Mirroring (DBM) or Always On Availability Groups (AGs) as a high-
availability, failover solution.

To deliver a managed service experience, Amazon RDS does not provide shell access to DB instances,
and it restricts access to certain system procedures and tables that require advanced privileges. Amazon
RDS supports access to databases on a DB instance using any standard SQL client application such
as Microsoft SQL Server Management Studio. Amazon RDS does not allow direct host access to a DB
instance via Telnet, Secure Shell (SSH), or Windows Remote Desktop Connection. When you create a DB
instance, the master user is assigned to the db_owner role for all user databases on that instance, and has
all database-level permissions except for those that are used for backups. Amazon RDS manages backups
for you.

Before creating your first DB instance, you should complete the steps in the setting up section of this
guide. For more information, see Setting up for Amazon RDS (p. 174).

Topics
• Common management tasks for Microsoft SQL Server on Amazon RDS (p. 1355)
• Limitations for Microsoft SQL Server DB instances (p. 1357)
• DB instance class support for Microsoft SQL Server (p. 1358)
• Microsoft SQL Server security (p. 1360)


• Compliance program support for Microsoft SQL Server DB instances (p. 1361)
• SSL support for Microsoft SQL Server DB instances (p. 1362)
• Microsoft SQL Server versions on Amazon RDS (p. 1362)
• Version management in Amazon RDS (p. 1363)
• Microsoft SQL Server features on Amazon RDS (p. 1364)
• Change data capture support for Microsoft SQL Server DB instances (p. 1366)
• Features not supported and features with limited support (p. 1367)
• Multi-AZ deployments using Microsoft SQL Server Database Mirroring or Always On availability
groups (p. 1368)
• Using Transparent Data Encryption to encrypt data at rest (p. 1368)
• Functions and stored procedures for Amazon RDS for Microsoft SQL Server (p. 1368)
• Local time zone for Microsoft SQL Server DB instances (p. 1371)
• Licensing Microsoft SQL Server on Amazon RDS (p. 1379)
• Connecting to a DB instance running the Microsoft SQL Server database engine (p. 1380)
• Working with Active Directory with RDS for SQL Server (p. 1387)
• Updating applications to connect to Microsoft SQL Server DB instances using new SSL/TLS
certificates (p. 1411)
• Upgrading the Microsoft SQL Server DB engine (p. 1414)
• Importing and exporting SQL Server databases using native backup and restore (p. 1419)
• Working with read replicas for Microsoft SQL Server in Amazon RDS (p. 1446)
• Multi-AZ deployments for Amazon RDS for Microsoft SQL Server (p. 1450)
• Additional features for Microsoft SQL Server on Amazon RDS (p. 1455)
• Options for the Microsoft SQL Server database engine (p. 1514)
• Common DBA tasks for Microsoft SQL Server (p. 1602)

Common management tasks for Microsoft SQL Server on Amazon RDS
The following are the common management tasks you perform with an Amazon RDS for SQL Server DB
instance, with links to relevant documentation for each task.

Instance classes, storage, and PIOPS

If you are creating a DB instance for production purposes, you should understand how instance classes, storage types, and Provisioned IOPS work in Amazon RDS.

Relevant documentation: DB instance class support for Microsoft SQL Server (p. 1358), Amazon RDS storage types (p. 101)

Multi-AZ deployments

A production DB instance should use Multi-AZ deployments. Multi-AZ deployments provide increased availability, data durability, and fault tolerance for DB instances. Multi-AZ deployments for SQL Server are implemented using SQL Server's native DBM or AGs technology.

Relevant documentation: Configuring and managing a Multi-AZ deployment (p. 492), Multi-AZ deployments using Microsoft SQL Server Database Mirroring or Always On availability groups (p. 1368)

Amazon Virtual Private Cloud (VPC)

If your AWS account has a default VPC, then your DB instance is automatically created inside the default VPC. If your account does not have a default VPC, and you want the DB instance in a VPC, you must create the VPC and subnet groups before you create the DB instance.

Relevant documentation: Working with a DB instance in a VPC (p. 2688)

Security groups

By default, DB instances are created with a firewall that prevents access to them. You therefore must create a security group with the correct IP addresses and network configuration to access the DB instance.

Relevant documentation: Controlling access with security groups (p. 2680)

Parameter groups

If your DB instance is going to require specific database parameters, you should create a parameter group before you create the DB instance.

Relevant documentation: Working with parameter groups (p. 347)

Option groups

If your DB instance is going to require specific database options, you should create an option group before you create the DB instance.

Relevant documentation: Options for the Microsoft SQL Server database engine (p. 1514)

Connecting to your DB instance

After creating a security group and associating it to a DB instance, you can connect to the DB instance using any standard SQL client application such as Microsoft SQL Server Management Studio.

Relevant documentation: Connecting to a DB instance running the Microsoft SQL Server database engine (p. 1380)

Backup and restore

When you create your DB instance, you can configure it to take automated backups. You can also back up and restore your databases manually by using full backup files (.bak files).

Relevant documentation: Working with backups (p. 591), Importing and exporting SQL Server databases using native backup and restore (p. 1419)

Monitoring

You can monitor your SQL Server DB instance by using CloudWatch, Amazon RDS metrics, events, and enhanced monitoring.

Relevant documentation: Viewing metrics in the Amazon RDS console (p. 696), Viewing Amazon RDS events (p. 852)

Log files

You can access the log files for your SQL Server DB instance.

Relevant documentation: Monitoring Amazon RDS log files (p. 895), Microsoft SQL Server database log files (p. 911)

There are also advanced administrative tasks for working with SQL Server DB instances. For more
information, see the following documentation:

• Common DBA tasks for Microsoft SQL Server (p. 1602).


• Working with AWS Managed Active Directory with RDS for SQL Server (p. 1401)
• Accessing the tempdb database (p. 1603)


Limitations for Microsoft SQL Server DB instances


The Amazon RDS implementation of Microsoft SQL Server on a DB instance has some limitations that
you should be aware of:

• The maximum number of databases supported on a DB instance depends on the instance class type
and the availability mode—Single-AZ, Multi-AZ Database Mirroring (DBM), or Multi-AZ Availability
Groups (AGs). The Microsoft SQL Server system databases don't count toward this limit.

The following table shows the maximum number of supported databases for each instance class type
and availability mode. Use this table to help you decide if you can move from one instance class type
to another, or from one availability mode to another. If your source DB instance has more databases
than the target instance class type or availability mode can support, modifying the DB instance fails.
You can see the status of your request in the Events pane.

Instance class type            Single-AZ   Multi-AZ with DBM   Multi-AZ with Always On AGs
db.*.micro to db.*.medium      30          N/A                 N/A
db.*.large                     30          30                  30
db.*.xlarge to db.*.16xlarge   100         50                  75
db.*.24xlarge                  100         50                  100

* Represents the different instance class types.

For example, let's say that your DB instance runs on a db.*.16xlarge with Single-AZ and that it has 76
databases. You modify the DB instance to upgrade to using Multi-AZ Always On AGs. This upgrade
fails, because your DB instance contains more databases than your target configuration can support. If
you upgrade your instance class type to db.*.24xlarge instead, the modification succeeds.

If the upgrade fails, you see events and messages similar to the following:
• Unable to modify database instance class. The instance has 76 databases, but after conversion it
would only support 75.
• Unable to convert the DB instance to Multi-AZ: The instance has 76 databases, but after conversion
it would only support 75.

If the point-in-time restore or snapshot restore fails, you see events and messages similar to the
following:
• Database instance put into incompatible-restore. The instance has 76 databases, but after
conversion it would only support 75.
• Some ports are reserved for Amazon RDS, and you can't use them when you create a DB instance.
• Client connections from IP addresses within the range 169.254.0.0/16 are not permitted. This is the
Automatic Private IP Addressing Range (APIPA), which is used for link-local addressing.
• SQL Server Standard Edition uses only a subset of the available processors if the DB instance has more
processors than the software limits (24 cores, 4 sockets, and 128 GB RAM). Examples of this are the
db.m5.24xlarge and db.r5.24xlarge instance classes.

For more information, see the table of scale limits under Editions and supported features of SQL
Server 2019 (15.x) in the Microsoft documentation.


• Amazon RDS for SQL Server doesn't support importing data into the msdb database.
• You can't rename databases on a DB instance in a SQL Server Multi-AZ deployment.
• Make sure that you use these guidelines when setting the following DB parameters on RDS for SQL
Server:
• max server memory (mb) >= 256 MB
• max worker threads >= (number of logical CPUs * 7)

For more information on setting DB parameters, see Working with parameter groups (p. 347).
• The maximum storage size for SQL Server DB instances is the following:
• General Purpose (SSD) storage – 16 TiB for all editions
• Provisioned IOPS storage – 16 TiB for all editions
• Magnetic storage – 1 TiB for all editions

If you have a scenario that requires a larger amount of storage, you can use sharding across multiple
DB instances to get around the limit. This approach requires data-dependent routing logic in
applications that connect to the sharded system. You can use an existing sharding framework, or you
can write custom code to enable sharding. If you use an existing framework, the framework can't
install any components on the same server as the DB instance.
• The minimum storage size for SQL Server DB instances is the following:
• General Purpose (SSD) storage – 20 GiB for Enterprise, Standard, Web, and Express Editions
• Provisioned IOPS storage – 20 GiB for Enterprise, Standard, Web, and Express Editions
• Magnetic storage – 20 GiB for Enterprise, Standard, Web, and Express Editions
• Amazon RDS doesn't support running these services on the same server as your RDS DB instance:
• Data Quality Services
• Master Data Services

To use these features, we recommend that you install SQL Server on an Amazon EC2 instance, or use
an on-premises SQL Server instance. In these cases, the EC2 or SQL Server instance acts as the Master
Data Services server for your SQL Server DB instance on Amazon RDS. You can install SQL Server on an
Amazon EC2 instance with Amazon EBS storage, pursuant to Microsoft licensing policies.
• Because of limitations in Microsoft SQL Server, restoring to a point in time before successfully running
DROP DATABASE might not reflect the state of that database at that point in time. For example,
the dropped database is typically restored to its state up to 5 minutes before the DROP DATABASE
command was issued. This type of restore means that you can't restore the transactions made
during those few minutes on your dropped database. To work around this, you can reissue the DROP
DATABASE command after the restore operation is completed. Dropping a database removes the
transaction logs for that database.
• For SQL Server, you create your databases after you create your DB instance. Database names follow
the usual SQL Server naming rules with the following differences:
• Database names can't start with rdsadmin.
• They can't start or end with a space or a tab.
• They can't contain any of the characters that create a new line.
• They can't contain a single quote (').

DB instance class support for Microsoft SQL Server


The computation and memory capacity of a DB instance is determined by its DB instance class. The
DB instance class you need depends on your processing power and memory requirements. For more
information, see DB instance classes (p. 11).

The following list of DB instance classes supported for Microsoft SQL Server is provided here for your
convenience. For the most current list, see the RDS console: https://console.aws.amazon.com/rds/.

Not all DB instance classes are available on all supported SQL Server minor versions. For example,
some newer DB instance classes such as db.r6i aren't available on older minor versions. You can use the
describe-orderable-db-instance-options AWS CLI command to find out which DB instance classes are
available for your SQL Server edition and version.
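
For example, the following sketch lists the instance classes that are orderable for a given engine and version. The engine version shown is one of the versions from the table later in this chapter, and results can repeat because they are returned per Availability Zone and storage type.

aws rds describe-orderable-db-instance-options \
    --engine sqlserver-ee \
    --engine-version 15.00.4316.3.v1 \
    --query "OrderableDBInstanceOptions[].DBInstanceClass" \
    --output text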

Enterprise Edition

• SQL Server 2019 support range: db.t3.xlarge–db.t3.2xlarge, db.r5.xlarge–db.r5.24xlarge, db.r5b.xlarge–db.r5b.24xlarge, db.r5d.xlarge–db.r5d.24xlarge, db.r6i.xlarge–db.r6i.32xlarge, db.m5.xlarge–db.m5.24xlarge, db.m5d.xlarge–db.m5d.24xlarge, db.m6i.xlarge–db.m6i.32xlarge, db.x1.16xlarge–db.x1.32xlarge, db.x1e.xlarge–db.x1e.32xlarge, db.z1d.xlarge–db.z1d.12xlarge
• SQL Server 2017 and 2016 support range: db.t3.xlarge–db.t3.2xlarge, db.r3.xlarge–db.r3.8xlarge, db.r4.xlarge–db.r4.16xlarge, db.r5.xlarge–db.r5.24xlarge, db.r5b.xlarge–db.r5b.24xlarge, db.r5d.xlarge–db.r5d.24xlarge, db.r6i.xlarge–db.r6i.32xlarge, db.m4.xlarge–db.m4.16xlarge, db.m5.xlarge–db.m5.24xlarge, db.m5d.xlarge–db.m5d.24xlarge, db.m6i.xlarge–db.m6i.32xlarge, db.x1.16xlarge–db.x1.32xlarge, db.x1e.xlarge–db.x1e.32xlarge, db.z1d.xlarge–db.z1d.12xlarge
• SQL Server 2014 support range: db.t3.xlarge–db.t3.2xlarge, db.r3.xlarge–db.r3.8xlarge, db.r4.xlarge–db.r4.8xlarge, db.r5.xlarge–db.r5.24xlarge, db.r5b.xlarge–db.r5b.24xlarge, db.r5d.xlarge–db.r5d.24xlarge, db.r6i.xlarge–db.r6i.32xlarge, db.m4.xlarge–db.m4.10xlarge, db.m5.xlarge–db.m5.24xlarge, db.m5d.xlarge–db.m5d.24xlarge, db.m6i.xlarge–db.m6i.32xlarge, db.x1.16xlarge–db.x1.32xlarge

Standard Edition

• SQL Server 2019 support range: db.t3.xlarge–db.t3.2xlarge, db.r5.large–db.r5.24xlarge, db.r5b.large–db.r5b.24xlarge, db.r5d.large–db.r5d.24xlarge, db.r6i.large–db.r6i.8xlarge, db.m5.large–db.m5.24xlarge, db.m5d.large–db.m5d.24xlarge, db.m6i.large–db.m6i.8xlarge, db.x1.16xlarge–db.x1.32xlarge, db.x1e.xlarge–db.x1e.32xlarge, db.z1d.large–db.z1d.12xlarge
• SQL Server 2017 and 2016 support range: db.t3.xlarge–db.t3.2xlarge, db.r4.large–db.r4.16xlarge, db.r5.large–db.r5.24xlarge, db.r5b.large–db.r5b.24xlarge, db.r5d.large–db.r5d.24xlarge, db.r6i.large–db.r6i.8xlarge, db.m4.large–db.m4.16xlarge, db.m5.large–db.m5.24xlarge, db.m5d.large–db.m5d.24xlarge, db.m6i.large–db.m6i.8xlarge, db.x1.16xlarge–db.x1.32xlarge, db.x1e.xlarge–db.x1e.32xlarge, db.z1d.large–db.z1d.12xlarge
• SQL Server 2014 support range: db.t3.xlarge–db.t3.2xlarge, db.r3.large–db.r3.8xlarge, db.r4.large–db.r4.8xlarge, db.r5.large–db.r5.24xlarge, db.r5b.large–db.r5b.24xlarge, db.r5d.large–db.r5d.24xlarge, db.r6i.large–db.r6i.8xlarge, db.m3.medium–db.m3.2xlarge, db.m4.large–db.m4.10xlarge, db.m5.large–db.m5.24xlarge, db.m5d.large–db.m5d.24xlarge, db.m6i.large–db.m6i.8xlarge, db.x1.16xlarge–db.x1.32xlarge

Web Edition

• SQL Server 2019 support range: db.t3.small–db.t3.2xlarge, db.r5.large–db.r5.4xlarge, db.r5b.large–db.r5b.4xlarge, db.r5d.large–db.r5d.4xlarge, db.r6i.large–db.r6i.4xlarge, db.m5.large–db.m5.4xlarge, db.m5d.large–db.m5d.4xlarge, db.m6i.large–db.m6i.4xlarge, db.z1d.large–db.z1d.3xlarge
• SQL Server 2017 and 2016 support range: db.t2.small–db.t2.medium, db.t3.small–db.t3.2xlarge, db.r4.large–db.r4.2xlarge, db.r5.large–db.r5.4xlarge, db.r5b.large–db.r5b.4xlarge, db.r5d.large–db.r5d.4xlarge, db.r6i.large–db.r6i.4xlarge, db.m4.large–db.m4.4xlarge, db.m5.large–db.m5.4xlarge, db.m5d.large–db.m5d.4xlarge, db.m6i.large–db.m6i.4xlarge, db.z1d.large–db.z1d.3xlarge
• SQL Server 2014 support range: db.t2.small–db.t2.medium, db.t3.small–db.t3.2xlarge, db.r3.large–db.r3.2xlarge, db.r4.large–db.r4.2xlarge, db.r5.large–db.r5.4xlarge, db.r5b.large–db.r5b.4xlarge, db.r5d.large–db.r5d.4xlarge, db.r6i.large–db.r6i.4xlarge, db.m3.medium–db.m3.2xlarge, db.m4.large–db.m4.4xlarge, db.m5.large–db.m5.4xlarge, db.m5d.large–db.m5d.4xlarge, db.m6i.large–db.m6i.4xlarge

Express Edition

• SQL Server 2019 support range: db.t3.small–db.t3.xlarge
• SQL Server 2017 and 2016 support range: db.t2.micro–db.t2.medium, db.t3.small–db.t3.xlarge
• SQL Server 2014 support range: db.t2.micro–db.t2.medium, db.t3.small–db.t3.xlarge

Microsoft SQL Server security


The Microsoft SQL Server database engine uses role-based security. The master user name that you
specify when you create a DB instance is a SQL Server Authentication login that is a member of the
processadmin, public, and setupadmin fixed server roles.

Any user who creates a database is assigned to the db_owner role for that database and has all
database-level permissions except for those that are used for backups. Amazon RDS manages backups
for you.

The following server-level roles aren't available in Amazon RDS for SQL Server:

• bulkadmin
• dbcreator
• diskadmin
• securityadmin
• serveradmin
• sysadmin


The following server-level permissions aren't available on RDS for SQL Server DB instances:

• ALTER ANY DATABASE


• ALTER ANY EVENT NOTIFICATION
• ALTER RESOURCES
• ALTER SETTINGS (you can use the DB parameter group API operations to modify parameters; for more
information, see Working with parameter groups (p. 347))
• AUTHENTICATE SERVER
• CONTROL_SERVER
• CREATE DDL EVENT NOTIFICATION
• CREATE ENDPOINT
• CREATE SERVER ROLE
• CREATE TRACE EVENT NOTIFICATION
• DROP ANY DATABASE
• EXTERNAL ACCESS ASSEMBLY
• SHUTDOWN (You can use the RDS reboot option instead)
• UNSAFE ASSEMBLY
• ALTER ANY AVAILABILITY GROUP
• CREATE ANY AVAILABILITY GROUP

Compliance program support for Microsoft SQL


Server DB instances
AWS Services in scope have been fully assessed by a third-party auditor and result in a certification,
attestation of compliance, or Authority to Operate (ATO). For more information, see AWS services in
scope by compliance program.

HIPAA support for Microsoft SQL Server DB instances


You can use Amazon RDS for Microsoft SQL Server databases to build HIPAA-compliant applications. You
can store healthcare-related information, including protected health information (PHI), under a Business
Associate Agreement (BAA) with AWS. For more information, see HIPAA compliance.

Amazon RDS for SQL Server supports HIPAA for the following versions and editions:

• SQL Server 2019 Enterprise, Standard, and Web Editions


• SQL Server 2017 Enterprise, Standard, and Web Editions
• SQL Server 2016 Enterprise, Standard, and Web Editions
• SQL Server 2014 Enterprise, Standard, and Web Editions

To enable HIPAA support on your DB instance, set up the following three components.

Component: Auditing

To set up auditing, set the parameter rds.sqlserver_audit to the value fedramp_hipaa. If your DB instance is not already using a custom DB parameter group, you must create a custom parameter group and attach it to your DB instance before you can modify the rds.sqlserver_audit parameter. For more information, see Working with parameter groups (p. 347).

Component: Transport encryption

To set up transport encryption, force all connections to your DB instance to use Secure Sockets Layer (SSL). For more information, see Forcing connections to your DB instance to use SSL (p. 1456).

Component: Encryption at rest

To set up encryption at rest, you have two options:

1. If you're running SQL Server 2014–2019 Enterprise Edition or 2019 Standard Edition, you can use Transparent Data Encryption (TDE) to achieve encryption at rest. For more information, see Support for Transparent Data Encryption in SQL Server (p. 1528).
2. You can set up encryption at rest by using AWS Key Management Service (AWS KMS) encryption keys. For more information, see Encrypting Amazon RDS resources (p. 2586).

SSL support for Microsoft SQL Server DB instances


You can use SSL to encrypt connections between your applications and your Amazon RDS DB instances
running Microsoft SQL Server. You can also force all connections to your DB instance to use SSL. If you
force connections to use SSL, it happens transparently to the client, and the client doesn't have to do any
work to use SSL.

SSL is supported in all AWS Regions and for all supported SQL Server editions. For more information, see
Using SSL with a Microsoft SQL Server DB instance (p. 1456).

Microsoft SQL Server versions on Amazon RDS


You can specify any currently supported Microsoft SQL Server version when creating a new DB instance.
You can specify the Microsoft SQL Server major version (such as Microsoft SQL Server 14.00), and any
supported minor version for the specified major version. If no version is specified, Amazon RDS defaults
to a supported version, typically the most recent version. If a major version is specified but a minor
version is not, Amazon RDS defaults to a recent release of the major version you have specified.

The following table shows the supported versions for all editions and all AWS Regions, except where
noted. You can also use the describe-db-engine-versions AWS CLI command to see a list of
supported versions, as well as defaults for newly created DB instances.
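
For example, a sketch that lists the RDS engine versions for SQL Server Standard Edition, and then only the default version for new DB instances. The engine value differs by edition (sqlserver-ee, sqlserver-se, sqlserver-ex, sqlserver-web).

# All supported versions for SQL Server Standard Edition
aws rds describe-db-engine-versions --engine sqlserver-se \
    --query "DBEngineVersions[].EngineVersion" --output table

# Only the default version for newly created DB instances
aws rds describe-db-engine-versions --engine sqlserver-se --default-only \
    --query "DBEngineVersions[].EngineVersion" --output text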

SQL Server versions supported in RDS

Major version     Minor version                 RDS API EngineVersion and CLI engine-version

SQL Server 2019   15.00.4316.3 (CU21)           15.00.4316.3.v1
                  15.00.4236.7 (CU16)           15.00.4236.7.v1
                  15.00.4198.2 (CU15)           15.00.4198.2.v1
                  15.00.4153.1 (CU12)           15.00.4153.1.v1
                  15.00.4073.23 (CU8)           15.00.4073.23.v1
                  15.00.4043.16 (CU5)           15.00.4043.16.v1

SQL Server 2017   14.00.3460.9 (CU31)           14.00.3460.9.v1
                  14.00.3451.2 (CU30)           14.00.3451.2.v1
                  14.00.3421.10 (CU27)          14.00.3421.10.v1
                  14.00.3401.7 (CU25)           14.00.3401.7.v1
                  14.00.3381.3 (CU23)           14.00.3381.3.v1
                  14.00.3356.20 (CU22)          14.00.3356.20.v1
                  14.00.3294.2 (CU20)           14.00.3294.2.v1
                  14.00.3281.6 (CU19)           14.00.3281.6.v1

SQL Server 2016   13.00.6430.49 (GDR)           13.00.6430.49.v1
                  13.00.6419.1 (SP3 + Hotfix)   13.00.6419.1.v1
                  13.00.6300.2 (SP3)            13.00.6300.2.v1

SQL Server 2014   12.00.6444.4 (SP3 CU4 GDR)    12.00.6444.4.v1
                  12.00.6439.10 (SP3 CU4 SU)    12.00.6439.10.v1
                  12.00.6433.1 (SP3 CU4 SU)     12.00.6433.1.v1
                  12.00.6329.1 (SP3 CU4)        12.00.6329.1.v1
                  12.00.6293.0 (SP3 CU3)        12.00.6293.0.v1

Version management in Amazon RDS


Amazon RDS includes flexible version management that enables you to control when and how your DB
instance is patched or upgraded. This enables you to do the following for your DB engine:

• Maintain compatibility with database engine patch versions.


• Test new patch versions to verify that they work with your application before you deploy them in
production.
• Plan and perform version upgrades to meet your service level agreements and timing requirements.

Microsoft SQL Server engine patching in Amazon RDS


Amazon RDS periodically aggregates official Microsoft SQL Server database patches into a DB instance
engine version that's specific to Amazon RDS. For more information about the Microsoft SQL Server
patches in each engine version, see Version and feature support on Amazon RDS.

Currently, you manually perform all engine upgrades on your DB instance. For more information, see
Upgrading the Microsoft SQL Server DB engine (p. 1414).
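
For example, the following AWS CLI command upgrades a DB instance to a specific engine version from
the table earlier in this section. The instance identifier is a placeholder, and a major version upgrade also
requires the --allow-major-version-upgrade option.

aws rds modify-db-instance \
    --db-instance-identifier my-sqlserver-instance \
    --engine-version 15.00.4316.3.v1 \
    --apply-immediately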

Deprecation schedule for major engine versions of Microsoft SQL Server on Amazon RDS

The following table displays the planned schedule of deprecations for major engine versions of Microsoft
SQL Server.

Date                Information

July 9, 2024        Microsoft will stop critical patch updates for SQL Server 2014. For more
                    information, see the Microsoft documentation.

June 1, 2024        Amazon RDS plans to end support of Microsoft SQL Server 2014 on RDS for SQL
                    Server. Any remaining instances are scheduled to migrate to SQL Server 2016
                    (latest minor version available). For more information, see Announcement:
                    Amazon RDS for SQL Server ending support for SQL Server 2014 major versions.

                    To avoid an automatic upgrade from Microsoft SQL Server 2014, you can upgrade
                    at a time of your choosing. For more information, see Upgrading a DB instance
                    engine version (p. 429).

July 12, 2022       Microsoft will stop critical patch updates for SQL Server 2012. For more
                    information, see the Microsoft documentation.

June 1, 2022        Amazon RDS plans to end support of Microsoft SQL Server 2012 on RDS for SQL
                    Server. Any remaining instances are scheduled to migrate to SQL Server 2014
                    (latest minor version available). For more information, see Announcement:
                    Amazon RDS for SQL Server ending support for SQL Server 2012 major versions.

                    To avoid an automatic upgrade from Microsoft SQL Server 2012, you can upgrade
                    at a time of your choosing. For more information, see Upgrading a DB instance
                    engine version (p. 429).

September 1, 2021   Amazon RDS is starting to disable the creation of new RDS for SQL Server DB
                    instances using Microsoft SQL Server 2012. For more information, see
                    Announcement: Amazon RDS for SQL Server ending support for SQL Server 2012
                    major versions.

July 12, 2019       The Amazon RDS team deprecated support for Microsoft SQL Server 2008 R2 in
                    June 2019. Remaining instances of SQL Server 2008 R2 are migrating to SQL
                    Server 2012 (latest minor version available).

                    To avoid an automatic upgrade from Microsoft SQL Server 2008 R2, you can
                    upgrade at a time of your choosing. For more information, see Upgrading a DB
                    instance engine version (p. 429).

April 25, 2019      Before the end of April 2019, you will no longer be able to create new Amazon
                    RDS for SQL Server DB instances using SQL Server 2008 R2.

Microsoft SQL Server features on Amazon RDS


The supported SQL Server versions on Amazon RDS include the following features. In general, a
version also includes features from the previous versions, unless otherwise noted in the Microsoft
documentation.

Topics
• Microsoft SQL Server 2019 features (p. 1365)
• Microsoft SQL Server 2017 features (p. 1365)
• Microsoft SQL Server 2016 features (p. 1366)
• Microsoft SQL Server 2014 features (p. 1366)
• Microsoft SQL Server 2012 end of support on Amazon RDS (p. 1366)
• Microsoft SQL Server 2008 R2 end of support on Amazon RDS (p. 1366)

Microsoft SQL Server 2019 features


SQL Server 2019 includes many new features, such as the following:

• Accelerated database recovery (ADR) – Reduces crash recovery time after a restart or a long-running
transaction rollback.
• Intelligent Query Processing (IQP):
• Row mode memory grant feedback – Automatically corrects excessive memory grants that would
otherwise result in wasted memory and reduced concurrency.
• Batch mode on rowstore – Enables batch mode execution for analytic workloads without requiring
columnstore indexes.
• Table variable deferred compilation – Improves plan quality and overall performance for queries that
reference table variables.
• Intelligent performance:
• OPTIMIZE_FOR_SEQUENTIAL_KEY index option – Improves throughput for high-concurrency
inserts into indexes.
• Improved indirect checkpoint scalability – Helps databases with heavy DML workloads.
• Concurrent Page Free Space (PFS) updates – Enables handling as a shared latch rather than an
exclusive latch.
• Monitoring improvements:
• WAIT_ON_SYNC_STATISTICS_REFRESH wait type – Shows accumulated instance-level time spent
on synchronous statistics refresh operations.
• Database-scoped configurations – Include LIGHTWEIGHT_QUERY_PROFILING and
LAST_QUERY_PLAN_STATS.
• Dynamic management functions (DMFs) – Include sys.dm_exec_query_plan_stats and
sys.dm_db_page_info.
• Verbose truncation warnings – The data truncation error message defaults to include table and column
names and the truncated value.
• Resumable online index creation – In SQL Server 2017, only resumable online index rebuild is
supported.

For the full list of SQL Server 2019 features, see What's new in SQL Server 2019 (15.x) in the Microsoft
documentation.

For a list of unsupported features, see Features not supported and features with limited
support (p. 1367).

Microsoft SQL Server 2017 features


SQL Server 2017 includes many new features, such as the following:

• Adaptive query processing


• Automatic plan correction (an automatic tuning feature)
• GraphDB
• Resumable index rebuilds

For the full list of SQL Server 2017 features, see What's new in SQL Server 2017 in the Microsoft
documentation.

For a list of unsupported features, see Features not supported and features with limited
support (p. 1367).

Microsoft SQL Server 2016 features


Amazon RDS supports the following features of SQL Server 2016:

• Always Encrypted
• JSON Support
• Operational Analytics
• Query Store
• Temporal Tables

For the full list of SQL Server 2016 features, see What's new in SQL Server 2016 in the Microsoft
documentation.

Microsoft SQL Server 2014 features


In addition to supported features of SQL Server 2012, Amazon RDS supports the new query optimizer
available in SQL Server 2014, and also the delayed durability feature.

For a list of unsupported features, see Features not supported and features with limited
support (p. 1367).

SQL Server 2014 supports all the parameters from SQL Server 2012 and uses the same default values.
SQL Server 2014 includes one new parameter, backup checksum default. For more information, see
How to enable the CHECKSUM option if backup utilities do not expose the option in the Microsoft
documentation.

Microsoft SQL Server 2012 end of support on Amazon RDS

SQL Server 2012 has reached its end of support on Amazon RDS.

RDS is upgrading all existing DB instances that are still using SQL Server 2012 to the latest minor version
of SQL Server 2014. For more information, see Version management in Amazon RDS (p. 1363).

Microsoft SQL Server 2008 R2 end of support on Amazon RDS

SQL Server 2008 R2 has reached its end of support on Amazon RDS.

RDS is upgrading all existing DB instances that are still using SQL Server 2008 R2 to the latest minor
version of SQL Server 2012. For more information, see Version management in Amazon RDS (p. 1363).

Change data capture support for Microsoft SQL Server DB instances

Amazon RDS supports change data capture (CDC) for your DB instances running Microsoft SQL Server.
CDC captures changes that are made to the data in your tables, and stores metadata about each change
that you can access later. For more information, see Change data capture in the Microsoft
documentation.

Amazon RDS supports CDC for the following SQL Server editions and versions:

• Microsoft SQL Server Enterprise Edition (All versions)


• Microsoft SQL Server Standard Edition:
• 2019
• 2017
• 2016 version 13.00.4422.0 SP1 CU2 and later

To use CDC with your Amazon RDS DB instances, first enable or disable CDC at the database level by
using RDS-provided stored procedures. After that, any user that has the db_owner role for that database
can use the native Microsoft stored procedures to control CDC on that database. For more information,
see Using change data capture (p. 1614).

You can use CDC and AWS Database Migration Service to enable ongoing replication from SQL Server DB
instances.

Features not supported and features with limited support

The following Microsoft SQL Server features aren't supported on Amazon RDS:

• Backing up to Microsoft Azure Blob Storage


• Buffer pool extension
• Custom password policies
• Data Quality Services
• Database Log Shipping
• Database snapshots (Amazon RDS supports only DB instance snapshots)
• Extended stored procedures, including xp_cmdshell
• FILESTREAM support
• File tables
• Machine Learning and R Services (requires OS access to install it)
• Maintenance plans
• Performance Data Collector
• Policy-Based Management
• PolyBase
• Replication
• Resource Governor
• Server-level triggers
• Service Broker endpoints
• Stretch database
• TRUSTWORTHY database property (requires sysadmin role)
• T-SQL endpoints (all operations using CREATE ENDPOINT are unavailable)
• WCF Data Services

The following Microsoft SQL Server features have limited support on Amazon RDS:

• Distributed queries/linked servers. For more information, see Implement linked servers with Amazon
RDS for Microsoft SQL Server.
• Common Language Runtime (CLR) integration. On RDS for SQL Server 2016 and lower versions, CLR is
supported in SAFE mode and using assembly bits only. CLR isn't supported on RDS for SQL Server 2017
and higher versions. For more information, see Common Language Runtime (CLR) Integration in the
Microsoft documentation.

Multi-AZ deployments using Microsoft SQL Server Database Mirroring or Always On availability groups

Amazon RDS supports Multi-AZ deployments for DB instances running Microsoft SQL Server by using
SQL Server Database Mirroring (DBM) or Always On Availability Groups (AGs). Multi-AZ deployments
provide increased availability, data durability, and fault tolerance for DB instances. In the event of
planned database maintenance or unplanned service disruption, Amazon RDS automatically fails
over to the up-to-date secondary replica so database operations can resume quickly without manual
intervention. The primary and secondary instances use the same endpoint, whose physical network
address transitions to the passive secondary replica as part of the failover process. You don't have to
reconfigure your application when a failover occurs.

Amazon RDS manages failover by actively monitoring your Multi-AZ deployment and initiating a failover
when a problem with your primary occurs. Failover doesn't occur unless the standby and primary are
fully in sync. Amazon RDS actively maintains your Multi-AZ deployment by automatically repairing
unhealthy DB instances and re-establishing synchronous replication. You don't have to manage anything.
Amazon RDS handles the primary, the witness, and the standby instance for you. When you set up SQL
Server Multi-AZ, RDS configures passive secondary instances for all of the databases on the instance.

For more information, see Multi-AZ deployments for Amazon RDS for Microsoft SQL Server (p. 1450).
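
For example, the following AWS CLI command converts an existing Single-AZ DB instance to a Multi-AZ
deployment. The instance identifier is a placeholder; see the Multi-AZ documentation referenced above
for prerequisites and failover behavior.

aws rds modify-db-instance \
    --db-instance-identifier my-sqlserver-instance \
    --multi-az \
    --apply-immediately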

Using Transparent Data Encryption to encrypt data at rest

Amazon RDS supports Microsoft SQL Server Transparent Data Encryption (TDE), which transparently
encrypts stored data. Amazon RDS uses option groups to enable and configure these features. For more
information about the TDE option, see Support for Transparent Data Encryption in SQL Server (p. 1528).

Functions and stored procedures for Amazon RDS for Microsoft SQL Server

Following, you can find a list of the Amazon RDS functions and stored procedures that help automate
SQL Server tasks.

Administrative tasks
• rds_drop_database – Dropping a Microsoft SQL Server database (p. 1613)
• rds_failover_time – Determining the last failover time (p. 1612)
• rds_modify_db_name – Renaming a Microsoft SQL Server database in a Multi-AZ deployment (p. 1613)
• rds_read_error_log – Viewing error and agent logs (p. 1620)
• rds_set_configuration – This operation is used to set various DB instance configurations: Change data
  capture for Multi-AZ instances (p. 1616), Setting the retention period for trace and dump files (p. 1621),
  and Compressing backup files (p. 1435)
• rds_set_database_online – Transitioning a Microsoft SQL Server database from OFFLINE to ONLINE (p. 1614)
• rds_set_system_database_sync_objects, rds_fn_get_system_database_sync_objects,
  rds_fn_server_object_last_sync_time – Turning on SQL Server Agent job replication (p. 1617)
• rds_show_configuration – To see the values that are set using rds_set_configuration, see Change data
  capture for Multi-AZ instances (p. 1616) and Setting the retention period for trace and dump files (p. 1621)
• rds_shrink_tempdbfile – Shrinking the tempdb database (p. 1603)

Change data capture (CDC)
• rds_cdc_disable_db – Disabling CDC (p. 1614)
• rds_cdc_enable_db – Enabling CDC (p. 1614)

Database Mail
• rds_fn_sysmail_allitems – Viewing messages, logs, and attachments (p. 1486)
• rds_fn_sysmail_event_log – Viewing messages, logs, and attachments (p. 1486)
• rds_fn_sysmail_mailattachments – Viewing messages, logs, and attachments (p. 1486)
• rds_sysmail_control – This operation is used in Starting the mail queue (p. 1487) and Stopping the mail
  queue (p. 1487)
• rds_sysmail_delete_mailitems_sp – Deleting messages (p. 1486)

Native backup and restore
• rds_backup_database – Backing up a database (p. 1425)
• rds_cancel_task – Canceling a task (p. 1432)
• rds_finish_restore – Finishing a database restore (p. 1431)
• rds_restore_database – Restoring a database (p. 1428)
• rds_restore_log – Restoring a log (p. 1430)

Amazon S3 file transfer
• rds_delete_from_filesystem – Deleting files on the RDS DB instance (p. 1473)
• rds_download_from_s3 – Downloading files from an Amazon S3 bucket to a SQL Server DB instance (p. 1471)
• rds_gather_file_details – Listing files on the RDS DB instance (p. 1473)
• rds_upload_to_s3 – Uploading files from a SQL Server DB instance to an Amazon S3 bucket (p. 1472)

Microsoft Distributed Transaction Coordinator (MSDTC)
• rds_msdtc_transaction_tracing – Using transaction tracing (p. 1598)

SQL Server Audit
• rds_fn_get_audit_file – Viewing audit logs (p. 1538)

Transparent Data Encryption
• rds_backup_tde_certificate, rds_drop_tde_certificate, rds_restore_tde_certificate,
  rds_fn_list_user_tde_certificates – Support for Transparent Data Encryption in SQL Server (p. 1528)

Microsoft Business Intelligence (MSBI)
• rds_msbi_task – This operation is used with SQL Server Analysis Services (SSAS): Deploying SSAS projects
  on Amazon RDS (p. 1549), Adding a domain user as a database administrator (p. 1552), Backing up an SSAS
  database (p. 1556), and Restoring an SSAS database (p. 1556). It is also used with SQL Server Integration
  Services (SSIS): Administrative permissions on SSISDB (p. 1569) and Deploying an SSIS project (p. 1570).
  It is also used with SQL Server Reporting Services (SSRS): Granting access to domain users (p. 1584) and
  Revoking system-level permissions (p. 1586)
• rds_fn_task_status – This operation shows the status of MSBI tasks: SSAS: Monitoring the status of a
  deployment task (p. 1549), SSIS: Monitoring the status of a deployment task (p. 1571), and SSRS:
  Monitoring the status of a task (p. 1587)

SSIS
• rds_drop_ssis_database – Dropping the SSISDB database (p. 1576)
• rds_sqlagent_proxy – Creating an SSIS proxy (p. 1573)

SSRS
• rds_drop_ssrs_databases – Deleting the SSRS databases (p. 1589)

Local time zone for Microsoft SQL Server DB instances

The time zone of an Amazon RDS DB instance running Microsoft SQL Server is set by default. The current
default is Coordinated Universal Time (UTC). You can set the time zone of your DB instance to a local
time zone instead, to match the time zone of your applications.

You set the time zone when you first create your DB instance. You can create your DB instance by using
the AWS Management Console, the Amazon RDS API CreateDBInstance action, or the AWS CLI create-db-
instance command.
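
As a minimal sketch, the following AWS CLI command creates a SQL Server DB instance with its local time
zone set to one of the supported values listed later in this section. All identifiers and credentials shown
are placeholders.

aws rds create-db-instance \
    --db-instance-identifier my-sqlserver-instance \
    --engine sqlserver-se \
    --db-instance-class db.m5.large \
    --allocated-storage 100 \
    --license-model license-included \
    --master-username my-master-username \
    --master-user-password my-master-password \
    --timezone "US Eastern Standard Time"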

If your DB instance is part of a Multi-AZ deployment (using SQL Server DBM or AGs), then when you
fail over, your time zone remains the local time zone that you set. For more information, see Multi-AZ
deployments using Microsoft SQL Server Database Mirroring or Always On availability groups (p. 1368).

When you request a point-in-time restore, you specify the time to restore to. The time is shown in your
local time zone. For more information, see Restoring a DB instance to a specified time (p. 660).

The following are limitations to setting the local time zone on your DB instance:

• You can't modify the time zone of an existing SQL Server DB instance.
• You can't restore a snapshot from a DB instance in one time zone to a DB instance in a different time
zone.
• We strongly recommend that you don't restore a backup file from one time zone to a different time
zone. If you restore a backup file from one time zone to a different time zone, you must audit your
queries and applications for the effects of the time zone change. For more information, see Importing
and exporting SQL Server databases using native backup and restore (p. 1419).

Supported time zones


You can set your local time zone to one of the values listed in the following table.

Time zones supported for Amazon RDS on SQL Server

Time zone    Standard time offset    Description    Notes

Afghanistan Standard Time    (UTC+04:30)    Kabul    This time zone doesn't observe daylight saving time.
Alaskan Standard Time    (UTC–09:00)    Alaska
Aleutian Standard Time    (UTC–10:00)    Aleutian Islands
Altai Standard Time    (UTC+07:00)    Barnaul, Gorno-Altaysk
Arab Standard Time    (UTC+03:00)    Kuwait, Riyadh    This time zone doesn't observe daylight saving time.
Arabian Standard Time    (UTC+04:00)    Abu Dhabi, Muscat
Arabic Standard Time    (UTC+03:00)    Baghdad    This time zone doesn't observe daylight saving time.
Argentina Standard Time    (UTC–03:00)    City of Buenos Aires    This time zone doesn't observe daylight saving time.
Astrakhan Standard Time    (UTC+04:00)    Astrakhan, Ulyanovsk
Atlantic Standard Time    (UTC–04:00)    Atlantic Time (Canada)
AUS Central Standard Time    (UTC+09:30)    Darwin    This time zone doesn't observe daylight saving time.
Aus Central W. Standard Time    (UTC+08:45)    Eucla
AUS Eastern Standard Time    (UTC+10:00)    Canberra, Melbourne, Sydney
Azerbaijan Standard Time    (UTC+04:00)    Baku
Azores Standard Time    (UTC–01:00)    Azores
Bahia Standard Time    (UTC–03:00)    Salvador
Bangladesh Standard Time    (UTC+06:00)    Dhaka    This time zone doesn't observe daylight saving time.
Belarus Standard Time    (UTC+03:00)    Minsk    This time zone doesn't observe daylight saving time.
Bougainville Standard Time    (UTC+11:00)    Bougainville Island
Canada Central Standard Time    (UTC–06:00)    Saskatchewan    This time zone doesn't observe daylight saving time.
Cape Verde Standard Time    (UTC–01:00)    Cabo Verde Is.    This time zone doesn't observe daylight saving time.
Caucasus Standard Time    (UTC+04:00)    Yerevan
Cen. Australia Standard Time    (UTC+09:30)    Adelaide

Central America Standard Time    (UTC–06:00)    Central America    This time zone doesn't observe daylight saving time.
Central Asia Standard Time    (UTC+06:00)    Astana    This time zone doesn't observe daylight saving time.
Central Brazilian Standard Time    (UTC–04:00)    Cuiaba
Central Europe Standard Time    (UTC+01:00)    Belgrade, Bratislava, Budapest, Ljubljana, Prague
Central European Standard Time    (UTC+01:00)    Sarajevo, Skopje, Warsaw, Zagreb
Central Pacific Standard Time    (UTC+11:00)    Solomon Islands, New Caledonia    This time zone doesn't observe daylight saving time.
Central Standard Time    (UTC–06:00)    Central Time (US and Canada)
Central Standard Time (Mexico)    (UTC–06:00)    Guadalajara, Mexico City, Monterrey
Chatham Islands Standard Time    (UTC+12:45)    Chatham Islands
China Standard Time    (UTC+08:00)    Beijing, Chongqing, Hong Kong, Urumqi    This time zone doesn't observe daylight saving time.
Cuba Standard Time    (UTC–05:00)    Havana
Dateline Standard Time    (UTC–12:00)    International Date Line West    This time zone doesn't observe daylight saving time.
E. Africa Standard Time    (UTC+03:00)    Nairobi    This time zone doesn't observe daylight saving time.
E. Australia Standard Time    (UTC+10:00)    Brisbane    This time zone doesn't observe daylight saving time.
E. Europe Standard Time    (UTC+02:00)    Chisinau
E. South America Standard Time    (UTC–03:00)    Brasilia
Easter Island Standard Time    (UTC–06:00)    Easter Island
Eastern Standard Time    (UTC–05:00)    Eastern Time (US and Canada)
Eastern Standard Time (Mexico)    (UTC–05:00)    Chetumal

Egypt Standard Time    (UTC+02:00)    Cairo
Ekaterinburg Standard Time    (UTC+05:00)    Ekaterinburg
Fiji Standard Time    (UTC+12:00)    Fiji
FLE Standard Time    (UTC+02:00)    Helsinki, Kyiv, Riga, Sofia, Tallinn, Vilnius
Georgian Standard Time    (UTC+04:00)    Tbilisi    This time zone doesn't observe daylight saving time.
GMT Standard Time    (UTC)    Dublin, Edinburgh, Lisbon, London    This time zone isn't the same as Greenwich Mean Time. This time zone does observe daylight saving time.
Greenland Standard Time    (UTC–03:00)    Greenland
Greenwich Standard Time    (UTC)    Monrovia, Reykjavik    This time zone doesn't observe daylight saving time.
GTB Standard Time    (UTC+02:00)    Athens, Bucharest
Haiti Standard Time    (UTC–05:00)    Haiti
Hawaiian Standard Time    (UTC–10:00)    Hawaii
India Standard Time    (UTC+05:30)    Chennai, Kolkata, Mumbai, New Delhi    This time zone doesn't observe daylight saving time.
Iran Standard Time    (UTC+03:30)    Tehran
Israel Standard Time    (UTC+02:00)    Jerusalem
Jordan Standard Time    (UTC+02:00)    Amman
Kaliningrad Standard Time    (UTC+02:00)    Kaliningrad
Kamchatka Standard Time    (UTC+12:00)    Petropavlovsk-Kamchatsky – Old
Korea Standard Time    (UTC+09:00)    Seoul    This time zone doesn't observe daylight saving time.
Libya Standard Time    (UTC+02:00)    Tripoli
Line Islands Standard Time    (UTC+14:00)    Kiritimati Island
Lord Howe Standard Time    (UTC+10:30)    Lord Howe Island

Magadan Standard Time    (UTC+11:00)    Magadan    This time zone doesn't observe daylight saving time.
Magallanes Standard Time    (UTC–03:00)    Punta Arenas
Marquesas Standard Time    (UTC–09:30)    Marquesas Islands
Mauritius Standard Time    (UTC+04:00)    Port Louis    This time zone doesn't observe daylight saving time.
Middle East Standard Time    (UTC+02:00)    Beirut
Montevideo Standard Time    (UTC–03:00)    Montevideo
Morocco Standard Time    (UTC+01:00)    Casablanca
Mountain Standard Time    (UTC–07:00)    Mountain Time (US and Canada)
Mountain Standard Time (Mexico)    (UTC–07:00)    Chihuahua, La Paz, Mazatlan
Myanmar Standard Time    (UTC+06:30)    Yangon (Rangoon)    This time zone doesn't observe daylight saving time.
N. Central Asia Standard Time    (UTC+07:00)    Novosibirsk
Namibia Standard Time    (UTC+02:00)    Windhoek
Nepal Standard Time    (UTC+05:45)    Kathmandu    This time zone doesn't observe daylight saving time.
New Zealand Standard Time    (UTC+12:00)    Auckland, Wellington
Newfoundland Standard Time    (UTC–03:30)    Newfoundland
Norfolk Standard Time    (UTC+11:00)    Norfolk Island
North Asia East Standard Time    (UTC+08:00)    Irkutsk
North Asia Standard Time    (UTC+07:00)    Krasnoyarsk
North Korea Standard Time    (UTC+09:00)    Pyongyang
Omsk Standard Time    (UTC+06:00)    Omsk
Pacific SA Standard Time    (UTC–03:00)    Santiago
Pacific Standard Time    (UTC–08:00)    Pacific Time (US and Canada)
Pacific Standard Time (Mexico)    (UTC–08:00)    Baja California

Pakistan Standard Time    (UTC+05:00)    Islamabad, Karachi    This time zone doesn't observe daylight saving time.
Paraguay Standard Time    (UTC–04:00)    Asuncion
Romance Standard Time    (UTC+01:00)    Brussels, Copenhagen, Madrid, Paris
Russia Time Zone 10    (UTC+11:00)    Chokurdakh
Russia Time Zone 11    (UTC+12:00)    Anadyr, Petropavlovsk-Kamchatsky
Russia Time Zone 3    (UTC+04:00)    Izhevsk, Samara
Russian Standard Time    (UTC+03:00)    Moscow, St. Petersburg, Volgograd    This time zone doesn't observe daylight saving time.
SA Eastern Standard Time    (UTC–03:00)    Cayenne, Fortaleza    This time zone doesn't observe daylight saving time.
SA Pacific Standard Time    (UTC–05:00)    Bogota, Lima, Quito, Rio Branco    This time zone doesn't observe daylight saving time.
SA Western Standard Time    (UTC–04:00)    Georgetown, La Paz, Manaus, San Juan    This time zone doesn't observe daylight saving time.
Saint Pierre Standard Time    (UTC–03:00)    Saint Pierre and Miquelon
Sakhalin Standard Time    (UTC+11:00)    Sakhalin
Samoa Standard Time    (UTC+13:00)    Samoa
Sao Tome Standard Time    (UTC+01:00)    Sao Tome
Saratov Standard Time    (UTC+04:00)    Saratov
SE Asia Standard Time    (UTC+07:00)    Bangkok, Hanoi, Jakarta    This time zone doesn't observe daylight saving time.
Singapore Standard Time    (UTC+08:00)    Kuala Lumpur, Singapore    This time zone doesn't observe daylight saving time.
South Africa Standard Time    (UTC+02:00)    Harare, Pretoria    This time zone doesn't observe daylight saving time.

Sri Lanka Standard Time    (UTC+05:30)    Sri Jayawardenepura    This time zone doesn't observe daylight saving time.
Sudan Standard Time    (UTC+02:00)    Khartoum
Syria Standard Time    (UTC+02:00)    Damascus
Taipei Standard Time    (UTC+08:00)    Taipei    This time zone doesn't observe daylight saving time.
Tasmania Standard Time    (UTC+10:00)    Hobart
Tocantins Standard Time    (UTC–03:00)    Araguaina
Tokyo Standard Time    (UTC+09:00)    Osaka, Sapporo, Tokyo    This time zone doesn't observe daylight saving time.
Tomsk Standard Time    (UTC+07:00)    Tomsk
Tonga Standard Time    (UTC+13:00)    Nuku'alofa    This time zone doesn't observe daylight saving time.
Transbaikal Standard Time    (UTC+09:00)    Chita
Turkey Standard Time    (UTC+03:00)    Istanbul
Turks And Caicos Standard Time    (UTC–05:00)    Turks and Caicos
Ulaanbaatar Standard Time    (UTC+08:00)    Ulaanbaatar    This time zone doesn't observe daylight saving time.
US Eastern Standard Time    (UTC–05:00)    Indiana (East)
US Mountain Standard Time    (UTC–07:00)    Arizona    This time zone doesn't observe daylight saving time.
UTC    UTC    Coordinated Universal Time    This time zone doesn't observe daylight saving time.
UTC–02    (UTC–02:00)    Coordinated Universal Time–02    This time zone doesn't observe daylight saving time.
UTC–08    (UTC–08:00)    Coordinated Universal Time–08
UTC–09    (UTC–09:00)    Coordinated Universal Time–09

UTC–11    (UTC–11:00)    Coordinated Universal Time–11    This time zone doesn't observe daylight saving time.
UTC+12    (UTC+12:00)    Coordinated Universal Time+12    This time zone doesn't observe daylight saving time.
UTC+13    (UTC+13:00)    Coordinated Universal Time+13
Venezuela Standard Time    (UTC–04:00)    Caracas    This time zone doesn't observe daylight saving time.
Vladivostok Standard Time    (UTC+10:00)    Vladivostok
Volgograd Standard Time    (UTC+04:00)    Volgograd
W. Australia Standard Time    (UTC+08:00)    Perth    This time zone doesn't observe daylight saving time.
W. Central Africa Standard Time    (UTC+01:00)    West Central Africa    This time zone doesn't observe daylight saving time.
W. Europe Standard Time    (UTC+01:00)    Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna
W. Mongolia Standard Time    (UTC+07:00)    Hovd
West Asia Standard Time    (UTC+05:00)    Ashgabat, Tashkent    This time zone doesn't observe daylight saving time.
West Bank Standard Time    (UTC+02:00)    Gaza, Hebron
West Pacific Standard Time    (UTC+10:00)    Guam, Port Moresby    This time zone doesn't observe daylight saving time.
Yakutsk Standard Time    (UTC+09:00)    Yakutsk

Licensing Microsoft SQL Server on Amazon RDS


When you set up an Amazon RDS DB instance for Microsoft SQL Server, the software license is included.

This means that you don't need to purchase SQL Server licenses separately. AWS holds the license for the
SQL Server database software. Amazon RDS pricing includes the software license, underlying hardware
resources, and Amazon RDS management capabilities.

Amazon RDS supports the following Microsoft SQL Server editions:

• Enterprise
• Standard
• Web
• Express

Note
Licensing for SQL Server Web Edition supports only public and internet-accessible webpages,
websites, web applications, and web services. This level of support is required for compliance
with Microsoft's usage rights. For more information, see AWS service terms.

Amazon RDS supports Multi-AZ deployments for DB instances running Microsoft SQL Server by using
SQL Server Database Mirroring (DBM) or Always On Availability Groups (AGs). There are no additional
licensing requirements for Multi-AZ deployments. For more information, see Multi-AZ deployments for
Amazon RDS for Microsoft SQL Server (p. 1450).

Restoring license-terminated DB instances


Amazon RDS takes snapshots of license-terminated DB instances. If your instance is terminated for
licensing issues, you can restore it from the snapshot to a new DB instance. New DB instances have a
license included.

For more information, see Restoring license-terminated DB instances (p. 1614).

Development and test


Because of licensing requirements, we can't offer SQL Server Developer Edition on Amazon RDS. You
can use Express Edition for many development, testing, and other nonproduction needs. However, if
you need the full feature capabilities of an enterprise-level installation of SQL Server for development,
you can download and install SQL Server Developer Edition (and other MSDN products) on Amazon
EC2. Dedicated infrastructure isn't required for Developer Edition. By using your own host, you also gain
access to other programmability features that are not accessible on Amazon RDS. For more information
on the difference between SQL Server editions, see Editions and supported features of SQL Server 2017
in the Microsoft documentation.

Connecting to a DB instance running the Microsoft SQL Server database engine

After Amazon RDS provisions your DB instance, you can use any standard SQL client application to
connect to the DB instance. In this topic, you connect to your DB instance by using either Microsoft SQL
Server Management Studio (SSMS) or SQL Workbench/J.

For an example that walks you through the process of creating and connecting to a sample DB instance,
see Creating and connecting to a Microsoft SQL Server DB instance (p. 194).

Before you connect


Before you can connect to your DB instance, it has to be available and accessible.

1. Make sure that its status is available. You can check this on the details page for your instance in the
AWS Management Console or by using the describe-db-instances AWS CLI command (see the example
following this list).

2. Make sure that it is accessible to your source. Depending on your scenario, it may not need to be
publicly accessible. For more information, see Amazon VPC VPCs and Amazon RDS (p. 2688).
3. Make sure that the inbound rules of your VPC security group allow access to your DB instance. For
more information, see Can't connect to Amazon RDS DB instance (p. 2727).
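
For example, the following AWS CLI command returns just the status of a DB instance, as mentioned in
step 1. The instance identifier is a placeholder.

aws rds describe-db-instances \
    --db-instance-identifier my-sqlserver-instance \
    --query "DBInstances[0].DBInstanceStatus" \
    --output text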

Finding the DB instance endpoint and port number


You need both the endpoint and the port number to connect to the DB instance.

To find the endpoint and port

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.

2. In the upper-right corner of the Amazon RDS console, choose the AWS Region of your DB instance.
3. Find the Domain Name System (DNS) name (endpoint) and port number for your DB instance:

a. Open the RDS console and choose Databases to display a list of your DB instances.
b. Choose the SQL Server DB instance name to display its details.
c. On the Connectivity & security tab, copy the endpoint.

d. Note the port number.
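
You can also retrieve the endpoint and port with the AWS CLI instead of the console. The following
example uses the instance identifier database-2 from the example later in this section; substitute your
own identifier.

aws rds describe-db-instances \
    --db-instance-identifier database-2 \
    --query "DBInstances[0].Endpoint.[Address,Port]" \
    --output text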

Connecting to your DB instance with Microsoft SQL Server Management Studio

In this procedure, you connect to your sample DB instance by using Microsoft SQL Server Management
Studio (SSMS). To download a standalone version of this utility, see Download SQL Server Management
Studio (SSMS) in the Microsoft documentation.

To connect to a DB instance using SSMS

1. Start SQL Server Management Studio.

The Connect to Server dialog box appears.

2. Provide the information for your DB instance:

a. For Server type, choose Database Engine.


b. For Server name, enter the DNS name (endpoint) and port number of your DB instance,
separated by a comma.
Important
Change the colon between the endpoint and port number to a comma.

Your server name should look like the following example.

database-2.cg034itsfake.us-east-1.rds.amazonaws.com,1433

c. For Authentication, choose SQL Server Authentication.


d. For Login, enter the master user name for your DB instance.
e. For Password, enter the password for your DB instance.
3. Choose Connect.

After a few moments, SSMS connects to your DB instance.

If you can't connect to your DB instance, see Security group considerations (p. 1385) and
Troubleshooting connections to your SQL Server DB instance (p. 1385).
4. Your SQL Server DB instance comes with SQL Server's standard built-in system databases (master,
model, msdb, and tempdb). To explore the system databases, do the following:

a. In SSMS, on the View menu, choose Object Explorer.


b. Expand your DB instance, expand Databases, and then expand System Databases.

5. Your SQL Server DB instance also comes with a database named rdsadmin. Amazon RDS uses this
database to store the objects that it uses to manage your database. The rdsadmin database also
includes stored procedures that you can run to perform advanced tasks. For more information, see
Common DBA tasks for Microsoft SQL Server (p. 1602).
6. You can now start creating your own databases and running queries against your DB instance and
databases as usual. To run a test query against your DB instance, do the following:

a. In SSMS, on the File menu point to New and then choose Query with Current Connection.
b. Enter the following SQL query.

select @@VERSION

c. Run the query. SSMS returns the SQL Server version of your Amazon RDS DB instance.

Connecting to your DB instance with SQL Workbench/J

This example shows how to connect to a DB instance running the Microsoft SQL Server database engine
by using the SQL Workbench/J database tool. To download SQL Workbench/J, see SQL Workbench/J.

SQL Workbench/J uses JDBC to connect to your DB instance. You also need the JDBC driver for SQL
Server. To download this driver, see Microsoft JDBC drivers 4.1 (preview) and 4.0 for SQL Server.

To connect to a DB instance using SQL Workbench/J

1. Open SQL Workbench/J. The Select Connection Profile dialog box appears, as shown following.

2. In the first box at the top of the dialog box, enter a name for the profile.
3. For Driver, choose SQL JDBC 4.0.
4. For URL, enter jdbc:sqlserver://, then enter the endpoint of your DB instance. For example, the
URL value might be the following.

jdbc:sqlserver://sqlsvr-pdz.abcd12340.us-west-2.rds.amazonaws.com:1433

5. For Username, enter the master user name for the DB instance.
6. For Password, enter the password for the master user.
7. Choose the save icon in the dialog toolbar, as shown following.

8. Choose OK. After a few moments, SQL Workbench/J connects to your DB instance. If you can't
connect to your DB instance, see Security group considerations (p. 1385) and Troubleshooting
connections to your SQL Server DB instance (p. 1385).
9. In the query pane, enter the following SQL query.

select @@VERSION

10. Choose the Execute icon in the toolbar, as shown following.

The query returns the version information for your DB instance, similar to the following.

Microsoft SQL Server 2017 (RTM-CU22) (KB4577467) - 14.0.3356.20 (X64)

Security group considerations


To connect to your DB instance, your DB instance must be associated with a security group. This security
group contains the IP addresses and network configuration that you use to access the DB instance. You
might have associated your DB instance with an appropriate security group when you created your DB
instance. If you assigned a default, nonconfigured security group when you created your DB instance,
your DB instance firewall prevents connections.

In some cases, you might need to create a new security group to make access possible. For instructions
on creating a new security group, see Controlling access with security groups (p. 2680). For a topic that
walks you through the process of setting up rules for your VPC security group, see Tutorial: Create a VPC
for use with a DB instance (IPv4 only) (p. 2706).

After you have created the new security group, modify your DB instance to associate it with the security
group. For more information, see Modifying an Amazon RDS DB instance (p. 401).
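
As a sketch, the following AWS CLI command adds an inbound rule to a VPC security group so that a
single client IP address can reach the default SQL Server port. The security group ID and client address
are placeholders.

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 1433 \
    --cidr 203.0.113.25/32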

You can enhance security by using SSL to encrypt connections to your DB instance. For more information,
see Using SSL with a Microsoft SQL Server DB instance (p. 1456).

Troubleshooting connections to your SQL Server DB instance

The following table shows error messages that you might encounter when you attempt to connect to
your SQL Server DB instance.

Issue: Could not open a connection to SQL Server – Microsoft SQL Server, Error: 53

Troubleshooting suggestions: Make sure that you specified the server name correctly. For Server name,
enter the DNS name and port number of your sample DB instance, separated by a comma.
Important
If you have a colon between the DNS name and port number, change the colon to a comma.
Your server name should look like the following example.

sample-instance.cg034itsfake.us-east-1.rds.amazonaws.com,1433

Issue: No connection could be made because the target machine actively refused it – Microsoft SQL
Server, Error: 10061

Troubleshooting suggestions: You were able to reach the DB instance but the connection was refused.
This issue is usually caused by specifying the user name or password incorrectly. Verify the user name
and password, then retry.

Issue: A network-related or instance-specific error occurred while establishing a connection to SQL
Server. The server was not found or was not accessible... The wait operation timed out – Microsoft SQL
Server, Error: 258

Troubleshooting suggestions: The access rules enforced by your local firewall and the IP addresses
authorized to access your DB instance might not match. The problem is most likely the inbound rules in
your security group. For more information, see Security in Amazon RDS (p. 2565). Your database instance
must be publicly accessible. To connect to it from outside of the VPC, the instance must have a public IP
address assigned.

Note
For more information on connection issues, see Can't connect to Amazon RDS DB
instance (p. 2727).

Working with Active Directory with RDS for SQL Server

You can join an RDS for SQL Server DB instance to a Microsoft Active Directory (AD) domain. Your AD
domain can be hosted on AWS Managed AD within AWS, or on a Self Managed AD in a location of your
choice, including your corporate data centers, on AWS EC2, or with other cloud providers.

You can authenticate domain users using NTLM authentication with Self Managed Active Directory. You
can use Kerberos and NTLM authentication with AWS Managed Active Directory.

In the following sections, you can find information about working with Self Managed Active Directory
and AWS Managed Active Directory for Microsoft SQL Server on Amazon RDS.

Topics
• Working with Self Managed Active Directory with an Amazon RDS for SQL Server DB
instance (p. 1388)
• Working with AWS Managed Active Directory with RDS for SQL Server (p. 1401)

Working with Self Managed Active Directory with an Amazon RDS for SQL Server DB instance

You can join your RDS for SQL Server DB instances directly to your self-managed Active Directory (AD)
domain, regardless of where your AD is hosted: in corporate data centers, on AWS EC2, or with other
cloud providers. With self-managed AD, you use NTLM authentication to directly control authentication
of users and services on your RDS for SQL Server DB instances without using intermediary domains
and forest trusts. When users authenticate with an RDS for SQL Server DB instance joined to your self-
managed AD domain, authentication requests are forwarded to a self-managed AD domain that you
specify.

Topics
• Region and version availability (p. 1388)
• Requirements (p. 1388)
• Limitations (p. 1390)
• Overview of setting up Self Managed Active Directory (p. 1391)
• Setting up Self Managed Active Directory (p. 1391)
• Managing a DB instance in a self-managed Active Directory Domain (p. 1397)
• Understanding self-managed Active Directory Domain membership (p. 1398)
• Troubleshooting self-managed Active Directory (p. 1398)
• Restoring a SQL Server DB instance and then adding it to a self-managed Active Directory
domain (p. 1400)

Region and version availability


Amazon RDS supports Self Managed AD for SQL Server using NTLM in all AWS Regions.

Requirements
Make sure you've met the following requirements before joining an RDS for SQL Server DB instance to
your self-managed AD domain.

Topics
• Configure your on-premises AD (p. 1388)
• Configure your network connectivity (p. 1389)
• Configure your AD domain service account (p. 1390)

Configure your on-premises AD


Make sure that you have an on-premises or other self-managed Microsoft AD that you can join the
Amazon RDS for SQL Server instance to. Your on-premises AD should have the following configuration:

• If you have Active Directory sites defined, make sure the subnets in the VPC associated with your RDS
for SQL Server DB instance are defined in your Active Directory site. Confirm there aren't any conflicts
between the subnets in your VPC and the subnets in your other AD sites.
• Your AD domain controller has a domain functional level of Windows Server 2008 R2 or higher.
• Your AD domain name can't be in Single Label Domain (SLD) format. RDS for SQL Server does not
support SLD domains.

• The fully qualified domain name (FQDN) and organizational unit (OU) for your AD can't exceed 64
characters.

Configure your network connectivity


Make sure that you have met the following network configurations:

• Connectivity configured between the Amazon VPC where you want to create the RDS for SQL Server
DB instance and your self-managed Active Directory. You can set up connectivity using AWS Direct
Connect, AWS VPN, VPC peering, or AWS Transit Gateway.
• For VPC security groups, the default security group for your default Amazon VPC is already added
to your RDS for SQL Server DB instance in the console. Ensure that the security group and the VPC
network ACLs for the subnet(s) where you're creating your RDS for SQL Server DB instance allow traffic
on the ports shown in the following table.

The following table identifies the role of each port.

Protocol     Ports            Role

TCP/UDP      53               Domain Name System (DNS)
TCP/UDP      88               Kerberos authentication
TCP/UDP      464              Change/Set password
TCP/UDP      389              Lightweight Directory Access Protocol (LDAP)
TCP          135              Distributed Computing Environment / End Point Mapper (DCE / EPMAP)
TCP          445              Directory Services SMB file sharing
TCP          636              Lightweight Directory Access Protocol over TLS/SSL (LDAPS)
TCP          49152 - 65535    Ephemeral ports for RPC


• Generally, the domain DNS servers are located in the AD domain controllers. You do not need to
configure the VPC DHCP option set to use this feature. For more information, see DHCP option sets in
the Amazon VPC User Guide.

Important
If you're using VPC network ACLs, you must also allow outbound traffic on dynamic ports
(49152-65535) from your RDS for SQL Server DB instance. Ensure that these traffic rules are
also mirrored on the firewalls that apply to each of the AD domain controllers, DNS servers, and
RDS for SQL Server DB instances.
While VPC security groups require ports to be opened only in the direction that network traffic
is initiated, most Windows firewalls and VPC network ACLs require ports to be open in both
directions.

Configure your AD domain service account


Make sure that you have met the following requirements for an AD domain service account:

• Make sure that you have a service account in your self-managed AD domain with delegated
permissions to join computers to the domain. A domain service account is a user account in your self-
managed AD that has been delegated permission to perform certain tasks.
• The domain service account needs to be delegated the following permissions in the Organizational
Unit (OU) that you're joining your RDS for SQL Server DB instance to:
• Validated ability to write to the DNS host name
• Validated ability to write to the service principal name
• Create and delete computer objects

These represent the minimum set of permissions that are required to join computer objects to your
self-managed Active Directory. For more information, see Errors when attempting to join computers to
a domain in the Microsoft Windows Server documentation.

Important
Do not move computer objects that RDS for SQL Server creates in the Organizational Unit after
your DB instance is created. Moving the associated objects will cause your RDS for SQL Server
DB instance to become misconfigured. If you need to move the computer objects created by
Amazon RDS, use the ModifyDBInstance RDS API operation to modify the domain parameters
with the desired location of the computer objects.

Limitations
The following limitations apply for Self Managed AD for SQL Server.

• NTLM is the only supported authentication type. Kerberos authentication is not supported. If you need
to use Kerberos authentication, you can use AWS Managed AD instead of self-managed AD.
• The Microsoft Distributed Transaction Coordinator (MSDTC) service isn't supported, as it requires
Kerberos authentication.
• Your RDS for SQL Server DB instances do not use the Network Time Protocol (NTP) server of your self-
managed AD domain. They use an AWS NTP service instead.
• SQL Server linked servers must use SQL authentication to connect to other RDS for SQL Server DB
instances joined to your self-managed AD domain.

• Microsoft Group Policy Object (GPO) settings from your self-managed AD domain are not applied to
RDS for SQL Server DB instances.

Overview of setting up Self Managed Active Directory


To set up self-managed AD for an RDS for SQL Server DB instance, take the following steps, explained in
greater detail in Setting up Self Managed Active Directory (p. 1391):

In your AD domain:

• Create an Organizational Unit (OU).


• Create an AD domain user.
• Delegate control to the AD domain user.

From the AWS Management Console or API:

• Create a AWS KMS key.


• Create a secret using AWS Secrets Manager.
• Create or modify an RDS for SQL Server DB instance and join it to your self-managed AD domain.

Setting up Self Managed Active Directory


To set up Self Managed AD, take the following steps.

Topics
• Step 1: Create an Organizational Unit in your AD (p. 1391)
• Step 2: Create an AD domain user in your AD (p. 1392)
• Step 3: Delegate control to the AD user (p. 1392)
• Step 4: Create an AWS KMS key (p. 1392)
• Step 5: Create an AWS secret (p. 1393)
• Step 6: Create or modify a SQL Server DB instance (p. 1394)
• Step 7: Create Windows Authentication SQL Server logins (p. 1396)

Step 1: Create an Organizational Unit in your AD


Important
We recommend creating a dedicated OU and service credential scoped to that OU for any AWS
account that owns an RDS for SQL Server DB instance joined to your self-managed AD domain. By
dedicating an OU and service credential, you can avoid conflicting permissions and follow the
principle of least privilege.

To create an OU in your AD

1. Connect to your AD domain as a domain administrator.


2. Open Active Directory Users and Computers and select the domain where you want to create your
OU.
3. Right-click the domain and choose New, then Organizational Unit.
4. Enter a name for the OU.
5. Keep the box selected for Protect container from accidental deletion.
6. Click OK. Your new OU will appear under your domain.

Step 2: Create an AD domain user in your AD


The domain user credentials will be used for the secret in AWS Secrets Manager.

To create an AD domain user in your AD

1. Open Active Directory Users and Computers and select the domain and OU where you want to
create your user.
2. Right-click the Users object and choose New, then User.
3. Enter a first name, last name, and logon name for the user. Click Next.
4. Enter a password for the user. Don't select "User must change password at next login". Don't select
"Account is disabled". Click Next.
5. Click OK. Your new user will appear under your domain.

Step 3: Delegate control to the AD user


To delegate control to the AD domain user in your domain

1. Open Active Directory Users and Computers MMC snap-in and select the domain where you want
to create your user.
2. Right-click the OU that you created earlier and choose Delegate Control.
3. On the Delegation of Control Wizard, click Next.
4. On the Users or Groups section, click Add.
5. On the Select Users, Computers, or Groups section, enter the AD user you created and click Check
Names. If your AD user check is successful, click OK.
6. On the Users or Groups section, confirm your AD user was added and click Next.
7. On the Tasks to Delegate section, choose Create a custom task to delegate and click Next.
8. On the Active Directory Object Type section:

a. Choose Only the following objects in the folder.


b. Select Computer Objects and click Next.
c. Select Create selected objects in this folder.
d. Select Delete selected objects in this folder and click Next.
9. On the Permissions section:

a. Keep General selected.


b. Select Validated write to DNS host name.
c. Select Validated write to service principal name and click Next.
10. For Completing the Delegation of Control Wizard, review and confirm your settings and click
Finish.

Step 4: Create an AWS KMS key


The KMS key is used to encrypt your AWS secret.

To create an AWS KMS key


Note
For Encryption Key, don't use the AWS default KMS key. Be sure to create the AWS KMS key in
the same AWS account that contains the RDS for SQL Server DB instance that you want to join
to your self-managed AD.

1. In the AWS KMS console, choose Create key.


2. For Key Type, choose Symmetric.
3. For Key Usage, choose Encrypt and decrypt.
4. For Advanced options:

a. For Key material origin, choose KMS.


b. For Regionality, choose Single-Region key and click Next.
5. For Alias, provide a name for the KMS key.
6. (Optional) For Description, provide a description of the KMS key.
7. (Optional) For Tags, provide a tag the KMS key and click Next.
8. For Key administrators, provide the name of an IAM user and select it.
9. For Key deletion, keep the box selected for Allow key administrators to delete this key and click
Next.
10. For Key users, provide the same IAM user from the previous step and select it. Click Next.
11. Review the configuration.
12. For Key policy, include the following to the policy Statement:

{
"Sid": "Allow use of the KMS key on behalf of RDS",
"Effect": "Allow",
"Principal": {
"Service": [
"rds.amazonaws.com"
]
},
"Action": "kms:Decrypt",
"Resource": "*"
}

13. Click Finish.
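
If you prefer the AWS CLI, the following commands create an equivalent symmetric encryption key and
give it an alias, as a sketch. The alias name is a placeholder, replace the --target-key-id value with the
KeyId returned by create-key, and you still need to apply the key policy statement shown above (for
example, with the aws kms put-key-policy command).

aws kms create-key \
    --description "KMS key for the RDS for SQL Server self-managed AD secret"

aws kms create-alias \
    --alias-name alias/my-rds-ad-key \
    --target-key-id 1234abcd-12ab-34cd-56ef-1234567890ab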

Step 5: Create an AWS secret


To create a secret
Note
Be sure to create the secret in the same AWS account that contains the RDS for SQL Server DB
instance that you want to join to your self-managed AD.

1. In AWS Secrets Manager, choose Store a new secret.


2. For Secret type, choose Other type of secret.
3. For Key/value pairs, add your two keys:

a. For the first key, enter CUSTOMER_MANAGED_ACTIVE_DIRECTORY_USERNAME.


b. For the value of the first key, enter the name of the AD user that you created on your domain in
a previous step.
c. For the second key, enter CUSTOMER_MANAGED_ACTIVE_DIRECTORY_PASSWORD.
d. For the value of the second key, enter the password that you created for the AD user on your
domain.
4. For Encryption key, enter the KMS key that you created in a previous step and click Next.
5. For Secret name, enter a descriptive name that helps you find your secret later.
6. (Optional) For Description, enter a description for the secret name.

7. For Resource permission, click Edit.


8. Add the following policy to the permission policy:
Note
We recommend that you use the aws:sourceAccount and aws:sourceArn
conditions in the policy to avoid the confused deputy problem. Use your AWS account for
aws:sourceAccount and the RDS for SQL Server DB instance ARN for aws:sourceArn.
For more information, see Preventing cross-service confused deputy problems (p. 2640).

{
"Version": "2012-10-17",
"Statement":
[
{
"Effect": "Allow",
"Principal":
{
"Service": "rds.amazonaws.com"
},
"Action": "secretsmanager:GetSecretValue",
"Resource": "*",
"Condition":
{
"StringEquals":
{
"aws:sourceAccount": "123456789012"
},
"ArnLike":
{
"aws:sourceArn": "arn:aws:rds:us-west-2:123456789012:db:*"
}
}
}
]
}

9. Click Save then click Next.


10. For Configure rotation settings, keep the default values and choose Next.
11. Review the settings for the secret and click Store.
12. Choose the secret you created and copy the value for the Secret ARN. This will be used in the next
step to set up self-managed Active Directory.
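
Alternatively, you can create the same secret with the AWS CLI, as in the following sketch. The secret
name, key alias, and user values are placeholders, the two key names must match those shown in step 3,
and you still need to attach the resource permission policy from step 8 (for example, with aws
secretsmanager put-resource-policy).

aws secretsmanager create-secret \
    --name my-rds-ad-secret \
    --kms-key-id alias/my-rds-ad-key \
    --secret-string '{"CUSTOMER_MANAGED_ACTIVE_DIRECTORY_USERNAME":"my-ad-service-user","CUSTOMER_MANAGED_ACTIVE_DIRECTORY_PASSWORD":"my-ad-password"}'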

Step 6: Create or modify a SQL Server DB instance


You can use the console, CLI, or RDS API to associate an RDS for SQL Server DB instance with a self-
managed AD domain. You can do this in one of the following ways:

• Create a new SQL Server DB instance using the console, the create-db-instance CLI command, or the
CreateDBInstance RDS API operation.

For instructions, see Creating an Amazon RDS DB instance (p. 300).


• Modify an existing SQL Server DB instance using the console, the modify-db-instance CLI command, or
the ModifyDBInstance RDS API operation.

For instructions, see Modifying an Amazon RDS DB instance (p. 401).


• Restore a SQL Server DB instance from a DB snapshot using the console, the restore-db-instance-from-
db-snapshot CLI command, or the RestoreDBInstanceFromDBSnapshot RDS API operation.

For instructions, see Restoring from a DB snapshot (p. 615).


• Restore a SQL Server DB instance to a point-in-time using the console, the restore-db-instance-to-
point-in-time CLI command, or the RestoreDBInstanceToPointInTime RDS API operation.

For instructions, see Restoring a DB instance to a specified time (p. 660).

When you use the AWS CLI, the following parameters are required for the DB instance to be able to use
the self-managed Active Directory domain that you created:

• For the --domain-fqdn parameter, use the fully qualified domain name (FQDN) of your self-managed
Active Directory.
• For the --domain-ou parameter, use the OU that you created in your self-managed AD.
• For the --domain-auth-secret-arn parameter, use the value of the Secret ARN that you created in
a previous step.
• For the --domain-dns-ips parameter, use the primary and secondary IPv4 addresses of the DNS
servers for your self-managed AD. If you don't have a secondary DNS server IP address, enter the
primary IP address twice.

The following example CLI commands show how to create, modify, and remove an RDS for SQL Server
DB instance with a self-managed AD domain.
Important
If you modify a DB instance to join it to or remove it from a self-managed AD domain, a reboot
of the DB instance is required for the modification to take effect. You can choose to apply
the changes immediately or wait until the next maintenance window. Choosing the Apply
Immediately option causes downtime for a Single-AZ DB instance. A Multi-AZ DB instance
performs a failover before completing the reboot. For more information, see Using the Apply
Immediately setting (p. 402).

The following CLI command creates a new RDS for SQL Server DB instance and joins it to a self-managed
AD domain.

For Linux, macOS, or Unix:

aws rds create-db-instance \


--db-instance-identifier my-DB-instance \
--db-instance-class db.m5.xlarge \
--allocated-storage 50 \
--engine sqlserver-se \
--engine-version 15.00.4043.16.v1 \
--license-model license-included \
--master-username my-master-username \
--master-user-password my-master-password \
--domain-fqdn my_AD_domain.my_AD.my_domain \
--domain-ou OU=my-AD-test-OU,DC=my-AD-test,DC=my-AD,DC=my-domain \
--domain-auth-secret-arn "arn:aws:secretsmanager:region:account-number:secret:my-AD-
test-secret-123456" \
--domain-dns-ips "10.11.12.13" "10.11.12.14"

For Windows:

aws rds create-db-instance ^


--db-instance-identifier my-DB-instance ^
--db-instance-class db.m5.xlarge ^
--allocated-storage 50 ^
--engine sqlserver-se ^
--engine-version 15.00.4043.16.v1 ^
--license-model license-included ^
--master-username my-master-username ^
--master-user-password my-master-password ^
--domain-fqdn my_AD_domain.my_AD.my_domain ^
--domain-ou OU=my-AD-test-OU,DC=my-AD-test,DC=my-AD,DC=my-domain ^
--domain-auth-secret-arn "arn:aws:secretsmanager:region:account-number:secret:my-AD-
test-secret-123456" ^
--domain-dns-ips "10.11.12.13" "10.11.12.14"

The following CLI command modifies an existing RDS for SQL Server DB instance to use a self-managed
Active Directory domain.

For Linux, macOS, or Unix:

aws rds modify-db-instance \


--db-instance-identifier my-DB-instance \
--domain-fqdn my_AD_domain.my_AD.my_domain \
--domain-ou OU=my-AD-test-OU,DC=my-AD-test,DC=my-AD,DC=my-domain \
--domain-auth-secret-arn "arn:aws:secretsmanager:region:account-number:secret:my-AD-
test-secret-123456" \
--domain-dns-ips "10.11.12.13" "10.11.12.14"

For Windows:

aws rds modify-db-instance ^


--db-instance-identifier my-DB-instance ^
--domain-fqdn my_AD_domain.my_AD.my_domain ^
--domain-ou OU=my-AD-test-OU,DC=my-AD-test,DC=my-AD,DC=my-domain ^
--domain-auth-secret-arn "arn:aws:secretsmanager:region:account-number:secret:my-AD-
test-secret-123456" ^
--domain-dns-ips "10.11.12.13" "10.11.12.14"

The following CLI command removes an RDS for SQL Server DB instance from a self-managed Active
Directory domain.

For Linux, macOS, or Unix:

aws rds modify-db-instance \


--db-instance-identifier my-DB-instance \
--disable-domain

For Windows:

aws rds modify-db-instance ^


--db-instance-identifier my-DB-instance ^
--disable-domain

Step 7: Create Windows Authentication SQL Server logins


Use the Amazon RDS master user credentials to connect to the SQL Server DB instance as you do for any
other DB instance. Because the DB instance is joined to the self-managed AD domain, you can provision
SQL Server logins and users. You do this from the AD users and groups utility in your self-managed AD
domain. Database permissions are managed through standard SQL Server permissions granted and
revoked to these Windows logins.

In order for a self-managed AD user to authenticate with SQL Server, a SQL Server Windows login must
exist for the self-managed AD user or a self-managed Active Directory group that the user is a member
of. Fine-grained access control is handled through granting and revoking permissions on these SQL
Server logins. A self-managed AD user that doesn't have a SQL Server login or belong to a self-managed
AD group with such a login can't access the SQL Server DB instance.

The ALTER ANY LOGIN permission is required to create a self-managed AD SQL Server login. If you
haven't created any logins with this permission, connect as the DB instance's master user using SQL
Server Authentication and create your self-managed AD SQL Server logins under the context of the
master user.

You can run a data definition language (DDL) command such as the following to create a SQL Server
login for a self-managed AD user or group.
Note
Specify users and groups using the pre-Windows 2000 login name in the format
my_AD_domain\my_AD_domain_user. You can't use a user principal name (UPN) in the format
my_AD_domain_user@my_AD_domain.

USE [master]
GO
CREATE LOGIN [my_AD_domain\my_AD_domain_user] FROM WINDOWS WITH DEFAULT_DATABASE =
[master], DEFAULT_LANGUAGE = [us_english];
GO

For more information, see CREATE LOGIN (Transact-SQL) in the Microsoft Developer Network
documentation.
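After the login exists, you grant database-level permissions to it in the usual way. The following sketch assumes a hypothetical database named mydatabase; it creates a database user mapped to the Windows login and adds it to the db_datareader role.

USE [mydatabase]
GO
-- Map the Windows login to a user in this database.
CREATE USER [my_AD_domain\my_AD_domain_user] FOR LOGIN [my_AD_domain\my_AD_domain_user];
GO
-- Grant read access by adding the user to a fixed database role.
ALTER ROLE [db_datareader] ADD MEMBER [my_AD_domain\my_AD_domain_user];
GO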

Users (both humans and applications) from your domain can now connect to the RDS for SQL Server
instance from a self-managed AD domain-joined client machine using Windows authentication.

Managing a DB instance in a self-managed Active Directory Domain

You can use the console, AWS CLI, or the Amazon RDS API to manage your DB instance and its
relationship with your self-managed AD domain. For example, you can move the DB instance into, out of,
or between domains.

For example, using the Amazon RDS API, you can do the following:

• To reattempt a self-managed domain join for a failed membership, use the ModifyDBInstance API
operation and specify the same set of parameters:
• --domain-fqdn
• --domain-dns-ips
• --domain-ou
• --domain-auth-secret-arn
• To remove a DB instance from a self-managed domain, use the ModifyDBInstance API operation
and specify --disable-domain for the domain parameter.
• To move a DB instance from one self-managed domain to another, use the ModifyDBInstance API
operation and specify the domain parameters for the new domain:
• --domain-fqdn
• --domain-dns-ips
• --domain-ou
• --domain-auth-secret-arn
• To list self-managed AD domain membership for each DB instance, use the DescribeDBInstances API
operation.
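For example, the following AWS CLI sketch lists the self-managed AD domain membership details (the DomainMemberships attribute) for your DB instances; the query expression is an assumption about how you might format the output.

aws rds describe-db-instances \
    --query 'DBInstances[*].{DBInstanceIdentifier:DBInstanceIdentifier,DomainMemberships:DomainMemberships}'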


Understanding self-managed Active Directory Domain membership

After you create or modify your DB instance, the instance becomes a member of the self-managed AD
domain. The AWS console indicates the status of the self-managed Active Directory domain membership
for the DB instance. The status of the DB instance can be one of the following:

• joined – The instance is a member of the AD domain.


• joining – The instance is in the process of becoming a member of the AD domain.
• pending-join – The instance membership is pending.
• pending-maintenance-join – AWS will attempt to make the instance a member of the AD domain
during the next scheduled maintenance window.
• pending-removal – The removal of the instance from the AD domain is pending.
• pending-maintenance-removal – AWS will attempt to remove the instance from the AD domain
during the next scheduled maintenance window.
• failed – A configuration problem has prevented the instance from joining the AD domain. Check and
fix your configuration before reissuing the instance modify command.
• removing – The instance is being removed from the self-managed AD domain.

A request to become a member of a self-managed AD domain can fail because of a network connectivity
issue. For example, you might create a DB instance or modify an existing instance and have the attempt
fail for the DB instance to become a member of a self-managed AD domain. In this case, either reissue
the command to create or modify the DB instance or modify the newly created instance to join the self-
managed AD domain.

Troubleshooting self-managed Active Directory


The following are issues you might encounter when you set up or modify self-managed AD.

Error 2 / 0x2: The system cannot find the file specified.
Common causes: The format or location of the Organizational Unit (OU) specified with the --domain-ou
parameter is invalid, or the domain service account specified via AWS Secrets Manager lacks the
permissions required to join the OU.
Troubleshooting suggestions: Review the --domain-ou parameter. Ensure the domain service account
has the correct permissions to the OU. For more information, see Configure your AD domain service
account (p. 1390).

Error 5 / 0x5: Access is denied.
Common causes: Misconfigured permissions for the domain service account, or the computer account
already exists in the domain.
Troubleshooting suggestions: Review the domain service account permissions in the domain, and verify
that the RDS computer account is not duplicated in the domain. You can verify the name of the RDS
computer account by running SELECT @@SERVERNAME on your RDS for SQL Server DB instance. If you
are using Multi-AZ, try rebooting with failover and then verify the RDS computer account name again.
For more information, see Rebooting a DB instance (p. 436).

Error 87 / 0x57: The parameter is incorrect.
Common causes: The domain service account specified via AWS Secrets Manager doesn't have the
correct permissions. The user profile may also be corrupted.
Troubleshooting suggestions: Review the requirements for the domain service account. For more
information, see Configure your AD domain service account (p. 1390).

Error 234 / 0xEA: The specified Organizational Unit (OU) does not exist.
Common causes: The OU specified with the --domain-ou parameter doesn't exist in your self-managed
AD.
Troubleshooting suggestions: Review the --domain-ou parameter and ensure the specified OU exists in
your self-managed AD.

Error 1326 / 0x52E: The user name or password is incorrect.
Common causes: The domain service account credentials provided in AWS Secrets Manager contain an
unknown user name or an incorrect password. The domain account may also be disabled in your
self-managed AD.
Troubleshooting suggestions: Ensure that the credentials provided in AWS Secrets Manager are correct
and that the domain account is enabled in your self-managed Active Directory.

Error 1355 / 0x54B: The specified domain either does not exist or could not be contacted.
Common causes: The domain is down, the specified set of DNS IP addresses is unreachable, or the
specified FQDN is unreachable.
Troubleshooting suggestions: Review the --domain-dns-ips and --domain-fqdn parameters to ensure
they're correct. Review the networking configuration of your RDS for SQL Server DB instance and
ensure that your self-managed AD is reachable. For more information, see Configure your network
connectivity (p. 1389).

Error 1772 / 0x6BA: The RPC server is unavailable.
Common causes: There was an issue reaching the RPC service of your AD domain. This might be a
service or network issue.
Troubleshooting suggestions: Validate that the RPC service is running on your domain controllers and
that TCP ports 135 and 49152-65535 on your domain are reachable from your RDS for SQL Server DB
instance.

Error 2224 / 0x8B0: The user account already exists.
Common causes: The computer account that's being added to your self-managed AD already exists.
Troubleshooting suggestions: Identify the computer account by running SELECT @@SERVERNAME on
your RDS for SQL Server DB instance, and then carefully remove it from your self-managed AD.

Error 2242 / 0x8C2: The password of this user has expired.
Common causes: The password for the domain service account specified via AWS Secrets Manager has
expired.
Troubleshooting suggestions: Update the password for the domain service account used to join your
RDS for SQL Server DB instance to your self-managed AD.

Restoring a SQL Server DB instance and then adding it to a self-managed Active Directory domain

You can restore a DB snapshot or do point-in-time recovery (PITR) for a SQL Server DB instance and then
add it to a self-managed Active Directory domain. Once the DB instance is restored, modify the instance
using the process explained in Step 6: Create or modify a SQL Server DB instance (p. 1394) to add the DB
instance to a self-managed AD domain.


Working with AWS Managed Active Directory with RDS for SQL Server

You can use AWS Managed Microsoft AD to authenticate users with Windows Authentication when they
connect to your RDS for SQL Server DB instance. The DB instance works with AWS Directory Service for
Microsoft Active Directory, also called AWS Managed Microsoft AD, to enable Windows Authentication.
When users authenticate with a SQL Server DB instance joined to the trusting domain, authentication
requests are forwarded to the domain directory that you create with AWS Directory Service.

Region and version availability


Amazon RDS supports using only AWS Managed Microsoft AD for Windows Authentication. RDS doesn't
support using AD Connector. For more information, see the following:

• Application compatibility policy for AWS Managed Microsoft AD


• Application compatibility policy for AD Connector

For information on version and Region availability, see Kerberos authentication with RDS for SQL Server.

Overview of setting up Windows authentication


Amazon RDS uses mixed mode for Windows Authentication. This approach means that the master user
(the name and password used to create your SQL Server DB instance) uses SQL Authentication. Because
the master user account is a privileged credential, you should restrict access to this account.

To get Windows Authentication using an on-premises or self-hosted Microsoft Active Directory, create
a forest trust. The trust can be one-way or two-way. For more information on setting up forest trusts
using AWS Directory Service, see When to create a trust relationship in the AWS Directory Service
Administration Guide.

To set up Windows authentication for a SQL Server DB instance, do the following steps, explained in
greater detail in Setting up Windows Authentication for SQL Server DB instances (p. 1402):

1. Use AWS Managed Microsoft AD, either from the AWS Management Console or AWS Directory Service
API, to create an AWS Managed Microsoft AD directory.
2. If you use the AWS CLI or Amazon RDS API to create your SQL Server DB instance, create
an AWS Identity and Access Management (IAM) role. This role uses the managed IAM policy
AmazonRDSDirectoryServiceAccess and allows Amazon RDS to make calls to your directory. If
you use the console to create your SQL Server DB instance, AWS creates the IAM role for you.

For the role to allow access, the AWS Security Token Service (AWS STS) endpoint must be activated in
the AWS Region for your AWS account. AWS STS endpoints are active by default in all AWS Regions,
and you can use them without any further actions. For more information, see Managing AWS STS in an
AWS Region in the IAM User Guide.
3. Create and configure users and groups in the AWS Managed Microsoft AD directory using the
Microsoft Active Directory tools. For more information about creating users and groups in your Active
Directory, see Manage users and groups in AWS Managed Microsoft AD in the AWS Directory Service
Administration Guide.
4. If you plan to locate the directory and the DB instance in different VPCs, enable cross-VPC traffic.
5. Use Amazon RDS to create a new SQL Server DB instance either from the console, AWS CLI, or Amazon
RDS API. In the create request, you provide the domain identifier ("d-*" identifier) that was generated
when you created your directory and the name of the role you created. You can also modify an
existing SQL Server DB instance to use Windows Authentication by setting the domain and IAM role
parameters for the DB instance.


6. Use the Amazon RDS master user credentials to connect to the SQL Server DB instance as you do any
other DB instance. Because the DB instance is joined to the AWS Managed Microsoft AD domain, you
can provision SQL Server logins and users from the Active Directory users and groups in their domain.
(These are known as SQL Server "Windows" logins.) Database permissions are managed through
standard SQL Server permissions granted and revoked to these Windows logins.

Creating the endpoint for Kerberos authentication


Kerberos-based authentication requires that the endpoint be the customer-specified host name, a
period, and then the fully qualified domain name (FQDN). The following is an example of an endpoint
you might use with Kerberos-based authentication. In this example, the SQL Server DB instance
host name is ad-test and the domain name is corp-ad.company.com.

ad-test.corp-ad.company.com

If you want to make sure your connection is using Kerberos, run the following query:

SELECT net_transport, auth_scheme


FROM sys.dm_exec_connections
WHERE session_id = @@SPID;

Setting up Windows Authentication for SQL Server DB instances


You use AWS Directory Service for Microsoft Active Directory, also called AWS Managed Microsoft AD, to
set up Windows Authentication for a SQL Server DB instance. To set up Windows Authentication, take
the following steps.

Step 1: Create a directory using the AWS Directory Service for Microsoft Active
Directory
AWS Directory Service creates a fully managed Microsoft Active Directory in the AWS Cloud. When you
create an AWS Managed Microsoft AD directory, AWS Directory Service creates two domain controllers
and Domain Name Service (DNS) servers on your behalf. The directory servers are created in two subnets
in two different Availability Zones within a VPC. This redundancy helps ensure that your directory
remains accessible even if a failure occurs.

When you create an AWS Managed Microsoft AD directory, AWS Directory Service performs the following
tasks on your behalf:

• Sets up a Microsoft Active Directory within the VPC.


• Creates a directory administrator account with the user name Admin and the specified password. You
use this account to manage your directory.
Note
Be sure to save this password. AWS Directory Service doesn't store this password, and you
can't retrieve or reset it.
• Creates a security group for the directory controllers.

When you launch an AWS Directory Service for Microsoft Active Directory, AWS creates an Organizational
Unit (OU) that contains all your directory's objects. This OU, which has the NetBIOS name that you typed
when you created your directory, is located in the domain root. The domain root is owned and managed
by AWS.

The admin account that was created with your AWS Managed Microsoft AD directory has permissions for
the most common administrative activities for your OU:


• Create, update, or delete users, groups, and computers.


• Add resources to your domain such as file or print servers, and then assign permissions for those
resources to users and groups in your OU.
• Create additional OUs and containers.
• Delegate authority.
• Create and link group policies.
• Restore deleted objects from the Active Directory Recycle Bin.
• Run AD and DNS Windows PowerShell modules on the Active Directory Web Service.

The admin account also has rights to perform the following domain-wide activities:

• Manage DNS configurations (add, remove, or update records, zones, and forwarders).
• View DNS event logs.
• View security event logs.

To create a directory with AWS Managed Microsoft AD

1. In the AWS Directory Service console navigation pane, choose Directories and choose Set up
directory.
2. Choose AWS Managed Microsoft AD. This is the only option currently supported for use with
Amazon RDS.
3. Choose Next.
4. On the Enter directory information page, provide the following information:

Edition

Choose the edition that meets your requirements.


Directory DNS name

The fully qualified name for the directory, such as corp.example.com. Names longer than 47
characters aren't supported by SQL Server.
Directory NetBIOS name

An optional short name for the directory, such as CORP.


Directory description

An optional description for the directory.


Admin password

The password for the directory administrator. The directory creation process creates an
administrator account with the user name Admin and this password.

The directory administrator password can't include the word admin. The password is case-
sensitive and must be 8–64 characters in length. It must also contain at least one character from
three of the following four categories:
• Lowercase letters (a-z)
• Uppercase letters (A-Z)
• Numbers (0-9)
• Non-alphanumeric characters (~!@#$%^&*_-+=`|\(){}[]:;"'<>,.?/)
Confirm password

Retype the administrator password.


5. Choose Next.
6. On the Choose VPC and subnets page, provide the following information:

VPC

Choose the VPC for the directory.


Note
You can locate the directory and the DB instance in different VPCs, but if you do so,
make sure to enable cross-VPC traffic. For more information, see Step 4: Enable cross-
VPC traffic between the directory and the DB instance (p. 1407).
Subnets

Choose the subnets for the directory servers. The two subnets must be in different Availability
Zones.
7. Choose Next.
8. Review the directory information. If changes are needed, choose Previous. When the information is
correct, choose Create directory.

It takes several minutes for the directory to be created. When it has been successfully created, the Status
value changes to Active.


To see information about your directory, choose the directory ID in the directory listing. Make a note of
the Directory ID. You need this value when you create or modify your SQL Server DB instance.
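If you prefer the AWS CLI, you can also look up the directory ID with a command such as the following sketch; the query expression is an assumption for illustration.

aws ds describe-directories \
    --query 'DirectoryDescriptions[*].{DirectoryId:DirectoryId,Name:Name,Stage:Stage}'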

Step 2: Create the IAM role for use by Amazon RDS


If you use the console to create your SQL Server DB instance, you can skip this step. If you use the
CLI or RDS API to create your SQL Server DB instance, you must create an IAM role that uses the
AmazonRDSDirectoryServiceAccess managed IAM policy. This role allows Amazon RDS to make
calls to the AWS Directory Service for you.

If you are using a custom policy for joining a domain, rather than using the AWS-
managed AmazonRDSDirectoryServiceAccess policy, make sure that you allow the
ds:GetAuthorizedApplicationDetails action. This requirement is effective starting July 2019, due
to a change in the AWS Directory Service API.

The following IAM policy, AmazonRDSDirectoryServiceAccess, provides access to AWS Directory Service.

Example IAM policy for providing access to AWS Directory Service

{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ds:DescribeDirectories",
"ds:AuthorizeApplication",
"ds:UnauthorizeApplication",

1405
Amazon Relational Database Service User Guide
Working with AWS Managed Active
Directory with RDS for SQL Server

"ds:GetAuthorizedApplicationDetails"
],
"Effect": "Allow",
"Resource": "*"
}
]
}

We recommend using the aws:SourceArn and aws:SourceAccount global condition context keys in
resource-based trust relationships to limit the service's permissions to a specific resource. This is the most
effective way to protect against the confused deputy problem.

You might use both global condition context keys and have the aws:SourceArn value contain the
account ID. In this case, the aws:SourceAccount value and the account in the aws:SourceArn value
must use the same account ID when used in the same statement.

• Use aws:SourceArn if you want cross-service access for a single resource.


• Use aws:SourceAccount if you want to allow any resource in that account to be associated with the
cross-service use.

In the trust relationship, make sure to use the aws:SourceArn global condition context key with the full
Amazon Resource Name (ARN) of the resources accessing the role. For Windows Authentication, make
sure to include the DB instances, as shown in the following example.

Example trust relationship with global condition context key for Windows Authentication

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "rds.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"aws:SourceArn": [
"arn:aws:rds:Region:my_account_ID:db:db_instance_identifier"
]
}
}
}
]
}

Create an IAM role using this IAM policy and trust relationship. For more information about creating IAM
roles, see Creating customer managed policies in the IAM User Guide.
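For example, the following AWS CLI sketch creates such a role and attaches the managed policy. The role name and the trust-relationship.json file (containing the trust relationship shown above) are assumptions; the managed policy ARN shown is the standard service-role path for AmazonRDSDirectoryServiceAccess, which you can confirm in the IAM console.

# Create the role with the trust relationship that allows rds.amazonaws.com to assume it.
aws iam create-role \
    --role-name my-rds-directory-service-role \
    --assume-role-policy-document file://trust-relationship.json

# Attach the AWS managed policy that grants the required AWS Directory Service permissions.
aws iam attach-role-policy \
    --role-name my-rds-directory-service-role \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonRDSDirectoryServiceAccess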

Step 3: Create and configure users and groups


You can create users and groups with the Active Directory Users and Computers tool. This tool is one of
the Active Directory Domain Services and Active Directory Lightweight Directory Services tools. Users
represent individual people or entities that have access to your directory. Groups are very useful for
giving or denying privileges to groups of users, rather than having to apply those privileges to each
individual user.

To create users and groups in an AWS Directory Service directory, you must be connected to a Windows
EC2 instance that is a member of the AWS Directory Service directory. You must also be logged in as
a user that has privileges to create users and groups. For more information, see Add users and groups
(Simple AD and AWS Managed Microsoft AD) in the AWS Directory Service Administration Guide.

Step 4: Enable cross-VPC traffic between the directory and the DB instance
If you plan to locate the directory and the DB instance in the same VPC, skip this step and move on to
Step 5: Create or modify a SQL Server DB instance (p. 1407).

If you plan to locate the directory and the DB instance in different VPCs, configure cross-VPC traffic using
VPC peering or AWS Transit Gateway.

The following procedure enables traffic between VPCs using VPC peering. Follow the instructions in
What is VPC peering? in the Amazon Virtual Private Cloud Peering Guide.

To enable cross-VPC traffic using VPC peering

1. Set up appropriate VPC routing rules to ensure that network traffic can flow both ways.
2. Ensure that the DB instance's security group can receive inbound traffic from the directory's security
group.
3. Ensure that there is no network access control list (ACL) rule to block traffic.

If a different AWS account owns the directory, you must share the directory.

To share the directory between AWS accounts

1. Start sharing the directory with the AWS account that the DB instance will be created in by following
the instructions in Tutorial: Sharing your AWS Managed Microsoft AD directory for seamless EC2
domain-join in the AWS Directory Service Administration Guide.
2. Sign in to the AWS Directory Service console using the account for the DB instance, and ensure that
the domain has the SHARED status before proceeding.
3. While signed into the AWS Directory Service console using the account for the DB instance, note the
Directory ID value. You use this directory ID to join the DB instance to the domain.

Step 5: Create or modify a SQL Server DB instance


Create or modify a SQL Server DB instance for use with your directory. You can use the console, CLI, or
RDS API to associate a DB instance with a directory. You can do this in one of the following ways:

• Create a new SQL Server DB instance using the console, the create-db-instance CLI command, or the
CreateDBInstance RDS API operation.

For instructions, see Creating an Amazon RDS DB instance (p. 300).


• Modify an existing SQL Server DB instance using the console, the modify-db-instance CLI command, or
the ModifyDBInstance RDS API operation.

For instructions, see Modifying an Amazon RDS DB instance (p. 401).


• Restore a SQL Server DB instance from a DB snapshot using the console, the restore-db-instance-from-
db-snapshot CLI command, or the RestoreDBInstanceFromDBSnapshot RDS API operation.

For instructions, see Restoring from a DB snapshot (p. 615).


• Restore a SQL Server DB instance to a point-in-time using the console, the restore-db-instance-to-
point-in-time CLI command, or the RestoreDBInstanceToPointInTime RDS API operation.

For instructions, see Restoring a DB instance to a specified time (p. 660).


Windows Authentication is only supported for SQL Server DB instances in a VPC.

For the DB instance to be able to use the domain directory that you created, the following is required:

• For Directory, you must choose the domain identifier (d-ID) generated when you created the
directory.
• Make sure that the VPC security group has an outbound rule that lets the DB instance communicate
with the directory.

When you use the AWS CLI, the following parameters are required for the DB instance to be able to use
the directory that you created:

• For the --domain parameter, use the domain identifier (d-ID) generated when you created the
directory.
• For the --domain-iam-role-name parameter, use the role that you created that uses the managed
IAM policy AmazonRDSDirectoryServiceAccess.

For example, the following CLI command modifies a DB instance to use a directory.

For Linux, macOS, or Unix:

aws rds modify-db-instance \


--db-instance-identifier mydbinstance \
--domain d-ID \
--domain-iam-role-name role-name

For Windows:

aws rds modify-db-instance ^


--db-instance-identifier mydbinstance ^
--domain d-ID ^
--domain-iam-role-name role-name

Important
If you modify a DB instance to enable Kerberos authentication, reboot the DB instance after
making the change.

Step 6: Create Windows Authentication SQL Server logins


Use the Amazon RDS master user credentials to connect to the SQL Server DB instance as you do any
other DB instance. Because the DB instance is joined to the AWS Managed Microsoft AD domain, you
can provision SQL Server logins and users. You do this from the Active Directory users and groups in
your domain. Database permissions are managed through standard SQL Server permissions granted and
revoked to these Windows logins.

For an Active Directory user to authenticate with SQL Server, a SQL Server Windows login must exist for
the user or a group that the user is a member of. Fine-grained access control is handled through granting
and revoking permissions on these SQL Server logins. A user that doesn't have a SQL Server login or
belong to a group with such a login can't access the SQL Server DB instance.

The ALTER ANY LOGIN permission is required to create an Active Directory SQL Server login. If you
haven't created any logins with this permission, connect as the DB instance's master user using SQL
Server Authentication.

Run a data definition language (DDL) command such as the following example to create a SQL Server
login for an Active Directory user or group.
Note
Specify users and groups using the pre-Windows 2000 login name in the format
domainName\login_name. You can't use a user principal name (UPN) in the format
login_name@DomainName.

USE [master]
GO
CREATE LOGIN [mydomain\myuser] FROM WINDOWS WITH DEFAULT_DATABASE = [master],
DEFAULT_LANGUAGE = [us_english];
GO

For more information, see CREATE LOGIN (Transact-SQL) in the Microsoft Developer Network
documentation.

Users (both humans and applications) from your domain can now connect to the RDS for SQL Server
instance from a domain-joined client machine using Windows authentication.

Managing a DB instance in a Domain


You can use the console, AWS CLI, or the Amazon RDS API to manage your DB instance and its
relationship with your domain. For example, you can move the DB instance into, out of, or between
domains.

For example, using the Amazon RDS API, you can do the following:

• To reattempt a domain join for a failed membership, use the ModifyDBInstance API operation and
specify the current membership's directory ID.
• To update the IAM role name for membership, use the ModifyDBInstance API operation and specify
the current membership's directory ID and the new IAM role.
• To remove a DB instance from a domain, use the ModifyDBInstance API operation and specify none
as the domain parameter.
• To move a DB instance from one domain to another, use the ModifyDBInstance API operation and
specify the domain identifier of the new domain as the domain parameter.
• To list membership for each DB instance, use the DescribeDBInstances API operation.

Understanding Domain membership


After you create or modify your DB instance, the instance becomes a member of the domain. The
AWS console indicates the status of the domain membership for the DB instance. The status of the DB
instance can be one of the following:


• joined – The instance is a member of the domain.


• joining – The instance is in the process of becoming a member of the domain.
• pending-join – The instance membership is pending.
• pending-maintenance-join – AWS will attempt to make the instance a member of the domain during
the next scheduled maintenance window.
• pending-removal – The removal of the instance from the domain is pending.
• pending-maintenance-removal – AWS will attempt to remove the instance from the domain during
the next scheduled maintenance window.
• failed – A configuration problem has prevented the instance from joining the domain. Check and fix
your configuration before reissuing the instance modify command.
• removing – The instance is being removed from the domain.

A request to become a member of a domain can fail because of a network connectivity issue or an
incorrect IAM role. For example, you might create a DB instance or modify an existing instance and have
the attempt fail for the DB instance to become a member of a domain. In this case, either reissue the
command to create or modify the DB instance or modify the newly created instance to join the domain.

Connecting to SQL Server with Windows authentication


To connect to SQL Server with Windows Authentication, you must be logged into a domain-joined
computer as a domain user. After launching SQL Server Management Studio, choose Windows
Authentication as the authentication type.
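From a command line on the same domain-joined computer, you can test the same Kerberos-based connection with sqlcmd by using a trusted connection (-E). The endpoint below reuses the example host and domain names from Creating the endpoint for Kerberos authentication; treat them as placeholders.

sqlcmd -S ad-test.corp-ad.company.com -E -d master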

Restoring a SQL Server DB instance and then adding it to a domain

You can restore a DB snapshot or do point-in-time recovery (PITR) for a SQL Server DB instance and then
add it to a domain. Once the DB instance is restored, modify the instance using the process explained in
Step 5: Create or modify a SQL Server DB instance (p. 1407) to add the DB instance to a domain.


Updating applications to connect to Microsoft SQL Server DB instances using new SSL/TLS certificates

As of January 13, 2023, Amazon RDS has published new Certificate Authority (CA) certificates for
connecting to your RDS DB instances using Secure Socket Layer or Transport Layer Security (SSL/TLS).
Following, you can find information about updating your applications to use the new certificates.

This topic can help you to determine whether any client applications use SSL/TLS to connect to your DB
instances. If they do, you can further check whether those applications require certificate verification to
connect.
Note
Some applications are configured to connect to SQL Server DB instances only if they can
successfully verify the certificate on the server.
For such applications, you must update your client application trust stores to include the new CA
certificates.

After you update your CA certificates in the client application trust stores, you can rotate the certificates
on your DB instances. We strongly recommend testing these procedures in a development or staging
environment before implementing them in your production environments.

For more information about certificate rotation, see Rotating your SSL/TLS certificate (p. 2596). For
more information about downloading certificates, see Using SSL/TLS to encrypt a connection to a DB
instance (p. 2591). For information about using SSL/TLS with Microsoft SQL Server DB instances, see
Using SSL with a Microsoft SQL Server DB instance (p. 1456).

Topics
• Determining whether any applications are connecting to your Microsoft SQL Server DB instance
using SSL (p. 1411)
• Determining whether a client requires certificate verification in order to connect (p. 1412)
• Updating your application trust store (p. 1413)

Determining whether any applications are connecting to your Microsoft SQL Server DB instance using SSL

Check the DB instance configuration for the value of the rds.force_ssl parameter. By default, the
rds.force_ssl parameter is set to 0 (off). If the rds.force_ssl parameter is set to 1 (on), clients are
required to use SSL/TLS for connections. For more information about parameter groups, see Working
with parameter groups (p. 347).
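You can also check the parameter from the AWS CLI. The following sketch assumes a hypothetical parameter group name and filters the output down to the rds.force_ssl parameter.

aws rds describe-db-parameters \
    --db-parameter-group-name my-sqlserver-parameter-group \
    --query "Parameters[?ParameterName=='rds.force_ssl'].{Name:ParameterName,Value:ParameterValue}"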

Run the following query to get the current encryption option for all the open connections to a DB
instance. The column ENCRYPT_OPTION returns TRUE if the connection is encrypted.

select SESSION_ID,
ENCRYPT_OPTION,
NET_TRANSPORT,
AUTH_SCHEME
from SYS.DM_EXEC_CONNECTIONS

This query shows only the current connections. It doesn't show whether applications that have
connected and disconnected in the past have used SSL.


Determining whether a client requires certificate verification in order to connect

You can check whether different types of clients require certificate verification to connect.
Note
If you use connectors other than the ones listed, see the specific connector's documentation
for information about how it enforces encrypted connections. For more information, see
Connection modules for Microsoft SQL databases in the Microsoft SQL Server documentation.

SQL Server Management Studio


Check whether encryption is enforced for SQL Server Management Studio connections:

1. Launch SQL Server Management Studio.


2. For Connect to server, enter the server information, login user name, and password.
3. Choose Options.
4. Check if Encrypt connection is selected in the connect page.

For more information about SQL Server Management Studio, see Use SQL Server Management Studio.

Sqlcmd
The following example with the sqlcmd client shows how to check a script's SQL Server connection
to determine whether successful connections require a valid certificate. For more information, see
Connecting with sqlcmd in the Microsoft SQL Server documentation.

When using sqlcmd, an SSL connection requires verification against the server certificate if you use
the -N command argument to encrypt connections, as in the following example.

$ sqlcmd -N -S dbinstance.rds.amazon.com -d ExampleDB

Note
If sqlcmd is invoked with the -C option, it trusts the server certificate, even if that doesn't
match the client-side trust store.

ADO.NET
In the following example, the application connects using SSL, and the server certificate must be verified.

using SQLC = Microsoft.Data.SqlClient;

...

static public void Main()


{
using (var connection = new SQLC.SqlConnection(
"Server=tcp:dbinstance.rds.amazon.com;" +
"Database=ExampleDB;User ID=LOGIN_NAME;" +
"Password=YOUR_PASSWORD;" +
"Encrypt=True;TrustServerCertificate=False;"
))


{
connection.Open();
...
}

Java
In the following example, the application connects using SSL, and the server certificate must be verified.

String connectionUrl =
"jdbc:sqlserver://dbinstance.rds.amazon.com;" +
"databaseName=ExampleDB;integratedSecurity=true;" +
"encrypt=true;trustServerCertificate=false";

To enable SSL encryption for clients that connect using JDBC, you might need to add the Amazon RDS
certificate to the Java CA certificate store. For instructions, see Configuring the client for encryption
in the Microsoft SQL Server documentation. You can also provide the trusted CA certificate file name
directly by appending trustStore=path-to-certificate-trust-store-file to the connection
string.
Note
If you use TrustServerCertificate=true (or its equivalent) in the connection string, the
connection process skips the trust chain validation. In this case, the application connects even if
the certificate can't be verified. Using TrustServerCertificate=false enforces certificate
validation and is a best practice.

Updating your application trust store


You can update the trust store for applications that use Microsoft SQL Server. For instructions, see
Encrypting specific connections (p. 1457). Also, see Configuring the client for encryption in the Microsoft
SQL Server documentation.

If you are using an operating system other than Microsoft Windows, see the software distribution
documentation for SSL/TLS implementation for information about adding a new root CA certificate. For
example, OpenSSL and GnuTLS are popular options. Use the implementation method to add trust to the
RDS root CA certificate. Microsoft provides instructions for configuring certificates on some systems.

For information about downloading the root certificate, see Using SSL/TLS to encrypt a connection to a
DB instance (p. 2591).

For sample scripts that import certificates, see Sample script for importing certificates into your trust
store (p. 2603).
Note
When you update the trust store, you can retain older certificates in addition to adding the new
certificates.


Upgrading the Microsoft SQL Server DB engine


When Amazon RDS supports a new version of a database engine, you can upgrade your DB instances to
the new version. There are two kinds of upgrades for SQL Server DB instances: major version upgrades
and minor version upgrades.

Major version upgrades can contain database changes that are not backward-compatible with existing
applications. As a result, you must manually perform major version upgrades of your DB instances. You
can initiate a major version upgrade by modifying your DB instance. However, before you perform a
major version upgrade, we recommend that you test the upgrade by following the steps described in
Testing an upgrade (p. 1417).

In contrast, minor version upgrades include only changes that are backward-compatible with existing
applications. You can initiate a minor version upgrade manually by modifying your DB instance.

Alternatively, you can enable the Auto minor version upgrade option when creating or modifying a
DB instance. Doing so means that your DB instance is automatically upgraded after Amazon RDS tests
and approves the new version. You can confirm whether the minor version upgrade will be automatic by
using the describe-db-engine-versions AWS CLI command. For example:

aws rds describe-db-engine-versions --engine sqlserver-se --engine-version 14.00.3281.6.v1

In the following example, the CLI command returns a response showing that AutoUpgrade is true,
indicating that upgrades are automatic.

...

"ValidUpgradeTarget": [
{
"Engine": "sqlserver-se",
"EngineVersion": "14.00.3281.6.v1",
"Description": "SQL Server 2017 14.00.3281.6.v1",
"AutoUpgrade": true,
"IsMajorVersionUpgrade": false
}

...

For more information about performing upgrades, see Upgrading a SQL Server DB instance (p. 1418).
For information about what SQL Server versions are available on Amazon RDS, see Amazon RDS for
Microsoft SQL Server (p. 1354).

Topics
• Overview of upgrading (p. 1415)
• Major version upgrades (p. 1415)
• Multi-AZ and in-memory optimization considerations (p. 1417)
• Read replica considerations (p. 1417)
• Option group considerations (p. 1417)
• Parameter group considerations (p. 1417)
• Testing an upgrade (p. 1417)
• Upgrading a SQL Server DB instance (p. 1418)
• Upgrading deprecated DB instances before support ends (p. 1418)


Overview of upgrading
Amazon RDS takes two DB snapshots during the upgrade process. The first DB snapshot is of the DB
instance before any upgrade changes have been made. The second DB snapshot is taken after the
upgrade finishes.
Note
Amazon RDS only takes DB snapshots if you have set the backup retention period for your DB
instance to a number greater than 0. To change your backup retention period, see Modifying an
Amazon RDS DB instance (p. 401).

After an upgrade is completed, you can't revert to the previous version of the database engine. If you
want to return to the previous version, restore from the DB snapshot that was taken before the upgrade
to create a new DB instance.

During a minor or major version upgrade of SQL Server, the Free Storage Space and Disk Queue Depth
metrics will display -1. After the upgrade is completed, both metrics will return to normal.

Major version upgrades


Amazon RDS currently supports the following major version upgrades to a Microsoft SQL Server DB
instance.

You can upgrade your existing DB instance to SQL Server 2017 or 2019 from any version except SQL
Server 2008. To upgrade from SQL Server 2008, first upgrade to one of the other versions.

Current version: SQL Server 2017
Supported upgrade versions: SQL Server 2019

Current version: SQL Server 2016
Supported upgrade versions: SQL Server 2019, SQL Server 2017

Current version: SQL Server 2014
Supported upgrade versions: SQL Server 2019, SQL Server 2017, SQL Server 2016

Current version: SQL Server 2012 (end of support)
Supported upgrade versions: SQL Server 2019, SQL Server 2017, SQL Server 2016, SQL Server 2014

Current version: SQL Server 2008 R2 (end of support)
Supported upgrade versions: SQL Server 2016, SQL Server 2014, SQL Server 2012

You can use an AWS CLI query, such as the following example, to find the available upgrades for a
particular database engine version.


Example

For Linux, macOS, or Unix:

aws rds describe-db-engine-versions \


--engine sqlserver-se \
--engine-version 14.00.3281.6.v1 \
--query "DBEngineVersions[*].ValidUpgradeTarget[*].{EngineVersion:EngineVersion}" \
--output table

For Windows:

aws rds describe-db-engine-versions ^


--engine sqlserver-se ^
--engine-version 14.00.3281.6.v1 ^
--query "DBEngineVersions[*].ValidUpgradeTarget[*].{EngineVersion:EngineVersion}" ^
--output table

The output shows that you can upgrade version 14.00.3281.6 to the latest available SQL Server 2017 or
2019 versions.

--------------------------
|DescribeDBEngineVersions|
+------------------------+
| EngineVersion |
+------------------------+
| 14.00.3294.2.v1 |
| 14.00.3356.20.v1 |
| 14.00.3381.3.v1 |
| 14.00.3401.7.v1 |
| 14.00.3421.10.v1 |
| 14.00.3451.2.v1 |
| 15.00.4043.16.v1 |
| 15.00.4073.23.v1 |
| 15.00.4153.1.v1 |
| 15.00.4198.2.v1 |
| 15.00.4236.7.v1 |
+------------------------+

Database compatibility level


You can use Microsoft SQL Server database compatibility levels to adjust some database behaviors to
mimic previous versions of SQL Server. For more information, see Compatibility level in the Microsoft
documentation.

When you upgrade your DB instance, all existing databases remain at their original compatibility level.
For example, if you upgrade from SQL Server 2014 to SQL Server 2016, all existing databases have a
compatibility level of 120. Any new database created after the upgrade has a compatibility level of 130.

You can change the compatibility level of a database by using the ALTER DATABASE command. For
example, to change a database named customeracct to be compatible with SQL Server 2014, issue the
following command:

ALTER DATABASE customeracct SET COMPATIBILITY_LEVEL = 120
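To confirm the current compatibility level of each database before or after an upgrade, you can query sys.databases, as in the following example:

SELECT name, compatibility_level
FROM sys.databases
ORDER BY name;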


Multi-AZ and in-memory optimization considerations


Amazon RDS supports Multi-AZ deployments for DB instances running Microsoft SQL Server by using
SQL Server Database Mirroring (DBM) or Always On Availability Groups (AGs). For more information, see
Multi-AZ deployments for Amazon RDS for Microsoft SQL Server (p. 1450).

If your DB instance is in a Multi-AZ deployment, both the primary and standby instances are upgraded.
Amazon RDS does rolling upgrades. You have an outage only for the duration of a failover.

SQL Server 2014 through 2019 Enterprise Edition support in-memory optimization.

Read replica considerations


During a database version upgrade, Amazon RDS upgrades all of your read replicas along with the
primary DB instance. Amazon RDS does not support database version upgrades on the read replicas
separately. For more information on read replicas, see Working with read replicas for Microsoft SQL
Server in Amazon RDS (p. 1446).

When you perform a database version upgrade of the primary DB instance, all of its read replicas are
also automatically upgraded. Amazon RDS upgrades all of the read replicas simultaneously before
upgrading the primary DB instance. Read replicas may not be available until the database version
upgrade on the primary DB instance is complete.

Option group considerations


If your DB instance uses a custom DB option group, in some cases Amazon RDS can't automatically assign
your DB instance a new option group. For example, when you upgrade to a new major version, you must
specify a new option group. We recommend that you create a new option group, and add the same
options to it as your existing custom option group.

For more information, see Creating an option group (p. 332) or Copying an option group (p. 334).

Parameter group considerations


If your DB instance uses a custom DB parameter group:

• Amazon RDS automatically reboots the DB instance after an upgrade.


• In some cases, RDS can't automatically assign a new parameter group to your DB instance.

For example, when you upgrade to a new major version, you must specify a new parameter group. We
recommend that you create a new parameter group, and configure the parameters as in your existing
custom parameter group.

For more information, see Creating a DB parameter group (p. 350) or Copying a DB parameter
group (p. 356).

Testing an upgrade
Before you perform a major version upgrade on your DB instance, you should thoroughly test your
database, and all applications that access the database, for compatibility with the new version. We
recommend that you use the following procedure.

To test a major version upgrade

1. Review Upgrade SQL Server in the Microsoft documentation for the new version of the database
engine to see if there are compatibility issues that might affect your database or applications.


2. If your DB instance uses a custom option group, create a new option group compatible with the new
version you are upgrading to. For more information, see Option group considerations (p. 1417).
3. If your DB instance uses a custom parameter group, create a new parameter group compatible
with the new version you are upgrading to. For more information, see Parameter group
considerations (p. 1417).
4. Create a DB snapshot of the DB instance to be upgraded. For more information, see Creating a DB
snapshot (p. 613).
5. Restore the DB snapshot to create a new test DB instance. For more information, see Restoring from
a DB snapshot (p. 615).
6. Modify this new test DB instance to upgrade it to the new version by using one of the following
methods (a combined CLI sketch follows this procedure):

• Console (p. 430)


• AWS CLI (p. 430)
• RDS API (p. 430)
7. Evaluate the storage used by the upgraded instance to determine if the upgrade requires additional
storage.
8. Run as many of your quality assurance tests against the upgraded DB instance as needed to ensure
that your database and application work correctly with the new version. Implement any new tests
needed to evaluate the impact of any compatibility issues you identified in step 1. Test all stored
procedures and functions. Direct test versions of your applications to the upgraded DB instance.
9. If all tests pass, then perform the upgrade on your production DB instance. We recommend that
you do not allow write operations to the DB instance until you confirm that everything is working
correctly.
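The following AWS CLI sketch strings steps 4 through 6 together. The snapshot, instance, and engine version identifiers are placeholders; for a major version upgrade, you must also supply an option group and parameter group that are compatible with the target version, which are omitted here for brevity.

# Step 4: snapshot the production instance.
aws rds create-db-snapshot \
    --db-instance-identifier my-prod-sqlserver \
    --db-snapshot-identifier my-prod-sqlserver-pre-upgrade

# Step 5: restore the snapshot as a disposable test instance.
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier my-upgrade-test \
    --db-snapshot-identifier my-prod-sqlserver-pre-upgrade

# Step 6: upgrade the test instance to the target engine version.
aws rds modify-db-instance \
    --db-instance-identifier my-upgrade-test \
    --engine-version 15.00.4236.7.v1 \
    --allow-major-version-upgrade \
    --apply-immediately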

Upgrading a SQL Server DB instance


For information about manually or automatically upgrading a SQL Server DB instance, see the following:

• Upgrading a DB instance engine version (p. 429)


• Best practices for upgrading SQL Server 2008 R2 to SQL Server 2016 on Amazon RDS for SQL Server

Important
If you have any snapshots that are encrypted using AWS KMS, we recommend that you initiate
an upgrade before support ends.

Upgrading deprecated DB instances before support ends

After a major version is deprecated, you can't install it on new DB instances. RDS will try to automatically
upgrade all existing DB instances.

If you need to restore a deprecated DB instance, you can do point-in-time recovery (PITR) or restore
a snapshot. Doing this gives you temporary access to a DB instance that uses the version that is being
deprecated. However, after a major version is fully deprecated, these DB instances will also be
automatically upgraded to a supported version.


Importing and exporting SQL Server databases using native backup and restore

Amazon RDS supports native backup and restore for Microsoft SQL Server databases using full backup
files (.bak files). When you use RDS, you access files stored in Amazon S3 rather than using the local file
system on the database server.

For example, you can create a full backup from your local server, store it on S3, and then restore it onto
an existing Amazon RDS DB instance. You can also make backups from RDS, store them on S3, and then
restore them wherever you want.

Native backup and restore is available in all AWS Regions for Single-AZ and Multi-AZ DB instances,
including Multi-AZ DB instances with read replicas. Native backup and restore is available for all editions
of Microsoft SQL Server supported on Amazon RDS.

The following diagram shows the supported scenarios.

Using native .bak files is usually the fastest way to back up and restore databases. There are many
additional advantages to using native backup and restore. For example, you can do the following:

• Migrate databases to or from Amazon RDS.
• Move databases between RDS for SQL Server DB instances.
• Migrate data, schemas, stored procedures, triggers, and other database code inside .bak files.
• Back up and restore single databases, instead of entire DB instances.
• Create copies of databases for development, testing, training, and demonstrations.
• Store and transfer backup files with Amazon S3, for an added layer of protection for disaster recovery.
• Create native backups of databases that have Transparent Data Encryption (TDE) turned on, and
restore those backups to on-premises databases. For more information, see Support for Transparent
Data Encryption in SQL Server (p. 1528).
• Restore native backups of on-premises databases that have TDE turned on to RDS for SQL Server DB
instances. For more information, see Support for Transparent Data Encryption in SQL Server (p. 1528).

Contents
• Limitations and recommendations (p. 1420)
• Setting up for native backup and restore (p. 1421)


• Manually creating an IAM role for native backup and restore (p. 1422)
• Using native backup and restore (p. 1425)
• Backing up a database (p. 1425)
• Usage (p. 1425)
• Examples (p. 1427)
• Restoring a database (p. 1428)
• Usage (p. 1428)
• Examples (p. 1429)
• Restoring a log (p. 1430)
• Usage (p. 1430)
• Examples (p. 1431)
• Finishing a database restore (p. 1431)
• Usage (p. 1432)
• Working with partially restored databases (p. 1432)
• Dropping a partially restored database (p. 1432)
• Snapshot restore and point-in-time recovery behavior for partially restored
databases (p. 1432)
• Canceling a task (p. 1432)
• Usage (p. 1432)
• Tracking the status of tasks (p. 1432)
• Usage (p. 1432)
• Examples (p. 1433)
• Response (p. 1433)
• Compressing backup files (p. 1435)
• Troubleshooting (p. 1435)
• Importing and exporting SQL Server data using other methods (p. 1437)
• Importing data into RDS for SQL Server by using a snapshot (p. 1437)
• Import the data (p. 1440)
• Generate and Publish Scripts Wizard (p. 1440)
• Import and Export Wizard (p. 1441)
• Bulk copy (p. 1441)
• Exporting data from RDS for SQL Server (p. 1442)
• SQL Server Import and Export Wizard (p. 1442)
• SQL Server Generate and Publish Scripts Wizard and bcp utility (p. 1444)

Limitations and recommendations


The following are some limitations to using native backup and restore:

• You can't back up to, or restore from, an Amazon S3 bucket in a different AWS Region from your
Amazon RDS DB instance.
• You can't restore a database with the same name as an existing database. Database names are unique.
• We strongly recommend that you don't restore backups from one time zone to a different time zone.
If you restore backups from one time zone to a different time zone, you must audit your queries and
applications for the effects of the time zone change.
• Amazon S3 has a size limit of 5 TB per file. For native backups of larger databases, you can use
multifile backup.


• The maximum database size that can be backed up to S3 depends on the available memory, CPU, I/
O, and network resources on the DB instance. The larger the database, the more memory the backup
agent consumes. Our testing shows that you can make a compressed backup of a 16-TB database on
our newest-generation instance types from 2xlarge instance sizes and larger, given sufficient system
resources.
• You can't back up to or restore from more than 10 backup files at the same time.
• A differential backup is based on the last full backup. For differential backups to work, you can't take
a snapshot between the last full backup and the differential backup. If you want a differential backup,
but a manual or automated snapshot exists, then do another full backup before proceeding with the
differential backup.
• Differential and log restores aren't supported for databases with files that have their file_guid (unique
identifier) set to NULL.
• You can run up to two backup or restore tasks at the same time.
• You can't perform native log backups from SQL Server on Amazon RDS.
• RDS supports native restores of databases up to 16 TB. Native restores of databases on SQL Server
Express Edition are limited to 10 GB.
• You can't do a native backup during the maintenance window, or any time Amazon RDS is in the
process of taking a snapshot of the database. If a native backup task overlaps with the RDS daily
backup window, the native backup task is canceled.
• On Multi-AZ DB instances, you can only natively restore databases that are backed up in the full
recovery model.
• Restoring from differential backups on Multi-AZ instances isn't supported.
• Calling the RDS procedures for native backup and restore within a transaction isn't supported.
• Use a symmetric encryption AWS KMS key to encrypt your backups. Amazon RDS doesn't support
asymmetric KMS keys. For more information, see Creating symmetric encryption KMS keys in the AWS
Key Management Service Developer Guide.
• Native backup files are encrypted with the specified KMS key using the "Encryption-Only" crypto
mode. Keep this in mind when you restore encrypted backup files.
• You can't restore a database that contains a FILESTREAM file group.

If your database can be offline while the backup file is created, copied, and restored, we recommend that
you use native backup and restore to migrate it to RDS. If your on-premises database can't be offline, we
recommend that you use the AWS Database Migration Service to migrate your database to Amazon RDS.
For more information, see What is AWS Database Migration Service?

Native backup and restore isn't intended to replace the data recovery capabilities of the cross-region
snapshot copy feature. We recommend that you use snapshot copy to copy your database snapshot
to another AWS Region for cross-region disaster recovery in Amazon RDS. For more information, see
Copying a DB snapshot (p. 619).
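
For example, a cross-region snapshot copy started with the AWS CLI might look like the following
sketch. The snapshot identifiers, account ID, and Regions are placeholders; run the command in the
destination Region.

aws rds copy-db-snapshot \
    --source-db-snapshot-identifier arn:aws:rds:us-west-2:123456789012:snapshot:mysnapshot \
    --target-db-snapshot-identifier mysnapshot-copy \
    --source-region us-west-2 \
    --region us-east-1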

Setting up for native backup and restore


To set up for native backup and restore, you need three components:

1. An Amazon S3 bucket to store your backup files.

You must have an S3 bucket to use for your backup files and then upload backups you want to
migrate to RDS. If you already have an Amazon S3 bucket, you can use that. If you don't, you can
create a bucket. Alternatively, you can choose to have a new bucket created for you when you add the
SQLSERVER_BACKUP_RESTORE option by using the AWS Management Console.

For information on using S3, see the Amazon Simple Storage Service User Guide.


2. An AWS Identity and Access Management (IAM) role to access the bucket.

If you already have an IAM role, you can use that. You can choose to have a new IAM role created
for you when you add the SQLSERVER_BACKUP_RESTORE option by using the AWS Management
Console. Alternatively, you can create a new one manually.

If you want to create a new IAM role manually, take the approach discussed in the next section. Do the
same if you want to attach trust relationships and permissions policies to an existing IAM role.
3. The SQLSERVER_BACKUP_RESTORE option added to an option group on your DB instance.

To enable native backup and restore on your DB instance, you add the SQLSERVER_BACKUP_RESTORE
option to an option group on your DB instance. For more information and instructions, see Support
for native backup and restore in SQL Server (p. 1525).
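
If you manage option groups with the AWS CLI rather than the console, adding the option might look
like the following sketch. The option group name is a placeholder, and the IAM role ARN is the role
described in the next section.

aws rds add-option-to-option-group \
    --option-group-name my-sqlserver-option-group \
    --options "OptionName=SQLSERVER_BACKUP_RESTORE,OptionSettings=[{Name=IAM_ROLE_ARN,Value=arn:aws:iam::123456789012:role/rds-native-backup-restore-role}]" \
    --apply-immediately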

Manually creating an IAM role for native backup and restore


If you want to manually create a new IAM role to use with native backup and restore, you can do so.
In this case, you create a role to delegate permissions from the Amazon RDS service to your Amazon
S3 bucket. When you create an IAM role, you attach a trust relationship and a permissions policy. The
trust relationship allows RDS to assume this role. The permissions policy defines the actions this role can
perform. For more information about creating the role, see Creating a role to delegate permissions to an
AWS service.

For the native backup and restore feature, use trust relationships and permissions policies similar
to the examples in this section. In the following example, we use the service principal name
rds.amazonaws.com as an alias for all service accounts. In the other examples, we specify an Amazon
Resource Name (ARN) to identify another account, user, or role that we're granting access to in the trust
policy.

We recommend using the aws:SourceArn and aws:SourceAccount global condition context keys in
resource-based trust relationships to limit the service's permissions to a specific resource. This is the most
effective way to protect against the confused deputy problem.

You might use both global condition context keys and have the aws:SourceArn value contain the
account ID. In this case, the aws:SourceAccount value and the account in the aws:SourceArn value
must use the same account ID when used in the same statement.

• Use aws:SourceArn if you want cross-service access for a single resource.


• Use aws:SourceAccount if you want to allow any resource in that account to be associated with the
cross-service use.

In the trust relationship, make sure to use the aws:SourceArn global condition context key with the full
ARN of the resources accessing the role. For native backup and restore, make sure to include both the DB
option group and the DB instances, as shown in the following example.

Example trust relationship with global condition context key for native backup and restore

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "rds.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "aws:SourceArn": [
                        "arn:aws:rds:Region:my_account_ID:db:db_instance_identifier",
                        "arn:aws:rds:Region:my_account_ID:og:option_group_name"
                    ]
                }
            }
        }
    ]
}

The following example uses an ARN to specify a resource. For more information on using ARNs, see
Amazon resource names (ARNs).

Example permissions policy for native backup and restore without encryption support

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::bucket_name"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObjectAttributes",
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": "arn:aws:s3:::bucket_name/*"
        }
    ]
}

Example permissions policy for native backup and restore with encryption support

If you want to encrypt your backup files, include an encryption key in your permissions policy. For more
information about encryption keys, see Getting started in the AWS Key Management Service Developer
Guide.
Note
You must use a symmetric encryption KMS key to encrypt your backups. Amazon RDS doesn't
support asymmetric KMS keys. For more information, see Creating symmetric encryption KMS
keys in the AWS Key Management Service Developer Guide.
The IAM role must also be a key user and key administrator for the KMS key, that is, it must be
specified in the key policy. For more information, see Creating symmetric encryption KMS keys
in the AWS Key Management Service Developer Guide.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:DescribeKey",
                "kms:GenerateDataKey",
                "kms:Encrypt",
                "kms:Decrypt"
            ],
            "Resource": "arn:aws:kms:region:account-id:key/key-id"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::bucket_name"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObjectAttributes",
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": "arn:aws:s3:::bucket_name/*"
        }
    ]
}
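
If you create the role from the command line instead of the console, the sequence might look like the
following sketch. It assumes that the trust relationship and permissions policy shown earlier are saved
locally as trust-policy.json and permissions-policy.json; the role and policy names are placeholders.

aws iam create-role \
    --role-name rds-native-backup-restore-role \
    --assume-role-policy-document file://trust-policy.json

aws iam put-role-policy \
    --role-name rds-native-backup-restore-role \
    --policy-name rds-native-backup-restore-policy \
    --policy-document file://permissions-policy.json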


Using native backup and restore


After you have enabled and configured native backup and restore, you can start using it. First, you
connect to your Microsoft SQL Server database, and then you call an Amazon RDS stored procedure to
do the work. For instructions on connecting to your database, see Connecting to a DB instance running
the Microsoft SQL Server database engine (p. 1380).

Some of the stored procedures require that you provide an Amazon Resource
Name (ARN) to your Amazon S3 bucket and file. The format for your ARN is
arn:aws:s3:::bucket_name/file_name.extension. Amazon S3 doesn't require an account
number or AWS Region in ARNs.

If you also provide an optional KMS key, the format for the ARN of the key is
arn:aws:kms:region:account-id:key/key-id. For more information, see Amazon resource
names (ARNs) and AWS service namespaces. You must use a symmetric encryption KMS key to encrypt
your backups. Amazon RDS doesn't support asymmetric KMS keys. For more information, see Creating
symmetric encryption KMS keys in the AWS Key Management Service Developer Guide.
Note
Whether or not you use a KMS key, the native backup and restore tasks enable server-side
Advanced Encryption Standard (AES) 256-bit encryption by default for files uploaded to S3.

For instructions on how to call each stored procedure, see the following topics:

• Backing up a database (p. 1425)


• Restoring a database (p. 1428)
• Restoring a log (p. 1430)
• Finishing a database restore (p. 1431)
• Working with partially restored databases (p. 1432)
• Canceling a task (p. 1432)
• Tracking the status of tasks (p. 1432)

Backing up a database
To back up your database, use the rds_backup_database stored procedure.
Note
You can't back up a database during the maintenance window, or while Amazon RDS is taking a
snapshot.

Usage

exec msdb.dbo.rds_backup_database
@source_db_name='database_name',
@s3_arn_to_backup_to='arn:aws:s3:::bucket_name/file_name.extension',
[@kms_master_key_arn='arn:aws:kms:region:account-id:key/key-id'],
[@overwrite_s3_backup_file=0|1],
[@type='DIFFERENTIAL|FULL'],
[@number_of_files=n];

The following parameters are required:

• @source_db_name – The name of the database to back up.


• @s3_arn_to_backup_to – The ARN indicating the Amazon S3 bucket to use for the backup, plus the
name of the backup file.


The file can have any extension, but .bak is usually used.

The following parameters are optional:

• @kms_master_key_arn – The ARN for the symmetric encryption KMS key to use to encrypt the item.
• You can't use the default encryption key. If you use the default key, the database won't be backed
up.
• If you don't specify a KMS key identifier, the backup file won't be encrypted. For more information,
see Encrypting Amazon RDS resources.
• When you specify a KMS key, client-side encryption is used.
• Amazon RDS doesn't support asymmetric KMS keys. For more information, see Creating symmetric
encryption KMS keys in the AWS Key Management Service Developer Guide.
• @overwrite_s3_backup_file – A value that indicates whether to overwrite an existing backup file.
• 0 – Doesn't overwrite an existing file. This value is the default.

Setting @overwrite_s3_backup_file to 0 returns an error if the file already exists.


• 1 – Overwrites an existing file that has the specified name, even if it isn't a backup file.
• @type – The type of backup.
• DIFFERENTIAL – Makes a differential backup.
• FULL – Makes a full backup. This value is the default.

A differential backup is based on the last full backup. For differential backups to work, you can't take
a snapshot between the last full backup and the differential backup. If you want a differential backup,
but a snapshot exists, then do another full backup before proceeding with the differential backup.

You can look for the last full backup or snapshot using the following example SQL query:

select top 1
database_name
, backup_start_date
, backup_finish_date
from msdb.dbo.backupset
where database_name='mydatabase'
and type = 'D'
order by backup_start_date desc;

• @number_of_files – The number of files into which the backup will be divided (chunked). The
maximum number is 10.
• Multifile backup is supported for both full and differential backups.
• If you enter a value of 1 or omit the parameter, a single backup file is created.

Provide the prefix that the files have in common, then suffix that with an asterisk (*). The asterisk
can be anywhere in the file_name part of the S3 ARN. The asterisk is replaced by a series of
alphanumeric strings in the generated files, starting with 1-of-number_of_files.

For example, if the file names in the S3 ARN are backup*.bak and you set @number_of_files=4,
the backup files generated are backup1-of-4.bak, backup2-of-4.bak, backup3-of-4.bak, and
backup4-of-4.bak.
• If any of the file names already exists, and @overwrite_s3_backup_file is 0, an error is returned.
• Multifile backups can only have one asterisk in the file_name part of the S3 ARN.
• Single-file backups can have any number of asterisks in the file_name part of the S3 ARN.
Asterisks aren't removed from the generated file name.


Examples
Example of differential backup

exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup1.bak',
@overwrite_s3_backup_file=1,
@type='DIFFERENTIAL';

Example of full backup with encryption

exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup1.bak',
@kms_master_key_arn='arn:aws:kms:us-east-1:123456789012:key/AKIAIOSFODNN7EXAMPLE',
@overwrite_s3_backup_file=1,
@type='FULL';

Example of multifile backup

exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@number_of_files=4;

Example of multifile differential backup

exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@type='DIFFERENTIAL',
@number_of_files=4;

Example of multifile backup with encryption

exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@kms_master_key_arn='arn:aws:kms:us-east-1:123456789012:key/AKIAIOSFODNN7EXAMPLE',
@number_of_files=4;

Example of multifile backup with S3 overwrite

exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@overwrite_s3_backup_file=1,
@number_of_files=4;

Example of single-file backup with the @number_of_files parameter

This example generates a backup file named backup*.bak.

exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@number_of_files=1;

Restoring a database
To restore your database, call the rds_restore_database stored procedure. Amazon RDS creates an
initial snapshot of the database after the restore task is complete and the database is open.

Usage

exec msdb.dbo.rds_restore_database
@restore_db_name='database_name',
@s3_arn_to_restore_from='arn:aws:s3:::bucket_name/file_name.extension',
@with_norecovery=0|1,
[@kms_master_key_arn='arn:aws:kms:region:account-id:key/key-id'],
[@type='DIFFERENTIAL|FULL'];

The following parameters are required:

• @restore_db_name – The name of the database to restore. Database names are unique. You can't
restore a database with the same name as an existing database.
• @s3_arn_to_restore_from – The ARN indicating the Amazon S3 prefix and names of the backup
files used to restore the database.
• For a single-file backup, provide the entire file name.
• For a multifile backup, provide the prefix that the files have in common, then suffix that with an
asterisk (*).
• If @s3_arn_to_restore_from is empty, the following error message is returned: S3 ARN prefix
cannot be empty.

The following parameter is required for differential restores, but optional for full restores:

• @with_norecovery – The recovery clause to use for the restore operation.


• Set it to 0 to restore with RECOVERY. In this case, the database is online after the restore.
• Set it to 1 to restore with NORECOVERY. In this case, the database remains in the RESTORING state
after restore task completion. With this approach, you can do later differential restores.
• For DIFFERENTIAL restores, specify 0 or 1.
• For FULL restores, this value defaults to 0.

The following parameters are optional:

• @kms_master_key_arn – If you encrypted the backup file, the KMS key to use to decrypt the file.

When you specify a KMS key, client-side encryption is used.


• @type – The type of restore. Valid types are DIFFERENTIAL and FULL. The default value is FULL.

Note
For differential restores, either the database must be in the RESTORING state or a task must
already exist that restores with NORECOVERY.
You can't restore later differential backups while the database is online.
You can't submit a restore task for a database that already has a pending restore task with
RECOVERY.
Full restores with NORECOVERY and differential restores aren't supported on Multi-AZ instances.


Restoring a database on a Multi-AZ instance with read replicas is similar to restoring a database
on a Multi-AZ instance. You don't have to take any additional actions to restore a database on a
replica.

Examples
Example of single-file restore

exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak';

Example of multifile restore

To avoid errors when restoring multiple files, make sure that all the backup files have the same prefix,
and that no other files use that prefix.

exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup*';

Example of full database restore with RECOVERY

The following three examples perform the same task, full restore with RECOVERY.

exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak';

exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@type='FULL';

exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@type='FULL',
@with_norecovery=0;

Example of full database restore with encryption

exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@kms_master_key_arn='arn:aws:kms:us-east-1:123456789012:key/AKIAIOSFODNN7EXAMPLE';

Example of full database restore with NORECOVERY

exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@type='FULL',
@with_norecovery=1;


Example of differential restore with NORECOVERY

exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@type='DIFFERENTIAL',
@with_norecovery=1;

Example of differential restore with RECOVERY

exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@type='DIFFERENTIAL',
@with_norecovery=0;

Restoring a log
To restore your log, call the rds_restore_log stored procedure.

Usage

exec msdb.dbo.rds_restore_log
@restore_db_name='database_name',
@s3_arn_to_restore_from='arn:aws:s3:::bucket_name/log_file_name.extension',
[@kms_master_key_arn='arn:aws:kms:region:account-id:key/key-id'],
[@with_norecovery=0|1],
[@stopat='datetime'];

The following parameters are required:

• @restore_db_name – The name of the database whose log to restore.


• @s3_arn_to_restore_from – The ARN indicating the Amazon S3 prefix and name of the log file
used to restore the log. The file can have any extension, but .trn is usually used.

If @s3_arn_to_restore_from is empty, the following error message is returned: S3 ARN prefix cannot
be empty.

The following parameters are optional:

• @kms_master_key_arn – If you encrypted the log, the KMS key to use to decrypt the log.
• @with_norecovery – The recovery clause to use for the restore operation. This value defaults to 1.
• Set it to 0 to restore with RECOVERY. In this case, the database is online after the restore. You can't
restore further log backups while the database is online.
• Set it to 1 to restore with NORECOVERY. In this case, the database remains in the RESTORING state
after restore task completion. With this approach, you can do later log restores.
• @stopat – A value that specifies that the database is restored to its state at the date and time
specified (in datetime format). Only transaction log records written before the specified date and time
are applied to the database.

If this parameter isn't specified (it is NULL), the complete log is restored.

Note
For log restores, either the database must be in a state of restoring or a task must already exist
that restores with NORECOVERY.

You can't restore log backups while the database is online.


You can't submit a log restore task on a database that already has a pending restore task with
RECOVERY.
Log restores aren't supported on Multi-AZ instances.

Examples

Example of log restore

exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn';

Example of log restore with encryption

exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn',
@kms_master_key_arn='arn:aws:kms:us-east-1:123456789012:key/AKIAIOSFODNN7EXAMPLE';

Example of log restore with NORECOVERY

The following two examples perform the same task, log restore with NORECOVERY.

exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn',
@with_norecovery=1;

exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn';

Example of log restore with RECOVERY

exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn',
@with_norecovery=0;

Example of log restore with STOPAT clause

exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn',
@with_norecovery=0,
@stopat='2019-12-01 03:57:09';

Finishing a database restore


If the last restore task on the database was performed using @with_norecovery=1, the
database is now in the RESTORING state. Open this database for normal operation by using the
rds_finish_restore stored procedure.


Usage

exec msdb.dbo.rds_finish_restore @db_name='database_name';

Note
To use this approach, the database must be in the RESTORING state without any pending
restore tasks.
The rds_finish_restore procedure isn't supported on Multi-AZ instances.
To finish restoring the database, use the master login. Or use the user login that most recently
restored the database or log with NORECOVERY.
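
For example, to bring a database named mydatabase online after it was restored with NORECOVERY,
the call looks like the following.

exec msdb.dbo.rds_finish_restore @db_name='mydatabase';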

Working with partially restored databases


Dropping a partially restored database
To drop a partially restored database (left in the RESTORING state), use the rds_drop_database stored
procedure.

exec msdb.dbo.rds_drop_database @db_name='database_name';

Note
You can't submit a DROP database request for a database that already has a pending restore or
finish restore task.
To drop the database, use the master login. Or use the user login that most recently restored the
database or log with NORECOVERY.

Snapshot restore and point-in-time recovery behavior for partially restored databases
Partially restored databases in the source instance (left in the RESTORING state) are dropped from the
target instance during snapshot restore and point-in-time recovery.

Canceling a task
To cancel a backup or restore task, call the rds_cancel_task stored procedure.
Note
You can't cancel a FINISH_RESTORE task.

Usage

exec msdb.dbo.rds_cancel_task @task_id=ID_number;

The following parameter is required:

• @task_id – The ID of the task to cancel. You can get the task ID by calling rds_task_status.
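
For example, to cancel the task with ID 5 (the same task ID used in the rds_task_status examples that
follow), the call looks like the following.

exec msdb.dbo.rds_cancel_task @task_id=5;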

Tracking the status of tasks


To track the status of your backup and restore tasks, call the rds_task_status stored procedure. If you
don't provide any parameters, the stored procedure returns the status of all tasks. The status for tasks is
updated approximately every two minutes. Task history is retained for 36 days.

Usage

exec msdb.dbo.rds_task_status
[@db_name='database_name'],
[@task_id=ID_number];

The following parameters are optional:

• @db_name – The name of the database to show the task status for.
• @task_id – The ID of the task to show the task status for.

Examples
Example of listing the status for a specific task

exec msdb.dbo.rds_task_status @task_id=5;

Example of listing the status for a specific database and task

exec msdb.dbo.rds_task_status
@db_name='my_database',
@task_id=5;

Example of listing all tasks and their statuses on a specific database

exec msdb.dbo.rds_task_status @db_name='my_database';

Example of listing all tasks and their statuses on the current instance

exec msdb.dbo.rds_task_status;

Response
The rds_task_status stored procedure returns the following columns.

• task_id – The ID of the task.

• task_type – The task type, depending on the input parameters, as follows:

  For backup tasks:
  • BACKUP_DB – Full database backup
  • BACKUP_DB_DIFFERENTIAL – Differential database backup

  For restore tasks:
  • RESTORE_DB – Full database restore with RECOVERY
  • RESTORE_DB_NORECOVERY – Full database restore with NORECOVERY
  • RESTORE_DB_DIFFERENTIAL – Differential database restore with RECOVERY
  • RESTORE_DB_DIFFERENTIAL_NORECOVERY – Differential database restore with NORECOVERY
  • RESTORE_DB_LOG – Log restore with RECOVERY
  • RESTORE_DB_LOG_NORECOVERY – Log restore with NORECOVERY

  For tasks that finish a restore:
  • FINISH_RESTORE – Finish restore and open database

  Amazon RDS creates an initial snapshot of the database after it is open on completion of the
  following restore tasks: RESTORE_DB, RESTORE_DB_DIFFERENTIAL, RESTORE_DB_LOG, and
  FINISH_RESTORE.

• database_name – The name of the database that the task is associated with.

• % complete – The progress of the task as a percent value.

• duration (mins) – The amount of time spent on the task, in minutes.

• lifecycle – The status of the task. The possible statuses are the following:
  • CREATED – As soon as you call rds_backup_database or rds_restore_database, a task is created
  and the status is set to CREATED.
  • IN_PROGRESS – After a backup or restore task starts, the status is set to IN_PROGRESS. It can take
  up to 5 minutes for the status to change from CREATED to IN_PROGRESS.
  • SUCCESS – After a backup or restore task completes, the status is set to SUCCESS.
  • ERROR – If a backup or restore task fails, the status is set to ERROR. For more information about
  the error, see the task_info column.
  • CANCEL_REQUESTED – As soon as you call rds_cancel_task, the status of the task is set to
  CANCEL_REQUESTED.
  • CANCELLED – After a task is successfully canceled, the status of the task is set to CANCELLED.

• task_info – Additional information about the task. If an error occurs while backing up or restoring
a database, this column contains information about the error. For a list of possible errors, and
mitigation strategies, see Troubleshooting (p. 1435).

• last_updated – The date and time that the task status was last updated. The status is updated after
every 5 percent of progress.

• created_at – The date and time that the task was created.

• S3_object_arn – The ARN indicating the Amazon S3 prefix and the name of the file that is being
backed up or restored.

• overwrite_s3_backup_file – The value of the @overwrite_s3_backup_file parameter specified when
calling a backup task. For more information, see Backing up a database (p. 1425).

• KMS_master_key_arn – The ARN for the KMS key used for encryption (for backup) and decryption
(for restore).

• filepath – Not applicable to native backup and restore tasks.

• overwrite_file – Not applicable to native backup and restore tasks.


Compressing backup files


To save space in your Amazon S3 bucket, you can compress your backup files. For more information
about compressing backup files, see Backup compression in the Microsoft documentation.

Compressing your backup files is supported for the following database editions:

• Microsoft SQL Server Enterprise Edition


• Microsoft SQL Server Standard Edition

To turn on compression for your backup files, run the following code:

exec rdsadmin..rds_set_configuration 'S3 backup compression', 'true';

To turn off compression for your backup files, run the following code:

exec rdsadmin..rds_set_configuration 'S3 backup compression', 'false';

Troubleshooting
The following are issues you might encounter when you use native backup and restore.

For each issue, the error message or symptom is shown first, followed by troubleshooting suggestions.

• Database backup/restore option is not enabled yet or is in the process of being enabled. Please try
again later.

  Make sure that you have added the SQLSERVER_BACKUP_RESTORE option to the DB option group
  associated with your DB instance. For more information, see Adding the native backup and restore
  option (p. 1525).

• Access Denied

  The backup or restore process can't access the backup file. This is usually caused by issues like the
  following:
  • Referencing the incorrect bucket. Referencing the bucket using an incorrect format. Referencing a
  file name without using the ARN.
  • Incorrect permissions on the bucket file. For example, if it is created by a different account that is
  trying to access it now, add the correct permissions.
  • An IAM policy that is incorrect or incomplete. Your IAM role must include all the necessary
  elements, including, for example, the correct version. These are highlighted in Importing and
  exporting SQL Server databases using native backup and restore (p. 1419).

• BACKUP DATABASE WITH COMPRESSION isn't supported on <edition_name> Edition

  Compressing your backup files is only supported for Microsoft SQL Server Enterprise Edition and
  Standard Edition. For more information, see Compressing backup files (p. 1435).

• Key <ARN> does not exist

  You attempted to restore an encrypted backup, but didn't provide a valid encryption key. Check your
  encryption key and retry. For more information, see Restoring a database (p. 1428).

• Please reissue task with correct type and overwrite property

  If you attempt to back up your database and provide the name of a file that already exists, but set
  the overwrite property to false, the save operation fails. To fix this error, either provide the name of a
  file that doesn't already exist, or set the overwrite property to true. For more information, see Backing
  up a database (p. 1425).

  It's also possible that you intended to restore your database, but called the rds_backup_database
  stored procedure accidentally. In that case, call the rds_restore_database stored procedure instead.
  For more information, see Restoring a database (p. 1428).

  If you intended to restore your database and called the rds_restore_database stored procedure, make
  sure that you provided the name of a valid backup file. For more information, see Using native backup
  and restore (p. 1425).

• Please specify a bucket that is in the same region as RDS instance

  You can't back up to, or restore from, an Amazon S3 bucket in a different AWS Region from your
  Amazon RDS DB instance. You can use Amazon S3 replication to copy the backup file to the correct
  AWS Region. For more information, see Cross-Region replication in the Amazon S3 documentation.

• The specified bucket does not exist

  Verify that you have provided the correct ARN for your bucket and file, in the correct format. For more
  information, see Using native backup and restore (p. 1425).

• User <ARN> is not authorized to perform <kms action> on resource <ARN>

  You requested an encrypted operation, but didn't provide correct AWS KMS permissions. Verify that
  you have the correct permissions, or add them. For more information, see Setting up for native backup
  and restore (p. 1421).

• The Restore task is unable to restore from more than 10 backup file(s). Please reduce the number of
files matched and try again.

  Reduce the number of files that you're trying to restore from. You can make each individual file larger
  if necessary.

• Database 'database_name' already exists. Two databases that differ only by case or accent are not
allowed. Choose a different database name.

  You can't restore a database with the same name as an existing database. Database names are
  unique.


Importing and exporting SQL Server data using other methods
Following, you can find information about using snapshots to import your Microsoft SQL Server data to
Amazon RDS. You can also find information about using snapshots to export your data from an RDS DB
instance running SQL Server.

If your scenario supports it, it's easier to move data in and out of Amazon RDS by using the native backup
and restore functionality. For more information, see Importing and exporting SQL Server databases
using native backup and restore (p. 1419).
Note
Amazon RDS for Microsoft SQL Server doesn't support importing data into the msdb database.

Importing data into RDS for SQL Server by using a snapshot


To import data into a SQL Server DB instance by using a snapshot

1. Create a DB instance. For more information, see Creating an Amazon RDS DB instance (p. 300).
2. Stop applications from accessing the destination DB instance.

If you prevent access to your DB instance while you are importing data, data transfer is faster.
Additionally, you don't need to worry about conflicts while data is being loaded if other applications
cannot write to the DB instance at the same time. If something goes wrong and you have to roll
back to an earlier database snapshot, the only changes that you lose are the imported data. You can
import this data again after you resolve the issue.

For information about controlling access to your DB instance, see Controlling access with security
groups (p. 2680).
3. Create a snapshot of the target database.

If the target database is already populated with data, we recommend that you take a snapshot of
the database before you import the data. If something goes wrong with the data import or you want
to discard the changes, you can restore the database to its previous state by using the snapshot. For
information about database snapshots, see Creating a DB snapshot (p. 613).
Note
When you take a database snapshot, I/O operations to the database are suspended for a
moment (milliseconds) while the backup is in progress.
4. Disable automated backups on the target database.

Disabling automated backups on the target DB instance improves performance while you are
importing your data because Amazon RDS doesn't log transactions when automatic backups are
disabled. However, there are some things to consider. Automated backups are required to perform
a point-in-time recovery. Thus, you can't restore the database to a specific point in time while you
are importing data. Additionally, any automated backups that were created on the DB instance are
erased unless you choose to retain them.

Choosing to retain the automated backups can help protect you against accidental deletion of
data. Amazon RDS also saves the database instance properties along with each automated backup
to make it easy to recover. Using this option lets you restore a deleted database instance to a
specified point in time within the backup retention period even after deleting it. Automated backups
are automatically deleted at the end of the specified backup window, just as they are for an active
database instance.

You can also use previous snapshots to recover the database, and any snapshots that you have taken
remain available. For information about automated backups, see Working with backups (p. 591).
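
If you prefer to script these preparation steps, creating the safety snapshot (step 3) and turning off
automated backups (step 4) might look like the following AWS CLI sketch. The DB instance and snapshot
identifiers are placeholders.

aws rds create-db-snapshot \
    --db-instance-identifier my-target-instance \
    --db-snapshot-identifier pre-import-snapshot

aws rds modify-db-instance \
    --db-instance-identifier my-target-instance \
    --backup-retention-period 0 \
    --apply-immediately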


5. Disable foreign key constraints, if applicable.

If you need to disable foreign key constraints, you can do so with the following script.

--Disable foreign keys on all tables


DECLARE @table_name SYSNAME;
DECLARE @cmd NVARCHAR(MAX);
DECLARE table_cursor CURSOR FOR SELECT name FROM sys.tables;

OPEN table_cursor;
FETCH NEXT FROM table_cursor INTO @table_name;

WHILE @@FETCH_STATUS = 0 BEGIN


SELECT @cmd = 'ALTER TABLE '+QUOTENAME(@table_name)+' NOCHECK CONSTRAINT ALL';
EXEC (@cmd);
FETCH NEXT FROM table_cursor INTO @table_name;
END

CLOSE table_cursor;
DEALLOCATE table_cursor;

GO

6. Drop indexes, if applicable.
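
If you prefer to disable nonclustered indexes rather than drop them (so that you can rebuild them in
step 13), a script along the following lines can do so. This sketch assumes the standard sys.indexes
and sys.tables catalog views and, like the other scripts in this procedure, assumes that the tables are
in the dbo schema.

--Disable nonclustered indexes on all tables

DECLARE @index_name SYSNAME;
DECLARE @table_name SYSNAME;
DECLARE @cmd NVARCHAR(MAX);
DECLARE index_cursor CURSOR FOR SELECT i.name, t.name
FROM sys.indexes i
JOIN sys.tables t ON i.object_id = t.object_id
WHERE i.type_desc = 'NONCLUSTERED' AND i.name IS NOT NULL;

OPEN index_cursor;
FETCH NEXT FROM index_cursor INTO @index_name, @table_name;

WHILE @@FETCH_STATUS = 0 BEGIN
SET @cmd = 'ALTER INDEX '+QUOTENAME(@index_name)+' ON dbo.'+QUOTENAME(@table_name)+' DISABLE';
EXEC (@cmd);
FETCH NEXT FROM index_cursor INTO @index_name, @table_name;
END

CLOSE index_cursor;
DEALLOCATE index_cursor;

GO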


7. Disable triggers, if applicable.

If you need to disable triggers, you can do so with the following script.

--Disable triggers on all tables


DECLARE @enable BIT = 0;
DECLARE @trigger SYSNAME;
DECLARE @table SYSNAME;
DECLARE @cmd NVARCHAR(MAX);
DECLARE trigger_cursor CURSOR FOR SELECT trigger_object.name trigger_name,
table_object.name table_name
FROM sysobjects trigger_object
JOIN sysobjects table_object ON trigger_object.parent_obj = table_object.id
WHERE trigger_object.type = 'TR';

OPEN trigger_cursor;
FETCH NEXT FROM trigger_cursor INTO @trigger, @table;

WHILE @@FETCH_STATUS = 0 BEGIN


IF @enable = 1
SET @cmd = 'ENABLE ';
ELSE
SET @cmd = 'DISABLE ';

SET @cmd = @cmd + ' TRIGGER dbo.'+QUOTENAME(@trigger)+' ON dbo.'+QUOTENAME(@table)+' ';
EXEC (@cmd);
FETCH NEXT FROM trigger_cursor INTO @trigger, @table;
END

CLOSE trigger_cursor;
DEALLOCATE trigger_cursor;

GO

8. Query the source SQL Server instance for any logins that you want to import to the destination DB
instance.


SQL Server stores logins and passwords in the master database. Because Amazon RDS doesn't
grant access to the master database, you cannot directly import logins and passwords into your
destination DB instance. Instead, you must query the master database on the source SQL Server
instance to generate a data definition language (DDL) file. This file should include all logins and
passwords that you want to add to the destination DB instance. This file also should include role
memberships and permissions that you want to transfer.

For information about querying the master database, see How to transfer the logins and the
passwords between instances of SQL Server 2005 and SQL Server 2008 in the Microsoft Knowledge
Base.

The output of the script is another script that you can run on the destination DB instance. The script
in the Knowledge Base article has the following code:

p.type IN

Every place p.type appears, use the following code instead:

p.type = 'S'

9. Import the data using the method in Import the data (p. 1440).
10. Grant applications access to the target DB instance.

When your data import is complete, you can grant access to the DB instance to those applications
that you blocked during the import. For information about controlling access to your DB instance,
see Controlling access with security groups (p. 2680).
11. Enable automated backups on the target DB instance.

For information about automated backups, see Working with backups (p. 591).
12. Enable foreign key constraints.

If you disabled foreign key constraints earlier, you can now enable them with the following script.

--Enable foreign keys on all tables


DECLARE @table_name SYSNAME;
DECLARE @cmd NVARCHAR(MAX);
DECLARE table_cursor CURSOR FOR SELECT name FROM sys.tables;

OPEN table_cursor;
FETCH NEXT FROM table_cursor INTO @table_name;

WHILE @@FETCH_STATUS = 0 BEGIN


SELECT @cmd = 'ALTER TABLE '+QUOTENAME(@table_name)+' CHECK CONSTRAINT ALL';
EXEC (@cmd);
FETCH NEXT FROM table_cursor INTO @table_name;
END

CLOSE table_cursor;
DEALLOCATE table_cursor;

13. Enable indexes, if applicable.
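
If you disabled indexes in step 6 instead of dropping them, you can rebuild them with a script like the
following sketch, which follows the same cursor pattern and the same dbo schema assumption.

--Rebuild indexes on all tables

DECLARE @table_name SYSNAME;
DECLARE @cmd NVARCHAR(MAX);
DECLARE table_cursor CURSOR FOR SELECT name FROM sys.tables;

OPEN table_cursor;
FETCH NEXT FROM table_cursor INTO @table_name;

WHILE @@FETCH_STATUS = 0 BEGIN
SET @cmd = 'ALTER INDEX ALL ON dbo.'+QUOTENAME(@table_name)+' REBUILD';
EXEC (@cmd);
FETCH NEXT FROM table_cursor INTO @table_name;
END

CLOSE table_cursor;
DEALLOCATE table_cursor;

GO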


14. Enable triggers, if applicable.

If you disabled triggers earlier, you can now enable them with the following script.

--Enable triggers on all tables


DECLARE @enable BIT = 1;


DECLARE @trigger SYSNAME;


DECLARE @table SYSNAME;
DECLARE @cmd NVARCHAR(MAX);
DECLARE trigger_cursor CURSOR FOR SELECT trigger_object.name trigger_name,
table_object.name table_name
FROM sysobjects trigger_object
JOIN sysobjects table_object ON trigger_object.parent_obj = table_object.id
WHERE trigger_object.type = 'TR';

OPEN trigger_cursor;
FETCH NEXT FROM trigger_cursor INTO @trigger, @table;

WHILE @@FETCH_STATUS = 0 BEGIN


IF @enable = 1
SET @cmd = 'ENABLE ';
ELSE
SET @cmd = 'DISABLE ';

SET @cmd = @cmd + ' TRIGGER dbo.'+QUOTENAME(@trigger)+' ON dbo.'+QUOTENAME(@table)+' ';
EXEC (@cmd);
FETCH NEXT FROM trigger_cursor INTO @trigger, @table;
END

CLOSE trigger_cursor;
DEALLOCATE trigger_cursor;

Import the data


Microsoft SQL Server Management Studio is a graphical SQL Server client that is included in all Microsoft
SQL Server editions except the Express Edition. SQL Server Management Studio Express is available from
Microsoft as a free download. To find this download, see the Microsoft website.
Note
SQL Server Management Studio is available only as a Windows-based application.

SQL Server Management Studio includes the following tools, which are useful in importing data to a SQL
Server DB instance:

• Generate and Publish Scripts Wizard


• Import and Export Wizard
• Bulk copy

Generate and Publish Scripts Wizard

The Generate and Publish Scripts Wizard creates a script that contains the schema of a database, the
data itself, or both. You can generate a script for a database in your local SQL Server deployment. You
can then run the script to transfer the information that it contains to an Amazon RDS DB instance.
Note
For databases of 1 GiB or larger, it's more efficient to script only the database schema. You then
use the Import and Export Wizard or the bulk copy feature of SQL Server to transfer the data.

For detailed information about the Generate and Publish Scripts Wizard, see the Microsoft SQL Server
documentation.

In the wizard, pay particular attention to the advanced options on the Set Scripting Options page to
ensure that everything you want your script to include is selected. For example, by default, database
triggers are not included in the script.


When the script is generated and saved, you can use SQL Server Management Studio to connect to your
DB instance and then run the script.

Import and Export Wizard

The Import and Export Wizard creates a special Integration Services package, which you can use to copy
data from your local SQL Server database to the destination DB instance. The wizard can filter which
tables and even which tuples within a table are copied to the destination DB instance.
Note
The Import and Export Wizard works well for large datasets, but it might not be the fastest way
to remotely export data from your local deployment. For an even faster way, consider the SQL
Server bulk copy feature.

For detailed information about the Import and Export Wizard, see the Microsoft SQL Server
documentation.

In the wizard, on the Choose a Destination page, do the following:

• For Server Name, type the name of the endpoint for your DB instance.
• For the server authentication mode, choose Use SQL Server Authentication.
• For User name and Password, type the credentials for the master user that you created for the DB
instance.

Bulk copy

The SQL Server bulk copy feature is an efficient means of copying data from a source database to your
DB instance. Bulk copy writes the data that you specify to a data file, such as an ASCII file. You can then
run bulk copy again to write the contents of the file to the destination DB instance.

This section uses the bcp utility, which is included with all editions of SQL Server. For detailed
information about bulk import and export operations, see the Microsoft SQL Server documentation.
Note
Before you use bulk copy, you must first import your database schema to the destination DB
instance. The Generate and Publish Scripts Wizard, described earlier in this topic, is an excellent
tool for this purpose.

The following command connects to the local SQL Server instance. It exports a specified table to a data
file in the C:\ root directory of your existing SQL Server deployment. The table is specified by its fully
qualified name, and the data file has the same name as the table that is being copied.

bcp dbname.schema_name.table_name out C:\table_name.txt -n -S localhost -U username -P password -b 10000

The preceding code includes the following options:

• -n specifies that the bulk copy uses the native data types of the data to be copied.
• -S specifies the SQL Server instance that the bcp utility connects to.
• -U specifies the user name of the account to log in to the SQL Server instance.
• -P specifies the password for the user specified by -U.
• -b specifies the number of rows per batch of imported data.

Note
There might be other parameters that are important to your import situation. For example,
you might need the -E parameter that pertains to identity values. For more information, see


the full description of the command line syntax for the bcp utility in the Microsoft SQL Server
documentation.

For example, suppose that a database named store that uses the default schema, dbo, contains a table
named customers. The user account admin, with the password insecure, copies the customers table to a
file named customers.txt.

bcp store.dbo.customers out C:\customers.txt -n -S localhost -U admin -P insecure -b 10000

After you generate the data file, you can upload the data to your DB instance by using a similar
command. Beforehand, create the database and schema on the target DB instance. Then use the in
argument to specify an input file instead of out to specify an output file. Instead of using localhost to
specify the local SQL Server instance, specify the endpoint of your DB instance. If you use a port other
than 1433, specify that too. The user name and password are the master user and password for your DB
instance. The syntax is as follows.

bcp dbname.schema_name.table_name in C:\table_name.txt -n -S endpoint,port -U master_user_name -P master_user_password -b 10000

To continue the previous example, suppose that the master user name is admin, and the
password is insecure. The endpoint for the DB instance is rds.ckz2kqd4qsn1.us-
east-1.rds.amazonaws.com, and you use port 4080. The command is as follows.

bcp store.dbo.customers in C:\customers.txt -n -S rds.ckz2kqd4qsn1.us-east-1.rds.amazonaws.com,4080 -U admin -P insecure -b 10000

Note
Specify a password other than the prompt shown here as a security best practice.

Exporting data from RDS for SQL Server


You can choose one of the following options to export data from an RDS for SQL Server DB instance:

• Native database backup using a full backup file (.bak) – Using .bak files to back up databases is
heavily optimized, and is usually the fastest way to export data. For more information, see Importing
and exporting SQL Server databases using native backup and restore (p. 1419).
• SQL Server Import and Export Wizard – For more information, see SQL Server Import and Export
Wizard (p. 1442).
• SQL Server Generate and Publish Scripts Wizard and bcp utility – For more information, see SQL
Server Generate and Publish Scripts Wizard and bcp utility (p. 1444).

SQL Server Import and Export Wizard


You can use the SQL Server Import and Export Wizard to copy one or more tables, views, or queries from
your RDS for SQL Server DB instance to another data store. This choice is best if the target data store
is not SQL Server. For more information, see SQL Server Import and Export Wizard in the SQL Server
documentation.

The SQL Server Import and Export Wizard is available as part of Microsoft SQL Server Management
Studio. This graphical SQL Server client is included in all Microsoft SQL Server editions except the
Express Edition. SQL Server Management Studio is available only as a Windows-based application.
SQL Server Management Studio Express is available from Microsoft as a free download. To find this
download, see the Microsoft website.


To use the SQL Server Import and Export Wizard to export data

1. In SQL Server Management Studio, connect to your RDS for SQL Server DB instance. For details
on how to do this, see Connecting to a DB instance running the Microsoft SQL Server database
engine (p. 1380).
2. In Object Explorer, expand Databases, open the context (right-click) menu for the source database,
choose Tasks, and then choose Export Data. The wizard appears.
3. On the Choose a Data Source page, do the following:

a. For Data source, choose SQL Server Native Client 11.0.


b. Verify that the Server name box shows the endpoint of your RDS for SQL Server DB instance.
c. Select Use SQL Server Authentication. For User name and Password, type the master user
name and password of your DB instance.
d. Verify that the Database box shows the database from which you want to export data.
e. Choose Next.
4. On the Choose a Destination page, do the following:

a. For Destination, choose SQL Server Native Client 11.0.


Note
Other target data sources are available. These include .NET Framework data providers,
OLE DB providers, SQL Server Native Client providers, ADO.NET providers, Microsoft
Office Excel, Microsoft Office Access, and the Flat File source. If you choose to
target one of these data sources, skip the remainder of step 4. For details on the
connection information to provide next, see Choose a destination in the SQL Server
documentation.
b. For Server name, type the server name of the target SQL Server DB instance.
c. Choose the appropriate authentication type. Type a user name and password if necessary.
d. For Database, choose the name of the target database, or choose New to create a new database
to contain the exported data.

If you choose New, see Create database in the SQL Server documentation for details on the
database information to provide.
e. Choose Next.
5. On the Table Copy or Query page, choose Copy data from one or more tables or views or Write a
query to specify the data to transfer. Choose Next.
6. If you chose Write a query to specify the data to transfer, you see the Provide a Source Query
page. Type or paste in a SQL query, and then choose Parse to verify it. Once the query validates,
choose Next.
7. On the Select Source Tables and Views page, do the following:

a. Select the tables and views that you want to export, or verify that the query you provided is
selected.
b. Choose Edit Mappings and specify database and column mapping information. For more
information, see Column mappings in the SQL Server documentation.
c. (Optional) To see a preview of data to be exported, select the table, view, or query, and then
choose Preview.
d. Choose Next.
8. On the Run Package page, verify that Run immediately is selected. Choose Next.
9. On the Complete the Wizard page, verify that the data export details are as you expect. Choose
Finish.
10. On the The execution was successful page, choose Close.

SQL Server Generate and Publish Scripts Wizard and bcp utility
You can use the SQL Server Generate and Publish Scripts Wizard to create scripts for an entire database
or just selected objects. You can run these scripts on a target SQL Server DB instance to recreate the
scripted objects. You can then use the bcp utility to bulk export the data for the selected objects to the
target DB instance. This choice is best if you want to move a whole database (including objects other
than tables) or large quantities of data between two SQL Server DB instances. For a full description of
the bcp command-line syntax, see bcp utility in the Microsoft SQL Server documentation.

The SQL Server Generate and Publish Scripts Wizard is available as part of Microsoft SQL Server
Management Studio. This graphical SQL Server client is included in all Microsoft SQL Server editions
except the Express Edition. SQL Server Management Studio is available only as a Windows-based
application. SQL Server Management Studio Express is available from Microsoft as a free download.

To use the SQL Server Generate and Publish Scripts Wizard and the bcp utility to export data

1. In SQL Server Management Studio, connect to your RDS for SQL Server DB instance. For details
on how to do this, see Connecting to a DB instance running the Microsoft SQL Server database
engine (p. 1380).
2. In Object Explorer, expand the Databases node and select the database you want to script.
3. Follow the instructions in Generate and publish scripts Wizard in the SQL Server documentation to
create a script file.
4. In SQL Server Management Studio, connect to your target SQL Server DB instance.
5. With the target SQL Server DB instance selected in Object Explorer, on the File menu choose Open,
choose File, and then open the script file.
6. If you have scripted the entire database, review the CREATE DATABASE statement in the script. Make
sure that the database is being created in the location and with the parameters that you want. For
more information, see CREATE DATABASE in the SQL Server documentation.
7. If you are creating database users in the script, check to see if server logins exist on the target DB
instance for those users. If not, create logins for those users; the scripted commands to create
the database users fail otherwise. For more information, see Create a login in the SQL Server
documentation.
8. Choose !Execute on the SQL Editor menu to run the script file and create the database objects.
When the script finishes, verify that all database objects exist as expected.
9. Use the bcp utility to export data from the RDS for SQL Server DB instance into files. Open a
command prompt and type the following command.

bcp database_name.schema_name.table_name out data_file -n -S aws_rds_sql_endpoint -U username -P password

The preceding code includes the following options:

• table_name is the name of one of the tables that you've recreated in the target database and now
want to populate with data.
• data_file is the full path and name of the data file to be created.
• -n specifies that the bulk copy uses the native data types of the data to be copied.
• -S specifies the SQL Server DB instance to export from.
• -U specifies the user name to use when connecting to the SQL Server DB instance.
• -P specifies the password for the user specified by -U.

The following shows an example command.


bcp world.dbo.city out C:\Users\JohnDoe\city.dat -n -S sql-jdoe.1234abcd.us-west-2.rds.amazonaws.com,1433 -U JohnDoe -P ClearTextPassword

Repeat this step until you have data files for all of the tables you want to export.
10. Prepare your target DB instance for bulk import of data by following the instructions at Basic
guidelines for bulk importing data in the SQL Server documentation.
11. Decide on a bulk import method to use after considering performance and other concerns discussed
in About bulk import and bulk export operations in the SQL Server documentation.
12. Bulk import the data from the data files that you created using the bcp utility. To do so, follow the
instructions at either Import and export bulk data by using the bcp utility or Import bulk data by
using BULK INSERT or OPENROWSET(BULK...) in the SQL Server documentation, depending on what
you decided in step 11.


Working with read replicas for Microsoft SQL Server in Amazon RDS

You usually use read replicas to configure replication between Amazon RDS DB instances. For general
information about read replicas, see Working with DB instance read replicas (p. 438).

In this section, you can find specific information about working with read replicas on Amazon RDS for
SQL Server.

Topics
• Configuring read replicas for SQL Server (p. 1446)
• Read replica limitations with SQL Server (p. 1446)
• Option considerations for RDS for SQL Server replicas (p. 1447)
• Synchronizing database users and objects with a SQL Server read replica (p. 1448)
• Troubleshooting a SQL Server read replica problem (p. 1449)

Configuring read replicas for SQL Server


Before a DB instance can serve as a source instance for replication, you must enable automatic backups
on the source DB instance. To do so, you set the backup retention period to a value other than 0. The
source DB instance must be a Multi-AZ deployment with Always On Availability Groups (AGs). Setting this
type of deployment also enforces that automatic backups are enabled.

Creating a SQL Server read replica doesn't require an outage for the primary DB instance. Amazon RDS
sets the necessary parameters and permissions for the source DB instance and the read replica without
any service interruption. A snapshot is taken of the source DB instance, and this snapshot becomes the
read replica. No outage occurs when you delete a read replica.

You can create up to 15 read replicas from one source DB instance. For replication to operate effectively,
we recommend that you configure each read replica with the same amount of compute and storage
resources as the source DB instance. If you scale the source DB instance, also scale the read replicas.
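
If you work from the AWS CLI rather than the console, a minimal sketch of preparing a source DB instance
and creating a read replica might look like the following. The instance identifiers and the retention period
are placeholders, not values taken from this guide.

# Ensure that automatic backups are enabled and that the source is a Multi-AZ (Always On AGs) deployment.
aws rds modify-db-instance \
    --db-instance-identifier my-source-instance \
    --backup-retention-period 7 \
    --multi-az \
    --apply-immediately

# Create the read replica from the source DB instance.
aws rds create-db-instance-read-replica \
    --db-instance-identifier my-read-replica \
    --source-db-instance-identifier my-source-instance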

The SQL Server DB engine version of the source DB instance and all of its read replicas must be the same.
Amazon RDS upgrades the primary immediately after upgrading the read replicas, regardless of the
maintenance window. For more information about upgrading the DB engine version, see Upgrading the
Microsoft SQL Server DB engine (p. 1414).

For a read replica to receive and apply changes from the source, it should have sufficient compute
and storage resources. If a read replica reaches compute, network, or storage resource capacity, the
read replica stops receiving or applying changes from its source. You can modify the storage and CPU
resources of a read replica independently from its source and other read replicas.

Read replica limitations with SQL Server


The following limitations apply to SQL Server read replicas on Amazon RDS:

• Read replicas are only available on the SQL Server Enterprise Edition (EE) engine.
• Read replicas are available for SQL Server versions 2016–2019.
• The source DB instance to be replicated must be a Multi-AZ deployment with Always On AGs.
• You can create up to 15 read replicas from one source DB instance.
• Read replicas are only available for DB instances running on DB instance classes with four or more
vCPUs.


• The following aren't supported on Amazon RDS for SQL Server:


• Backup retention of read replicas
• Point-in-time recovery from read replicas
• Manual snapshots of read replicas
• Multi-AZ read replicas
• Creating read replicas of read replicas
• Synchronization of user logins to read replicas
• Amazon RDS for SQL Server doesn't intervene to mitigate high replica lag between a source DB
instance and its read replicas. Make sure that the source DB instance and its read replicas are sized
properly, in terms of computing power and storage, to suit their operational load.

Option considerations for RDS for SQL Server replicas


Before you create an RDS for SQL Server replica, consider the following requirements, restrictions, and
recommendations:

• If your SQL Server replica is in the same Region as its source DB instance, make sure that it belongs
to the same option group as the source DB instance. Modifications to the source option group or
source option group membership propagate to replicas. These changes are applied to the replicas
immediately after they are applied to the source DB instance, regardless of the replica's maintenance
window.

For more information about option groups, see Working with option groups (p. 331).
• When you create a SQL Server cross-Region replica, Amazon RDS creates a dedicated option group for
it.

You can't remove a SQL Server cross-Region replica from its dedicated option group. No other DB
instances can use the dedicated option group for a SQL Server cross-Region replica.

The following options are replicated options. To add a replicated option to a SQL Server cross-Region
replica, add the option to the source DB instance's option group. The option is then also installed on all of
the source DB instance's replicas.
• TDE

The following options are non-replicated options. You can add or remove non-replicated options from
a dedicated option group.
• MSDTC
• SQLSERVER_AUDIT
• To enable the SQLSERVER_AUDIT option on a cross-Region read replica, add the SQLSERVER_AUDIT
option to both the dedicated option group of the cross-Region read replica and the source instance's
option group. Adding the SQLSERVER_AUDIT option to the source instance of a SQL Server
cross-Region read replica lets you create Server Level Audit Objects and Server Level Audit
Specifications on each of the source instance's cross-Region read replicas. To allow the cross-Region
read replicas to upload completed audit logs to an Amazon S3 bucket, add the SQLSERVER_AUDIT
option to the dedicated option group and configure the option settings. The Amazon S3 bucket that
you use as a target for audit files must be in the same Region as the cross-Region read replica. You
can modify the SQLSERVER_AUDIT option setting for each cross-Region read replica independently,
so each replica can access an Amazon S3 bucket in its own Region.

The following options are not supported for cross-Region read replicas.
• SSRS
• SSAS

• SSIS

The following options are partially supported for cross-Region read replicas.
• SQLSERVER_BACKUP_RESTORE
• The source DB instance of a SQL Server cross-Region replica can have the
SQLSERVER_BACKUP_RESTORE option, but you can't perform native restores on the
source DB instance until you delete all of its cross-Region replicas. Any existing native restore
tasks are canceled during the creation of a cross-Region replica. You can't add the
SQLSERVER_BACKUP_RESTORE option to a dedicated option group.

For more information on native backup and restore, see Importing and exporting SQL Server
databases using native backup and restore (p. 1419).

When you promote a SQL Server cross-Region read replica, the promoted replica behaves the same as
other SQL Server DB instances, including the management of its options. For more information about
option groups, see Working with option groups (p. 331).

Synchronizing database users and objects with a SQL Server read replica

Any logins, custom server roles, SQL agent jobs, or other server-level objects that exist in the primary
DB instance at the time of creating a read replica are expected to be present in the newly created read
replica. However, any server-level objects that are created in the primary DB instance after the creation
of the read replica will not be automatically replicated, and you must create them manually in the read
replica.

The database users are automatically replicated from the primary DB instance to the read replica. As the
read replica database is in read-only mode, the security identifier (SID) of the database user cannot be
updated in the database. Therefore, when creating SQL logins in the read replica, it's essential to ensure
that the SID of that login matches the SID of the corresponding SQL login in the primary DB instance. If
you don't synchronize the SIDs of the SQL logins, they won't be able to access the database in the read
replica. Windows Active Directory (AD) authenticated logins don't have this issue, because SQL Server
obtains the SID from Active Directory.

To synchronize a SQL login from the primary DB instance to the read replica

1. Connect to the primary DB instance.


2. Create a new SQL login in the primary DB instance.

USE [master]
GO
CREATE LOGIN TestLogin1
WITH PASSWORD = 'REPLACE WITH PASSWORD';

Note
Specify a password other than the prompt shown here as a security best practice.
3. Create a new database user for the SQL login in the database.

USE [REPLACE WITH YOUR DB NAME]


GO
CREATE USER TestLogin1 FOR LOGIN TestLogin1;
GO

4. Check the SID of the newly created SQL login in primary DB instance.


SELECT name, sid FROM sys.server_principals WHERE name = 'TestLogin1';

5. Connect to the read replica. Create the new SQL login.

CREATE LOGIN TestLogin1 WITH PASSWORD = 'REPLACE WITH PASSWORD', SID=[REPLACE WITH sid
FROM STEP #4];

Alternately, if you have access to the read replica database, you can fix the orphaned user as
follows:

1. Connect to the read replica.


2. Identify the orphaned users in the database.

USE [REPLACE WITH YOUR DB NAME]


GO
EXEC sp_change_users_login 'Report';
GO

3. Create a new SQL login for the orphaned database user.

CREATE LOGIN TestLogin1 WITH PASSWORD = 'REPLACE WITH PASSWORD', SID=[REPLACE WITH sid
FROM STEP #2];

Example:

CREATE LOGIN TestLogin1 WITH PASSWORD = 'TestPa$$word#1', SID = 0x1A2B3C4D5E6F7G8H9I0J1K2L3M4N5O6P;

Note
Specify a password other than the prompt shown here as a security best practice.

Troubleshooting a SQL Server read replica problem


You can monitor replication lag in Amazon CloudWatch by viewing the Amazon RDS ReplicaLag
metric. For information about replication lag time, see Monitoring read replication (p. 449).
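
As a hedged example, you can also retrieve recent ReplicaLag values with the AWS CLI; the replica
identifier and the time range shown here are placeholders.

aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name ReplicaLag \
    --dimensions Name=DBInstanceIdentifier,Value=my-read-replica \
    --start-time 2023-05-01T00:00:00Z \
    --end-time 2023-05-01T01:00:00Z \
    --period 60 \
    --statistics Average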

If replication lag is too long, you can use the following query to get information about the lag.

SELECT AR.replica_server_name
, DB_NAME (ARS.database_id) 'database_name'
, AR.availability_mode_desc
, ARS.synchronization_health_desc
, ARS.last_hardened_lsn
, ARS.last_redone_lsn
, ARS.secondary_lag_seconds
FROM sys.dm_hadr_database_replica_states ARS
INNER JOIN sys.availability_replicas AR ON ARS.replica_id = AR.replica_id
--WHERE DB_NAME(ARS.database_id) = 'database_name'
ORDER BY AR.replica_server_name;


Multi-AZ deployments for Amazon RDS for Microsoft SQL Server

Multi-AZ deployments provide increased availability, data durability, and fault tolerance for DB
instances. In the event of planned database maintenance or unplanned service disruption, Amazon
RDS automatically fails over to the up-to-date secondary DB instance. This functionality lets database
operations resume quickly without manual intervention. The primary and standby instances use the
same endpoint, whose physical network address transitions to the secondary replica as part of the
failover process. You don't have to reconfigure your application when a failover occurs.

Amazon RDS supports Multi-AZ deployments for Microsoft SQL Server by using either SQL Server
Database Mirroring (DBM) or Always On Availability Groups (AGs). Amazon RDS monitors and maintains
the health of your Multi-AZ deployment. If problems occur, RDS automatically repairs unhealthy DB
instances, reestablishes synchronization, and initiates failovers. Failover only occurs if the standby and
primary are fully in sync. You don't have to manage anything.

When you set up SQL Server Multi-AZ, RDS automatically configures all databases on the instance to
use DBM or AGs. Amazon RDS handles the primary, the witness, and the secondary DB instance for you.
Because configuration is automatic, RDS selects DBM or Always On AGs based on the version of SQL
Server that you deploy.

Amazon RDS supports Multi-AZ with Always On AGs for the following SQL Server versions and editions:

• SQL Server 2019:


• Standard Edition 15.00.4073.23 and higher
• Enterprise Edition
• SQL Server 2017:
• Standard Edition 14.00.3401.7 and higher
• Enterprise Edition 14.00.3049.1 and higher
• SQL Server 2016: Enterprise Edition 13.00.5216.0 and higher

Amazon RDS supports Multi-AZ with DBM for the following SQL Server versions and editions, except for
the versions noted previously:

• SQL Server 2019: Standard Edition 15.00.4043.16


• SQL Server 2017: Standard and Enterprise Editions
• SQL Server 2016: Standard and Enterprise Editions
• SQL Server 2014: Standard and Enterprise Editions

You can use the following SQL query to determine whether your SQL Server DB instance is Single-AZ,
Multi-AZ with DBM, or Multi-AZ with Always On AGs.

SELECT CASE WHEN dm.mirroring_state_desc IS NOT NULL THEN 'Multi-AZ (Mirroring)'
            WHEN dhdrs.group_database_id IS NOT NULL THEN 'Multi-AZ (AlwaysOn)'
            ELSE 'Single-AZ'
       END 'high_availability'
FROM sys.databases sd
LEFT JOIN sys.database_mirroring dm ON sd.database_id = dm.database_id
LEFT JOIN sys.dm_hadr_database_replica_states dhdrs ON sd.database_id = dhdrs.database_id
     AND dhdrs.is_local = 1
WHERE DB_NAME(sd.database_id) = 'rdsadmin';


The output resembles the following:

high_availability
Multi-AZ (AlwaysOn)

Adding Multi-AZ to a Microsoft SQL Server DB instance

When you create a new SQL Server DB instance using the AWS Management Console, you can add Multi-
AZ with Database Mirroring (DBM) or Always On AGs. You do so by choosing Yes (Mirroring / Always On)
from Multi-AZ deployment. For more information, see Creating an Amazon RDS DB instance (p. 300).

When you modify an existing SQL Server DB instance using the console, you can add Multi-AZ with DBM
or AGs by choosing Yes (Mirroring / Always On) from Multi-AZ deployment on the Modify DB instance
page. For more information, see Modifying an Amazon RDS DB instance (p. 401).
Note
If your DB instance runs Database Mirroring (DBM), not Always On Availability Groups (AGs), you
might need to disable in-memory optimization before you add Multi-AZ. Specifically, disable
in-memory optimization first if your DB instance runs SQL Server 2014, 2016, or 2017 Enterprise
Edition and has in-memory optimization enabled. If your DB instance runs AGs, this step isn't
required.
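
If you manage the instance with the AWS CLI instead of the console, a minimal sketch of adding Multi-AZ
to an existing SQL Server DB instance might look like the following; the instance identifier is a placeholder.

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --multi-az \
    --apply-immediately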

Removing Multi-AZ from a Microsoft SQL Server DB instance

When you modify an existing SQL Server DB instance using the AWS Management Console, you can
remove Multi-AZ with DBM or AGs. You can do this by choosing No (Mirroring / Always On) from Multi-
AZ deployment on the Modify DB instance page. For more information, see Modifying an Amazon RDS
DB instance (p. 401).
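
A corresponding AWS CLI sketch for removing Multi-AZ uses the --no-multi-az option; again, the
instance identifier is a placeholder.

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --no-multi-az \
    --apply-immediately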

Microsoft SQL Server Multi-AZ deployment limitations, notes, and recommendations

The following are some limitations when working with Multi-AZ deployments on RDS for SQL Server DB
instances:

• Cross-Region Multi-AZ isn't supported.


• You can't configure the secondary DB instance to accept database read activity.
• Multi-AZ with Always On Availability Groups (AGs) supports in-memory optimization.
• Multi-AZ with Always On Availability Groups (AGs) doesn't support Kerberos authentication for the
availability group listener. This is because the listener has no Service Principal Name (SPN).
• You can't rename a database on a SQL Server DB instance that is in a SQL Server Multi-AZ deployment.
If you need to rename a database on such an instance, first turn off Multi-AZ for the DB instance, then
rename the database. Finally, turn Multi-AZ back on for the DB instance.
• You can only restore Multi-AZ DB instances that are backed up using the full recovery model.
• Multi-AZ deployments have a limit of 100 SQL Server Agent jobs.

If you need a higher limit, request an increase by contacting AWS Support. Open the AWS Support
Center page, sign in if necessary, and choose Create case. Choose Service limit increase. Complete
and submit the form.


The following are some notes about working with Multi-AZ deployments on RDS for SQL Server DB
instances:

• Amazon RDS exposes the Always On AGs availability group listener endpoint. The endpoint is visible
in the console, and is returned by the DescribeDBInstances API operation as an entry in the
endpoints field.
• Amazon RDS supports availability group multisubnet failovers.
• To use SQL Server Multi-AZ with a SQL Server DB instance in a virtual private cloud (VPC), first create a
DB subnet group that has subnets in at least two distinct Availability Zones, as shown in the sketch after
this list. Then assign the DB subnet group to the primary replica of the SQL Server DB instance.
• When a DB instance is modified to be a Multi-AZ deployment, during the modification it has a status
of modifying. Amazon RDS creates the standby, and makes a backup of the primary DB instance. After
the process is complete, the status of the primary DB instance becomes available.
• Multi-AZ deployments maintain all databases on the same node. If a database on the primary host
fails over, all your SQL Server databases fail over as one atomic unit to your standby host. Amazon RDS
provisions a new healthy host, and replaces the unhealthy host.
• Multi-AZ with DBM or AGs supports a single standby replica.
• Users, logins, and permissions are automatically replicated for you on the secondary. You don't need to
recreate them. User-defined server roles are only replicated in DB instances that use Always On AGs for
Multi-AZ deployments.
• In Multi-AZ deployments, SQL Server Agent jobs are replicated from the primary host to the secondary
host when the job replication feature is turned on. For more information, see Turning on SQL Server
Agent job replication (p. 1617).
• You might observe elevated latencies compared to a standard DB instance deployment (in a single
Availability Zone) because of the synchronous data replication.
• Failover times are affected by the time it takes to complete the recovery process. Large transactions
increase the failover time.
• In SQL Server Multi-AZ deployments, reboot with failover reboots only the primary DB instance. After
the failover, the primary DB instance becomes the new secondary DB instance. Parameters might not
be updated for Multi-AZ instances. For reboot without failover, both the primary and secondary DB
instances reboot, and parameters are updated after the reboot. If the DB instance is unresponsive, we
recommend reboot without failover.
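
For the DB subnet group note earlier in this list, a hedged AWS CLI sketch might look like the following;
the group name, description, and subnet IDs are placeholders.

aws rds create-db-subnet-group \
    --db-subnet-group-name my-sqlserver-subnet-group \
    --db-subnet-group-description "Subnets in two Availability Zones for SQL Server Multi-AZ" \
    --subnet-ids subnet-0abc1234 subnet-0def5678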

The following are some recommendations for working with Multi-AZ deployments on RDS for Microsoft
SQL Server DB instances:

• For databases used in production or preproduction, we recommend the following options:


• Multi-AZ deployments for high availability
• "Provisioned IOPS" for fast, consistent performance
• "Memory optimized" rather than "General purpose"
• You can't select the Availability Zone (AZ) for the secondary instance, so when you deploy application
hosts, take this into account. Your database might fail over to another AZ, and the application hosts
might not be in the same AZ as the database. For this reason, we recommend that you balance your
application hosts across all AZs in the given AWS Region.
• For best performance, don't enable Database Mirroring or Always On AGs during a large data load
operation. If you want your data load to be as fast as possible, finish loading data before you convert
your DB instance to a Multi-AZ deployment.
• Applications that access the SQL Server databases should have exception handling that catches
connection errors. The following code sample shows a try/catch block that catches a communication
error. In this example, the break statement exits the while loop if the connection is successful, but
retries up to 10 times if an exception is thrown.


int RetryMaxAttempts = 10;
int RetryIntervalPeriodInSeconds = 1;
int iRetryCount = 0;
while (iRetryCount < RetryMaxAttempts)
{
    using (SqlConnection connection = new SqlConnection(DatabaseConnString))
    {
        using (SqlCommand command = connection.CreateCommand())
        {
            command.CommandText = "INSERT INTO SOME_TABLE VALUES ('SomeValue');";
            try
            {
                connection.Open();
                command.ExecuteNonQuery();
                break;
            }
            catch (Exception ex)
            {
                Logger(ex.Message);
                iRetryCount++;
            }
            finally
            {
                connection.Close();
            }
        }
    }
    Thread.Sleep(RetryIntervalPeriodInSeconds * 1000);
}

• Don't use the Set Partner Off command when working with Multi-AZ instances. For example, don't
do the following.

--Don't do this
ALTER DATABASE db1 SET PARTNER off

• Don't set the recovery mode to simple. For example, don't do the following.

--Don't do this
ALTER DATABASE db1 SET RECOVERY simple

• Don't use the DEFAULT_DATABASE parameter when creating new logins on Multi-AZ DB instances,
because these settings can't be applied to the standby mirror. For example, don't do the following.

--Don't do this
CREATE LOGIN [test_dba] WITH PASSWORD=foo, DEFAULT_DATABASE=[db2]

Also, don't do the following.

--Don't do this
ALTER LOGIN [test_dba] SET DEFAULT_DATABASE=[db3]

Determining the location of the secondary


You can determine the location of the secondary replica by using the AWS Management Console. You
need to know the location of the secondary if you are setting up your primary DB instance in a VPC.


You can also view the Availability Zone of the secondary using the AWS CLI command describe-db-
instances or RDS API operation DescribeDBInstances. The output shows the secondary AZ where
the standby mirror is located.
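
As a hedged example, the following AWS CLI command returns the secondary Availability Zone for a
Multi-AZ DB instance; the instance identifier is a placeholder.

aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query "DBInstances[0].SecondaryAvailabilityZone"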

Migrating from Database Mirroring to Always On Availability Groups

In version 14.00.3049.1 of Microsoft SQL Server Enterprise Edition, Always On Availability Groups (AGs)
are enabled by default.

To migrate from Database Mirroring (DBM) to AGs, first check your version. If you are using a DB instance
with a version prior to Enterprise Edition 13.00.5216.0, modify the instance to patch it to 13.00.5216.0
or later. If you are using a DB instance with a version prior to Enterprise Edition 14.00.3049.1, modify the
instance to patch it to 14.00.3049.1 or later.

If you want to upgrade a mirrored DB instance to use AGs, run the upgrade first, modify the instance to
remove Multi-AZ, and then modify it again to add Multi-AZ. This converts your instance to use Always On
AGs.


Additional features for Microsoft SQL Server on Amazon RDS

In the following sections, you can find information about augmenting Amazon RDS instances running the
Microsoft SQL Server DB engine.

Topics
• Using SSL with a Microsoft SQL Server DB instance (p. 1456)
• Configuring security protocols and ciphers (p. 1459)
• Integrating an Amazon RDS for SQL Server DB instance with Amazon S3 (p. 1464)
• Using Database Mail on Amazon RDS for SQL Server (p. 1478)
• Instance store support for the tempdb database on Amazon RDS for SQL Server (p. 1489)
• Using extended events with Amazon RDS for Microsoft SQL Server (p. 1491)
• Access to transaction log backups with RDS for SQL Server (p. 1494)


Using SSL with a Microsoft SQL Server DB instance


You can use Secure Sockets Layer (SSL) to encrypt connections between your client applications and your
Amazon RDS DB instances running Microsoft SQL Server. SSL support is available in all AWS Regions for
all supported SQL Server editions.

When you create a SQL Server DB instance, Amazon RDS creates an SSL certificate for it. The SSL
certificate includes the DB instance endpoint as the Common Name (CN) for the SSL certificate to guard
against spoofing attacks.

There are two ways to use SSL to connect to your SQL Server DB instance:

• Force SSL for all connections — this happens transparently to the client, and the client doesn't have to
do any work to use SSL.
• Encrypt specific connections — this sets up an SSL connection from a specific client computer, and you
must do work on the client to encrypt connections.

For information about Transport Layer Security (TLS) support for SQL Server, see TLS 1.2 support for
Microsoft SQL Server.

Forcing connections to your DB instance to use SSL


You can force all connections to your DB instance to use SSL. If you force connections to use SSL, it
happens transparently to the client, and the client doesn't have to do any work to use SSL.

If you want to force SSL, use the rds.force_ssl parameter. By default, the rds.force_ssl
parameter is set to 0 (off). Set the rds.force_ssl parameter to 1 (on) to force connections to use
SSL. The rds.force_ssl parameter is static, so after you change the value, you must reboot your DB
instance for the change to take effect.

To force all connections to your DB instance to use SSL

1. Determine the parameter group that is attached to your DB instance:

a. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
b. In the top right corner of the Amazon RDS console, choose the AWS Region of your DB instance.
c. In the navigation pane, choose Databases, and then choose the name of your DB instance to
show its details.
d. Choose the Configuration tab. Find the Parameter group in the section.
2. If necessary, create a new parameter group. If your DB instance uses the default parameter group,
you must create a new parameter group. If your DB instance uses a nondefault parameter group, you
can choose to edit the existing parameter group or to create a new parameter group. If you edit an
existing parameter group, the change affects all DB instances that use that parameter group.

To create a new parameter group, follow the instructions in Creating a DB parameter group (p. 350).
3. Edit your new or existing parameter group to set the rds.force_ssl parameter to 1 (on). To
edit the parameter group, follow the instructions in Modifying parameters in a DB parameter
group (p. 352).
4. If you created a new parameter group, modify your DB instance to attach the new parameter group.
Modify the DB Parameter Group setting of the DB instance. For more information, see Modifying an
Amazon RDS DB instance (p. 401).
5. Reboot your DB instance. For more information, see Rebooting a DB instance (p. 436).
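
If you prefer the AWS CLI to the console, a minimal sketch of the same workflow might look like the
following. The parameter group name, parameter group family, and instance identifier are placeholders;
the family must match the edition and version of your SQL Server DB instance.

aws rds create-db-parameter-group \
    --db-parameter-group-name sqlserver-force-ssl \
    --db-parameter-group-family "sqlserver-se-15.0" \
    --description "Force SSL connections"

aws rds modify-db-parameter-group \
    --db-parameter-group-name sqlserver-force-ssl \
    --parameters "ParameterName='rds.force_ssl',ParameterValue='1',ApplyMethod=pending-reboot"

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --db-parameter-group-name sqlserver-force-ssl \
    --apply-immediately

aws rds reboot-db-instance \
    --db-instance-identifier mydbinstance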


Encrypting specific connections


You can force all connections to your DB instance to use SSL, or you can encrypt connections from
specific client computers only. To use SSL from a specific client, you must obtain certificates for the client
computer, import certificates on the client computer, and then encrypt the connections from the client
computer.
Note
All SQL Server instances created after August 5, 2014, use the DB instance endpoint in the
Common Name (CN) field of the SSL certificate. Prior to August 5, 2014, SSL certificate
verification was not available for VPC-based SQL Server instances. If you have a VPC-based
SQL Server DB instance that was created before August 5, 2014, and you want to use SSL
certificate verification and ensure that the instance endpoint is included as the CN for the SSL
certificate for that DB instance, then rename the instance. When you rename a DB instance, a
new certificate is deployed and the instance is rebooted to enable the new certificate.

Obtaining certificates for client computers


To encrypt connections from a client computer to an Amazon RDS DB instance running Microsoft SQL
Server, you need a certificate on your client computer.

To obtain that certificate, download the certificate to your client computer. You can download a root
certificate that works for all AWS Regions. You can also download a certificate bundle that contains both the
old and new root certificate. In addition, you can download region-specific intermediate certificates. For
more information about downloading certificates, see Using SSL/TLS to encrypt a connection to a DB
instance (p. 2591).

After you have downloaded the appropriate certificate, import the certificate into your Microsoft
Windows operating system by following the procedure in the section following.

Importing certificates on client computers


You can use the following procedure to import your certificate into the Microsoft Windows operating
system on your client computer.

To import the certificate into your Windows operating system:

1. On the Start menu, type Run in the search box and press Enter.
2. In the Open box, type MMC and then choose OK.
3. In the MMC console, on the File menu, choose Add/Remove Snap-in.
4. In the Add or Remove Snap-ins dialog box, for Available snap-ins, select Certificates, and then
choose Add.
5. In the Certificates snap-in dialog box, choose Computer account, and then choose Next.
6. In the Select computer dialog box, choose Finish.
7. In the Add or Remove Snap-ins dialog box, choose OK.
8. In the MMC console, expand Certificates, open the context (right-click) menu for Trusted Root
Certification Authorities, choose All Tasks, and then choose Import.
9. On the first page of the Certificate Import Wizard, choose Next.
10. On the second page of the Certificate Import Wizard, choose Browse. In the browse window, change
the file type to All files (*.*) because .pem is not a standard certificate extension. Locate the .pem
file that you downloaded previously.
11. Choose Open to select the certificate file, and then choose Next.
12. On the third page of the Certificate Import Wizard, choose Next.
13. On the fourth page of the Certificate Import Wizard, choose Finish. A dialog box appears indicating
that the import was successful.


14. In the MMC console, expand Certificates, expand Trusted Root Certification Authorities, and then
choose Certificates. Locate the certificate to confirm that it exists.

Encrypting connections to an Amazon RDS DB instance running Microsoft SQL Server

After you have imported a certificate into your client computer, you can encrypt connections from the
client computer to an Amazon RDS DB instance running Microsoft SQL Server.

For SQL Server Management Studio, use the following procedure. For more information about SQL
Server Management Studio, see Use SQL Server Management Studio.

To encrypt connections from SQL Server Management Studio

1. Launch SQL Server Management Studio.


2. For Connect to server, type the server information, login user name, and password.
3. Choose Options.
4. Select Encrypt connection.
5. Choose Connect.
6. Confirm that your connection is encrypted by running the following query. Verify that the query
returns true for encrypt_option.

select ENCRYPT_OPTION from SYS.DM_EXEC_CONNECTIONS where SESSION_ID = @@SPID

For any other SQL client, use the following procedure.

To encrypt connections from other SQL clients

1. Append encrypt=true to your connection string. This string might be available as an option, or as
a property on the connection page in GUI tools.
Note
To enable SSL encryption for clients that connect using JDBC, you might need to add the
Amazon RDS SQL certificate to the Java CA certificate (cacerts) store. You can do this by
using the keytool utility, as in the sketch following this procedure.
2. Confirm that your connection is encrypted by running the following query. Verify that the query
returns true for encrypt_option.

select ENCRYPT_OPTION from SYS.DM_EXEC_CONNECTIONS where SESSION_ID = @@SPID
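
For the JDBC note in the preceding procedure, a hedged sketch of importing a downloaded RDS
certificate into the Java cacerts store with the keytool utility might look like the following. The alias,
certificate file path, keystore path, and keystore password are placeholders (changeit is the common
default password).

keytool -importcert \
    -alias rds-root-ca \
    -file /path/to/rds-ca-root.pem \
    -keystore "$JAVA_HOME/lib/security/cacerts" \
    -storepass changeit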


Configuring security protocols and ciphers


You can turn certain security protocols and ciphers on and off using DB parameters. The security
parameters that you can configure (except for TLS version 1.2) are shown in the following table.

For parameters other than rds.fips, the value of default means that the operating system default
value is used, whether it is enabled or disabled.
Note
You can't disable TLS 1.2, because Amazon RDS uses it internally.

DB parameter: rds.tls10
Allowed values: default, enabled, disabled. Default: default.
Description: TLS 1.0.

DB parameter: rds.tls11
Allowed values: default, enabled, disabled. Default: default.
Description: TLS 1.1.

DB parameter: rds.tls12
Allowed values: default.
Description: TLS 1.2. You can't modify this value.

DB parameter: rds.fips
Allowed values: 0, 1. Default: 0.
Description: When you set the parameter to 1, RDS forces the use of modules that are compliant with
the Federal Information Processing Standard (FIPS) 140-2 standard. For more information, see Use SQL
Server 2016 in FIPS 140-2-compliant mode in the Microsoft documentation.
Note
You must reboot the DB instance after the modification to make it effective.

DB parameter: rds.rc4
Allowed values: default, enabled, disabled. Default: default.
Description: RC4 stream cipher.

DB parameter: rds.diffie-hellman
Allowed values: default, enabled, disabled. Default: default.
Description: Diffie-Hellman key-exchange encryption.

DB parameter: rds.diffie-hellman-min-key-bit-length
Allowed values: default, 1024, 2048, 4096. Default: default.
Description: Minimum bit length for Diffie-Hellman keys.

DB parameter: rds.curve25519
Allowed values: default, enabled, disabled. Default: default.
Description: Curve25519 elliptic-curve encryption cipher. This parameter isn't supported for all engine
versions.

DB parameter: rds.3des168
Allowed values: default, enabled, disabled. Default: default.
Description: Triple Data Encryption Standard (DES) encryption cipher with a 168-bit key length.


Note
For more information on the default values for SQL Server security protocols and ciphers,
see Protocols in TLS/SSL (Schannel SSP) and Cipher Suites in TLS/SSL (Schannel SSP) in the
Microsoft documentation.
For more information on viewing and setting these values in the Windows Registry, see
Transport Layer Security (TLS) best practices with the .NET Framework in the Microsoft
documentation.

Use the following process to configure the security protocols and ciphers:

1. Create a custom DB parameter group.


2. Modify the parameters in the parameter group.
3. Associate the DB parameter group with your DB instance.

For more information on DB parameter groups, see Working with parameter groups (p. 347).

Creating the security-related parameter group


Create a parameter group for your security-related parameters that corresponds to the SQL Server
edition and version of your DB instance.

Console

The following procedure creates a parameter group for SQL Server Standard Edition 2016.

To create the parameter group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. Choose Create parameter group.
4. In the Create parameter group pane, do the following:

a. For Parameter group family, choose sqlserver-se-13.0.


b. For Group name, enter an identifier for the parameter group, such as sqlserver-ciphers-
se-13.
c. For Description, enter Parameter group for security protocols and ciphers.
5. Choose Create.

CLI

The following procedure creates a parameter group for SQL Server Standard Edition 2016.

To create the parameter group

• Run one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds create-db-parameter-group \
    --db-parameter-group-name sqlserver-ciphers-se-13 \
    --db-parameter-group-family "sqlserver-se-13.0" \
    --description "Parameter group for security protocols and ciphers"

For Windows:

aws rds create-db-parameter-group ^
    --db-parameter-group-name sqlserver-ciphers-se-13 ^
    --db-parameter-group-family "sqlserver-se-13.0" ^
    --description "Parameter group for security protocols and ciphers"

Modifying security-related parameters


Modify the security-related parameters in the parameter group that corresponds to the SQL Server
edition and version of your DB instance.

Console

The following procedure modifies the parameter group that you created for SQL Server Standard Edition
2016. This example turns off TLS version 1.0.

To modify the parameter group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. Choose the parameter group, such as sqlserver-ciphers-se-13.
4. Under Parameters, filter the parameter list for rds.
5. Choose Edit parameters.
6. Choose rds.tls10.
7. For Values, choose disabled.
8. Choose Save changes.

CLI

The following procedure modifies the parameter group that you created for SQL Server Standard Edition
2016. This example turns off TLS version 1.0.

To modify the parameter group

• Run one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds modify-db-parameter-group \
    --db-parameter-group-name sqlserver-ciphers-se-13 \
    --parameters "ParameterName='rds.tls10',ParameterValue='disabled',ApplyMethod=pending-reboot"

For Windows:

aws rds modify-db-parameter-group ^
    --db-parameter-group-name sqlserver-ciphers-se-13 ^
    --parameters "ParameterName='rds.tls10',ParameterValue='disabled',ApplyMethod=pending-reboot"

Associating the security-related parameter group with your DB instance

To associate the parameter group with your DB instance, use the AWS Management Console or the AWS
CLI.

Console

You can associate the parameter group with a new or existing DB instance:

• For a new DB instance, associate it when you launch the instance. For more information, see Creating
an Amazon RDS DB instance (p. 300).
• For an existing DB instance, associate it by modifying the instance. For more information, see
Modifying an Amazon RDS DB instance (p. 401).

CLI

You can associate the parameter group with a new or existing DB instance.

To create a DB instance with the parameter group

• Specify the same DB engine type and major version as you used when creating the parameter group.

Example

For Linux, macOS, or Unix:

aws rds create-db-instance \
    --db-instance-identifier mydbinstance \
    --db-instance-class db.m5.2xlarge \
    --engine sqlserver-se \
    --engine-version 13.00.5426.0.v1 \
    --allocated-storage 100 \
    --master-user-password secret123 \
    --master-username admin \
    --storage-type gp2 \
    --license-model li \
    --db-parameter-group-name sqlserver-ciphers-se-13

For Windows:

aws rds create-db-instance ^
    --db-instance-identifier mydbinstance ^
    --db-instance-class db.m5.2xlarge ^
    --engine sqlserver-se ^
    --engine-version 13.00.5426.0.v1 ^
    --allocated-storage 100 ^
    --master-user-password secret123 ^
    --master-username admin ^
    --storage-type gp2 ^
    --license-model li ^
    --db-parameter-group-name sqlserver-ciphers-se-13


Note
Specify a password other than the prompt shown here as a security best practice.

To modify a DB instance and associate the parameter group

• Run one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --db-parameter-group-name sqlserver-ciphers-se-13 \
    --apply-immediately

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --db-parameter-group-name sqlserver-ciphers-se-13 ^
    --apply-immediately


Integrating an Amazon RDS for SQL Server DB instance with Amazon S3

You can transfer files between a DB instance running Amazon RDS for SQL Server and an Amazon
S3 bucket. By doing this, you can use Amazon S3 with SQL Server features such as BULK INSERT. For
example, you can download .csv, .xml, .txt, and other files from Amazon S3 to the DB instance host and
import the data from D:\S3\ into the database. All files are stored in D:\S3\ on the DB instance.

The following limitations apply:

• Files in the D:\S3 folder are deleted on the standby replica after a failover on Multi-AZ instances. For
more information, see Multi-AZ limitations for S3 integration (p. 1476).
• The DB instance and the S3 bucket must be in the same AWS Region.
• If you run more than one S3 integration task at a time, the tasks run sequentially, not in parallel.
Note
S3 integration tasks share the same queue as native backup and restore tasks. At maximum,
you can have only two tasks in progress at any time in this queue. Therefore, two running
native backup and restore tasks will block any S3 integration tasks.
• You must re-enable the S3 integration feature on restored instances. S3 integration isn't propagated
from the source instance to the restored instance. Files in D:\S3 are deleted on a restored instance.
• Downloading to the DB instance is limited to 100 files. In other words, there can't be more than 100
files in D:\S3\.
• Only files without file extensions or with the following file extensions are supported for
download: .abf, .asdatabase, .bcp, .configsettings, .csv, .dat, .deploymentoptions, .deploymenttargets, .fmt, .info, .ispac, .lst, .tbl, .txt, .xml,
and .xmla.
• The S3 bucket must have the same owner as the related AWS Identity and Access Management (IAM)
role. Therefore, cross-account S3 integration isn't supported.
• The S3 bucket can't be open to the public.
• The file size for uploads from RDS to S3 is limited to 50 GB per file.
• The file size for downloads from S3 to RDS is limited to the maximum supported by S3.

Topics
• Prerequisites for integrating RDS for SQL Server with S3 (p. 1465)
• Enabling RDS for SQL Server integration with S3 (p. 1470)
• Transferring files between RDS for SQL Server and Amazon S3 (p. 1471)
• Listing files on the RDS DB instance (p. 1473)
• Deleting files on the RDS DB instance (p. 1473)
• Monitoring the status of a file transfer task (p. 1474)
• Canceling a task (p. 1476)
• Multi-AZ limitations for S3 integration (p. 1476)
• Disabling RDS for SQL Server integration with S3 (p. 1476)

For more information on working with files in Amazon S3, see Getting started with Amazon Simple
Storage Service.


Prerequisites for integrating RDS for SQL Server with S3


Before you begin, find or create the S3 bucket that you want to use. Also, add permissions so that the
RDS DB instance can access the S3 bucket. To configure this access, you create both an IAM policy and an
IAM role.

Console

To create an IAM policy for access to Amazon S3

1. In the IAM Management Console, choose Policies in the navigation pane.


2. Create a new policy, and use the Visual editor tab for the following steps.
3. For Service, enter S3 and then choose the S3 service.
4. For Actions, choose the following to grant the access that your DB instance requires:

• ListAllMyBuckets – required
• ListBucket – required
• GetBucketACL – required
• GetBucketLocation – required
• GetObject – required for downloading files from S3 to D:\S3\
• PutObject – required for uploading files from D:\S3\ to S3
• ListMultipartUploadParts – required for uploading files from D:\S3\ to S3
• AbortMultipartUpload – required for uploading files from D:\S3\ to S3
5. For Resources, the options that display depend on which actions you choose in the previous step.
You might see options for bucket, object, or both. For each of these, add the appropriate Amazon
Resource Name (ARN).

For bucket, add the ARN for the bucket that you want to use. For example, if your bucket is named
example-bucket, set the ARN to arn:aws:s3:::example-bucket.

For object, enter the ARN for the bucket and then choose one of the following:

• To grant access to all files in the specified bucket, choose Any for both Bucket name and Object
name.
• To grant access to specific files or folders in the bucket, provide ARNs for the specific buckets and
objects that you want SQL Server to access.
6. Follow the instructions in the console until you finish creating the policy.

The preceding is an abbreviated guide to setting up a policy. For more detailed instructions on
creating IAM policies, see Creating IAM policies in the IAM User Guide.

To create an IAM role that uses the IAM policy from the previous procedure

1. In the IAM Management Console, choose Roles in the navigation pane.


2. Create a new IAM role, and choose the following options as they appear in the console:

• AWS service
• RDS
• RDS – Add Role to Database

Then choose Next:Permissions at the bottom.


3. For Attach permissions policies, enter the name of the IAM policy that you previously created. Then
choose the policy from the list.


4. Follow the instructions in the console until you finish creating the role.

The preceding is an abbreviated guide to setting up a role. If you want more detailed instructions on
creating roles, see IAM roles in the IAM User Guide.

AWS CLI

To grant Amazon RDS access to an Amazon S3 bucket, use the following process:

1. Create an IAM policy that grants Amazon RDS access to an S3 bucket.


2. Create an IAM role that Amazon RDS can assume on your behalf to access your S3 buckets.

For more information, see Creating a role to delegate permissions to an IAM user in the IAM User
Guide.
3. Attach the IAM policy that you created to the IAM role that you created.

To create the IAM policy

Include the appropriate actions to grant the access your DB instance requires:

• ListAllMyBuckets – required
• ListBucket – required
• GetBucketACL – required
• GetBucketLocation – required
• GetObject – required for downloading files from S3 to D:\S3\
• PutObject – required for uploading files from D:\S3\ to S3
• ListMultipartUploadParts – required for uploading files from D:\S3\ to S3
• AbortMultipartUpload – required for uploading files from D:\S3\ to S3

1. The following AWS CLI command creates an IAM policy named rds-s3-integration-policy
with these options. It grants access to a bucket named bucket_name.

Example

For Linux, macOS, or Unix:

aws iam create-policy \
    --policy-name rds-s3-integration-policy \
    --policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketACL",
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::bucket_name"
},
{


"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:ListMultipartUploadParts",
"s3:AbortMultipartUpload"
],
"Resource": "arn:aws:s3:::bucket_name/key_prefix/*"
}
]
}'

For Windows:

Make sure to change the line endings to the ones supported by your interface (^ instead of \). Also,
in Windows, you must escape all double quotes with a \. To avoid the need to escape the quotes in
the JSON, you can save it to a file instead and pass that in as a parameter.

First, create the policy.json file with the following permission policy:

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketACL",
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::bucket_name"
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:ListMultipartUploadParts",
"s3:AbortMultipartUpload"
],
"Resource": "arn:aws:s3:::bucket_name/key_prefix/*"
}
]
}

Then use the following command to create the policy:

aws iam create-policy ^
    --policy-name rds-s3-integration-policy ^
    --policy-document file://file_path/policy.json

2. After the policy is created, note the Amazon Resource Name (ARN) of the policy. You need the ARN
for a later step.


To create the IAM role

• The following AWS CLI command creates the rds-s3-integration-role IAM role for this
purpose.

Example

For Linux, macOS, or Unix:

aws iam create-role \
    --role-name rds-s3-integration-role \
    --assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "rds.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}'

For Windows:

Make sure to change the line endings to the ones supported by your interface (^ instead of \). Also,
in Windows, you must escape all double quotes with a \. To avoid the need to escape the quotes in
the JSON, you can save it to a file instead and pass that in as a parameter.

First, create the assume_role_policy.json file with the following policy:

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"rds.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
}

Then use the following command to create the IAM role:

aws iam create-role ^
    --role-name rds-s3-integration-role ^
    --assume-role-policy-document file://file_path/assume_role_policy.json

Example of using the global condition context key to create the IAM role

We recommend using the aws:SourceArn and aws:SourceAccount global condition context keys
in resource-based policies to limit the service's permissions to a specific resource. This is the most
effective way to protect against the confused deputy problem.


You might use both global condition context keys and have the aws:SourceArn value contain the
account ID. In this case, the aws:SourceAccount value and the account in the aws:SourceArn
value must use the same account ID when used in the same policy statement.

• Use aws:SourceArn if you want cross-service access for a single resource.


• Use aws:SourceAccount if you want to allow any resource in that account to be associated with
the cross-service use.

In the policy, make sure to use the aws:SourceArn global condition context key with the full
Amazon Resource Name (ARN) of the resources accessing the role. For S3 integration, make sure to
include the DB instance ARNs, as shown in the following example.

For Linux, macOS, or Unix:

aws iam create-role \
    --role-name rds-s3-integration-role \
    --assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "rds.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {

"aws:SourceArn":"arn:aws:rds:Region:my_account_ID:db:db_instance_identifier"
}
}
}
]
}'

For Windows:

Add the global condition context key to assume_role_policy.json.

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"rds.amazonaws.com"
]
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {

"aws:SourceArn":"arn:aws:rds:Region:my_account_ID:db:db_instance_identifier"
}
}
}
]
}


To attach the IAM policy to the IAM role

• The following AWS CLI command attaches the policy to the role named rds-s3-integration-
role. Replace your-policy-arn with the policy ARN that you noted in a previous step.

Example

For Linux, macOS, or Unix:

aws iam attach-role-policy \
    --policy-arn your-policy-arn \
    --role-name rds-s3-integration-role

For Windows:

aws iam attach-role-policy ^
    --policy-arn your-policy-arn ^
    --role-name rds-s3-integration-role

Enabling RDS for SQL Server integration with S3


In the following section, you can find how to enable Amazon S3 integration with Amazon RDS for SQL
Server. To work with S3 integration, your DB instance must be associated with the IAM role that you
previously created before you use the S3_INTEGRATION feature-name parameter.
Note
To add an IAM role to a DB instance, the status of the DB instance must be available.

Console

To associate your IAM role with your DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Choose the RDS for SQL Server DB instance name to display its details.
3. On the Connectivity & security tab, in the Manage IAM roles section, choose the IAM role to add for
Add IAM roles to this instance.
4. For Feature, choose S3_INTEGRATION.

5. Choose Add role.


AWS CLI

To add the IAM role to the RDS for SQL Server DB instance

• The following AWS CLI command adds your IAM role to an RDS for SQL Server DB instance named
mydbinstance.

Example

For Linux, macOS, or Unix:

aws rds add-role-to-db-instance \
--db-instance-identifier mydbinstance \
--feature-name S3_INTEGRATION \
--role-arn your-role-arn

For Windows:

aws rds add-role-to-db-instance ^
--db-instance-identifier mydbinstance ^
--feature-name S3_INTEGRATION ^
--role-arn your-role-arn

Replace your-role-arn with the role ARN that you noted in a previous step. S3_INTEGRATION
must be specified for the --feature-name option.

Transferring files between RDS for SQL Server and Amazon S3


You can use Amazon RDS stored procedures to download and upload files between Amazon S3 and your
RDS DB instance. You can also use Amazon RDS stored procedures to list and delete files on the RDS
instance.

The files that you download from and upload to S3 are stored in the D:\S3 folder. This is the only folder
that you can use to access your files. You can organize your files into subfolders, which are created for
you when you include the destination folder during download.

Some of the stored procedures require that you provide the Amazon Resource Name (ARN) of your S3
bucket and file. The format for the ARN is arn:aws:s3:::bucket_name/file_name. Amazon S3
doesn't require an account number or AWS Region in ARNs.

S3 integration tasks run sequentially and share the same queue as native backup and restore tasks.
At maximum, you can have only two tasks in progress at any time in this queue. It can take up to five
minutes for the task to begin processing.

Downloading files from an Amazon S3 bucket to a SQL Server DB instance


To download files from an S3 bucket to an RDS for SQL Server DB instance, use the Amazon RDS stored
procedure msdb.dbo.rds_download_from_s3 with the following parameters.

• @s3_arn_of_file (NVARCHAR, required) – The S3 ARN of the file to download, for example:
  arn:aws:s3:::bucket_name/mydata.csv
• @rds_file_path (NVARCHAR, optional) – The file path for the RDS instance. If not specified, the
  file path is D:\S3\<filename in s3>. RDS supports absolute paths and relative paths. If you want
  to create a subfolder, include it in the file path.
• @overwrite_file (INT, optional, default 0) – Overwrite the existing file: 0 = Don't overwrite,
  1 = Overwrite.

You can download files without a file extension and files with the following file
extensions: .bcp, .csv, .dat, .fmt, .info, .lst, .tbl, .txt, and .xml.
Note
Files with the .ispac file extension are supported for download when SQL Server Integration
Services is enabled. For more information on enabling SSIS, see SQL Server Integration
Services (p. 1562).
Files with the following file extensions are supported for download when SQL Server Analysis
Services is enabled: .abf, .asdatabase, .configsettings, .deploymentoptions, .deploymenttargets,
and .xmla. For more information on enabling SSAS, see SQL Server Analysis Services (p. 1543).

The following example shows the stored procedure to download files from S3.

exec msdb.dbo.rds_download_from_s3
@s3_arn_of_file='arn:aws:s3:::bucket_name/bulk_data.csv',
@rds_file_path='D:\S3\seed_data\data.csv',
@overwrite_file=1;

The example rds_download_from_s3 operation creates a folder named seed_data in D:\S3\, if
the folder doesn't exist yet. Then the example downloads the source file bulk_data.csv from S3 to a
new file named data.csv on the DB instance. If the file previously existed, it's overwritten because the
@overwrite_file parameter is set to 1.

Uploading files from a SQL Server DB instance to an Amazon S3 bucket


To upload files from an RDS for SQL Server DB instance to an S3 bucket, use the Amazon RDS stored
procedure msdb.dbo.rds_upload_to_s3 with the following parameters.

• @s3_arn_of_file (NVARCHAR, required) – The S3 ARN of the file to be created in S3, for example:
  arn:aws:s3:::bucket_name/mydata.csv
• @rds_file_path (NVARCHAR, required) – The file path of the file to upload to S3. Absolute and
  relative paths are supported.
• @overwrite_file (INT, optional) – Overwrite the existing file: 0 = Don't overwrite, 1 = Overwrite.

The following example uploads the file named data.csv from the specified location in D:\S3\seed_data\
to a file new_data.csv in the S3 bucket specified by the ARN.

exec msdb.dbo.rds_upload_to_s3
@rds_file_path='D:\S3\seed_data\data.csv',
@s3_arn_of_file='arn:aws:s3:::bucket_name/new_data.csv',
@overwrite_file=1;

If the file previously existed in S3, it's overwritten because the @overwrite_file parameter is set to 1.

Listing files on the RDS DB instance


To list the files available on the DB instance, use both a stored procedure and a function. First, run the
following stored procedure to gather file details from the files in D:\S3\.

exec msdb.dbo.rds_gather_file_details;

The stored procedure returns the ID of the task. Like other tasks, this stored procedure runs
asynchronously. As soon as the status of the task is SUCCESS, you can use the task ID in the
rds_fn_list_file_details function to list the existing files and directories in D:\S3\, as shown
following.

SELECT * FROM msdb.dbo.rds_fn_list_file_details(TASK_ID);

The rds_fn_list_file_details function returns a table with the following columns.

• filepath – Absolute path of the file (for example, D:\S3\mydata.csv)
• size_in_bytes – File size (in bytes)
• last_modified_utc – Last modification date and time in UTC format
• is_directory – Option that indicates whether the item is a directory (true/false)
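
For example, the following query sketches one way to use this output to list only files (not
directories), newest first. The task ID 42 is only an illustration; replace it with the ID returned by
rds_gather_file_details.

SELECT filepath, size_in_bytes, last_modified_utc
FROM msdb.dbo.rds_fn_list_file_details(42)  -- replace 42 with your task ID
WHERE is_directory = 0                      -- files only, not directories
ORDER BY last_modified_utc DESC;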

Deleting files on the RDS DB instance


To delete the files available on the DB instance, use the Amazon RDS stored procedure
msdb.dbo.rds_delete_from_filesystem with the following parameters.


• @rds_file_path (NVARCHAR, required) – The file path of the file to delete. Absolute and relative
  paths are supported.
• @force_delete (INT, optional, default 0) – To delete a directory, this flag must be included and
  set to 1 (1 = delete a directory). This parameter is ignored if you are deleting a file.

To delete a directory, the @rds_file_path must end with a backslash (\) and @force_delete must be
set to 1.

The following example deletes the file D:\S3\delete_me.txt.

exec msdb.dbo.rds_delete_from_filesystem
@rds_file_path='D:\S3\delete_me.txt';

The following example deletes the directory D:\S3\example_folder\.

exec msdb.dbo.rds_delete_from_filesystem
@rds_file_path='D:\S3\example_folder\',
@force_delete=1;

Monitoring the status of a file transfer task


To track the status of your S3 integration task, call the rds_fn_task_status function. It takes two
parameters. The first parameter should always be NULL because it doesn't apply to S3 integration. The
second parameter accepts a task ID.

To see a list of all tasks, set the first parameter to NULL and the second parameter to 0, as shown in the
following example.

SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,0);

To get a specific task, set the first parameter to NULL and the second parameter to the task ID, as shown
in the following example.

SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,42);

The rds_fn_task_status function returns the following information.

• task_id – The ID of the task.
• task_type – For S3 integration, tasks can have the following task types:
  • DOWNLOAD_FROM_S3
  • UPLOAD_TO_S3
  • LIST_FILES_ON_DISK
  • DELETE_FILES_ON_DISK
• database_name – Not applicable to S3 integration tasks.
• % complete – The progress of the task as a percentage.
• duration(mins) – The amount of time spent on the task, in minutes.
• lifecycle – The status of the task. Possible statuses are the following:
  • CREATED – After you call one of the S3 integration stored procedures, a task is created and the
    status is set to CREATED.
  • IN_PROGRESS – After a task starts, the status is set to IN_PROGRESS. It can take up to five
    minutes for the status to change from CREATED to IN_PROGRESS.
  • SUCCESS – After a task completes, the status is set to SUCCESS.
  • ERROR – If a task fails, the status is set to ERROR. For more information about the error, see
    the task_info column.
  • CANCEL_REQUESTED – After you call rds_cancel_task, the status of the task is set to
    CANCEL_REQUESTED.
  • CANCELLED – After a task is successfully canceled, the status of the task is set to CANCELLED.
• task_info – Additional information about the task. If an error occurs during processing, this
  column contains information about the error.
• last_updated – The date and time that the task status was last updated.
• created_at – The date and time that the task was created.
• S3_object_arn – The ARN of the S3 object downloaded from or uploaded to.
• overwrite_S3_backup_file – Not applicable to S3 integration tasks.
• KMS_master_key_arn – Not applicable to S3 integration tasks.
• filepath – The file path on the RDS DB instance.
• overwrite_file – An option that indicates if an existing file is overwritten.
• task_metadata – Not applicable to S3 integration tasks.
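
As an illustrative sketch (not part of the RDS tooling itself), the following T-SQL polls the function
until the task reaches a terminal state (SUCCESS, ERROR, or CANCELLED). The task ID 42 and the
30-second delay are assumptions; adjust them for your task.

DECLARE @task_id INT = 42;  -- replace with the task ID returned by the stored procedure

WHILE EXISTS (SELECT 1 FROM msdb.dbo.rds_fn_task_status(NULL, @task_id)
              WHERE lifecycle IN ('CREATED', 'IN_PROGRESS', 'CANCEL_REQUESTED'))
BEGIN
    WAITFOR DELAY '00:00:30';  -- wait 30 seconds between checks
END

SELECT task_id, task_type, lifecycle, task_info
FROM msdb.dbo.rds_fn_task_status(NULL, @task_id);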


Canceling a task
To cancel S3 integration tasks, use the msdb.dbo.rds_cancel_task stored procedure with the
task_id parameter. Delete and list tasks that are in progress can't be canceled. The following example
shows a request to cancel a task.

exec msdb.dbo.rds_cancel_task @task_id = 1234;

To get an overview of all tasks and their task IDs, use the rds_fn_task_status function as described
in Monitoring the status of a file transfer task (p. 1474).

Multi-AZ limitations for S3 integration


On Multi-AZ instances, files in the D:\S3 folder are deleted on the standby replica after a failover. A
failover can be planned, for example, during DB instance modifications such as changing the instance
class or upgrading the engine version. Or a failover can be unplanned, during an outage of the primary.
Note
We don't recommend using the D:\S3 folder for file storage. The best practice is to upload
created files to Amazon S3 to make them durable, and download files when you need to import
data.

To determine the last failover time, you can use the msdb.dbo.rds_failover_time stored procedure.
For more information, see Determining the last failover time (p. 1612).
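
The stored procedure takes no parameters. For example, the following call returns the columns shown in
the example output that follows.

exec msdb.dbo.rds_failover_time;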

Example of no recent failover

This example shows the output when there is no recent failover in the error logs. No failover has
happened since 2020-04-29 23:59:00.01.

Therefore, all files downloaded after that time that haven't been deleted using the
rds_delete_from_filesystem stored procedure are still accessible on the current host. Files
downloaded before that time might also be available.

errorlog_available_from recent_failover_time

2020-04-29 23:59:00.0100000 null

Example of recent failover

This example shows the output when there is a failover in the error logs. The most recent failover was at
2020-05-05 18:57:51.89.

All files downloaded after that time that haven't been deleted using the
rds_delete_from_filesystem stored procedure are still accessible on the current host.

errorlog_available_from recent_failover_time

2020-04-29 23:59:00.0100000 2020-05-05 18:57:51.8900000

Disabling RDS for SQL Server integration with S3


Following, you can find how to disable Amazon S3 integration with Amazon RDS for SQL Server. Files in
D:\S3\ aren't deleted when you disable S3 integration.


Note
To remove an IAM role from a DB instance, the status of the DB instance must be available.

Console

To disassociate your IAM role from your DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Choose the RDS for SQL Server DB instance name to display its details.
3. On the Connectivity & security tab, in the Manage IAM roles section, choose the IAM role to
remove.
4. Choose Delete.

AWS CLI

To remove the IAM role from the RDS for SQL Server DB instance

• The following AWS CLI command removes the IAM role from an RDS for SQL Server DB instance
named mydbinstance.

Example

For Linux, macOS, or Unix:

aws rds remove-role-from-db-instance \
--db-instance-identifier mydbinstance \
--feature-name S3_INTEGRATION \
--role-arn your-role-arn

For Windows:

aws rds remove-role-from-db-instance ^
--db-instance-identifier mydbinstance ^
--feature-name S3_INTEGRATION ^
--role-arn your-role-arn

Replace your-role-arn with the ARN of the IAM role to remove. S3_INTEGRATION must be specified for the --feature-name option.


Using Database Mail on Amazon RDS for SQL Server


You can use Database Mail to send email messages to users from your Amazon RDS on SQL Server
database instance. The messages can contain files and query results. Database Mail includes the
following components:

• Configuration and security objects – These objects create profiles and accounts, and are stored in the
msdb database.
• Messaging objects – These objects include the sp_send_dbmail stored procedure used to send
messages, and data structures that hold information about messages. They're stored in the msdb
database.
• Logging and auditing objects – Database Mail writes logging information to the msdb database and
the Microsoft Windows application event log.
• Database Mail executable – DatabaseMail.exe reads from a queue in the msdb database and sends
email messages.

RDS supports Database Mail for all SQL Server versions on the Web, Standard, and Enterprise Editions.

Limitations
The following limitations apply to using Database Mail on your SQL Server DB instance:

• Database Mail isn't supported for SQL Server Express Edition.


• Modifying Database Mail configuration parameters isn't supported. To see the preset (default) values,
use the sysmail_help_configure_sp stored procedure, as shown in the example following this list.
• File attachments aren't fully supported. For more information, see Working with file
attachments (p. 1487).
• The maximum file attachment size is 1 MB.
• Database Mail requires additional configuration on Multi-AZ DB instances. For more information, see
Considerations for Multi-AZ deployments (p. 1488).
• Configuring SQL Server Agent to send email messages to predefined operators isn't supported.
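
For example, the following call shows the preset Database Mail configuration values mentioned in the
list above.

EXECUTE msdb.dbo.sysmail_help_configure_sp;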

Enabling Database Mail


Use the following process to enable Database Mail for your DB instance:

1. Create a new parameter group.


2. Modify the parameter group to set the database mail xps parameter to 1.
3. Associate the parameter group with the DB instance.

Creating the parameter group for Database Mail


Create a parameter group for the database mail xps parameter that corresponds to the SQL Server
edition and version of your DB instance.
Note
You can also modify an existing parameter group. Follow the procedure in Modifying the
parameter that enables Database Mail (p. 1479).

Console

The following example creates a parameter group for SQL Server Standard Edition 2016.


To create the parameter group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. Choose Create parameter group.
4. In the Create parameter group pane, do the following:

a. For Parameter group family, choose sqlserver-se-13.0.
b. For Group name, enter an identifier for the parameter group, such as dbmail-sqlserver-se-13.
c. For Description, enter Database Mail XPs.
5. Choose Create.

CLI

The following example creates a parameter group for SQL Server Standard Edition 2016.

To create the parameter group

• Use one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds create-db-parameter-group \
--db-parameter-group-name dbmail-sqlserver-se-13 \
--db-parameter-group-family "sqlserver-se-13.0" \
--description "Database Mail XPs"

For Windows:

aws rds create-db-parameter-group ^
--db-parameter-group-name dbmail-sqlserver-se-13 ^
--db-parameter-group-family "sqlserver-se-13.0" ^
--description "Database Mail XPs"

Modifying the parameter that enables Database Mail


Modify the database mail xps parameter in the parameter group that corresponds to the SQL Server
edition and version of your DB instance.

To enable Database Mail, set the database mail xps parameter to 1.

Console

The following example modifies the parameter group that you created for SQL Server Standard Edition
2016.

To modify the parameter group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.


3. Choose the parameter group, such as dbmail-sqlserver-se-13.


4. Under Parameters, filter the parameter list for mail.
5. Choose database mail xps.
6. Choose Edit parameters.
7. Enter 1.
8. Choose Save changes.

CLI

The following example modifies the parameter group that you created for SQL Server Standard Edition
2016.

To modify the parameter group

• Use one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds modify-db-parameter-group \
--db-parameter-group-name dbmail-sqlserver-se-13 \
--parameters "ParameterName='database mail xps',ParameterValue=1,ApplyMethod=immediate"

For Windows:

aws rds modify-db-parameter-group ^
--db-parameter-group-name dbmail-sqlserver-se-13 ^
--parameters "ParameterName='database mail xps',ParameterValue=1,ApplyMethod=immediate"

Associating the parameter group with the DB instance


You can use the AWS Management Console or the AWS CLI to associate the Database Mail parameter
group with the DB instance.

Console

You can associate the Database Mail parameter group with a new or existing DB instance.

• For a new DB instance, associate it when you launch the instance. For more information, see Creating
an Amazon RDS DB instance (p. 300).
• For an existing DB instance, associate it by modifying the instance. For more information, see
Modifying an Amazon RDS DB instance (p. 401).

CLI

You can associate the Database Mail parameter group with a new or existing DB instance.

To create a DB instance with the Database Mail parameter group

• Specify the same DB engine type and major version as you used when creating the parameter group.


Example

For Linux, macOS, or Unix:

aws rds create-db-instance \
--db-instance-identifier mydbinstance \
--db-instance-class db.m5.2xlarge \
--engine sqlserver-se \
--engine-version 13.00.5426.0.v1 \
--allocated-storage 100 \
--manage-master-user-password \
--master-username admin \
--storage-type gp2 \
--license-model li \
--db-parameter-group-name dbmail-sqlserver-se-13

For Windows:

aws rds create-db-instance ^
--db-instance-identifier mydbinstance ^
--db-instance-class db.m5.2xlarge ^
--engine sqlserver-se ^
--engine-version 13.00.5426.0.v1 ^
--allocated-storage 100 ^
--manage-master-user-password ^
--master-username admin ^
--storage-type gp2 ^
--license-model li ^
--db-parameter-group-name dbmail-sqlserver-se-13

To modify a DB instance and associate the Database Mail parameter group

• Use one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds modify-db-instance \
--db-instance-identifier mydbinstance \
--db-parameter-group-name dbmail-sqlserver-se-13 \
--apply-immediately

For Windows:

aws rds modify-db-instance ^
--db-instance-identifier mydbinstance ^
--db-parameter-group-name dbmail-sqlserver-se-13 ^
--apply-immediately

Configuring Database Mail


You perform the following tasks to configure Database Mail:

1. Create the Database Mail profile.


2. Create the Database Mail account.


3. Add the Database Mail account to the Database Mail profile.
4. Add users to the Database Mail profile.

Note
To configure Database Mail, make sure that you have execute permission on the stored
procedures in the msdb database.

Creating the Database Mail profile


To create the Database Mail profile, you use the sysmail_add_profile_sp stored procedure. The following
example creates a profile named Notifications.

To create the profile

• Use the following SQL statement.

USE msdb
GO

EXECUTE msdb.dbo.sysmail_add_profile_sp
@profile_name = 'Notifications',
@description = 'Profile used for sending outgoing notifications using
Amazon SES.';
GO

Creating the Database Mail account


To create the Database Mail account, you use the sysmail_add_account_sp stored procedure. The
following example creates an account named SES on an RDS for SQL Server DB instance in a private VPC,
using Amazon Simple Email Service.

Using Amazon SES requires the following parameters:

• @email_address – An Amazon SES verified identity. For more information, see Verified identities in
Amazon SES.
• @mailserver_name – An Amazon SES SMTP endpoint. For more information, see Connecting to an
Amazon SES SMTP endpoint.
• @username – An Amazon SES SMTP user name. For more information, see Obtaining Amazon SES
SMTP credentials.

Don't use an AWS Identity and Access Management user name.


• @password – An Amazon SES SMTP password. For more information, see Obtaining Amazon SES
SMTP credentials.

To create the account

• Use the following SQL statement.

USE msdb
GO

EXECUTE msdb.dbo.sysmail_add_account_sp
@account_name = 'SES',
@description = 'Mail account for sending outgoing notifications.',
@email_address = 'nobody@example.com',
@display_name = 'Automated Mailer',
@mailserver_name = 'vpce-0a1b2c3d4e5f-01234567.email-smtp.us-west-2.vpce.amazonaws.com',
@port = 587,
@enable_ssl = 1,
@username = 'Smtp_Username',
@password = 'Smtp_Password';
GO

Note
Specify credentials other than the prompts shown here as a security best practice.

Adding the Database Mail account to the Database Mail profile


To add the Database Mail account to the Database Mail profile, you use the
sysmail_add_profileaccount_sp stored procedure. The following example adds the SES account to the
Notifications profile.

To add the account to the profile

• Use the following SQL statement.

USE msdb
GO

EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
@profile_name = 'Notifications',
@account_name = 'SES',
@sequence_number = 1;
GO

Adding users to the Database Mail profile


To grant permission for an msdb database principal to use a Database Mail profile, you use the
sysmail_add_principalprofile_sp stored procedure. A principal is an entity that can request SQL
Server resources. The database principal must map to a SQL Server authentication user, a Windows
Authentication user, or a Windows Authentication group.

The following example grants public access to the Notifications profile.

To add a user to the profile

• Use the following SQL statement.

USE msdb
GO

EXECUTE msdb.dbo.sysmail_add_principalprofile_sp
@profile_name = 'Notifications',
@principal_name = 'public',
@is_default = 1;
GO


Amazon RDS stored procedures and functions for Database Mail


Microsoft provides stored procedures for using Database Mail, such as creating, listing, updating, and
deleting accounts and profiles. In addition, RDS provides the following stored procedures and functions
for Database Mail.

• rds_fn_sysmail_allitems – Shows sent messages, including those submitted by other users.
• rds_fn_sysmail_event_log – Shows events, including those for messages submitted by other users.
• rds_fn_sysmail_mailattachments – Shows attachments, including those to messages submitted by
  other users.
• rds_sysmail_control – Starts and stops the mail queue (DatabaseMail.exe process).
• rds_sysmail_delete_mailitems_sp – Deletes email messages sent by all users from the Database Mail
  internal tables.

Sending email messages using Database Mail


You use the sp_send_dbmail stored procedure to send email messages using Database Mail.

Usage

EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'profile_name',
@recipients = 'recipient1@example.com[; recipient2; ... recipientn]',
@subject = 'subject',
@body = 'message_body',
[@body_format = 'HTML'],
[@file_attachments = 'file_path1; file_path2; ... file_pathn'],
[@query = 'SQL_query'],
[@attach_query_result_as_file = 0|1];

The following parameters are required:

• @profile_name – The name of the Database Mail profile from which to send the message.
• @recipients – The semicolon-delimited list of email addresses to which to send the message.
• @subject – The subject of the message.
• @body – The body of the message. You can also use a declared variable as the body.

The following parameters are optional:

• @body_format – This parameter is used with a declared variable to send email in HTML format.
• @file_attachments – The semicolon-delimited list of message attachments. File paths must be
absolute paths.
• @query – A SQL query to run. The query results can be attached as a file or included in the body of the
message.
• @attach_query_result_as_file – Whether to attach the query result as a file. Set to 0 for no, 1
for yes. The default is 0.


Examples
The following examples demonstrate how to send email messages.

Example of sending a message to a single recipient

USE msdb
GO

EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'Notifications',
@recipients = 'nobody@example.com',
@subject = 'Automated DBMail message - 1',
@body = 'Database Mail configuration was successful.';
GO

Example of sending a message to multiple recipients

USE msdb
GO

EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'Notifications',
@recipients = 'recipient1@example.com;recipient2@example.com',
@subject = 'Automated DBMail message - 2',
@body = 'This is a message.';
GO

Example of sending a SQL query result as a file attachment

USE msdb
GO

EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'Notifications',
@recipients = 'nobody@example.com',
@subject = 'Test SQL query',
@body = 'This is a SQL query test.',
@query = 'SELECT * FROM abc.dbo.test',
@attach_query_result_as_file = 1;
GO

Example of sending a message in HTML format

USE msdb
GO

DECLARE @HTML_Body as NVARCHAR(500) = 'Hi, <h4> Heading </h4> </br> See the report. <b>
Regards </b>';

EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'Notifications',
@recipients = 'nobody@example.com',
@subject = 'Test HTML message',
@body = @HTML_Body,
@body_format = 'HTML';
GO


Example of sending a message using a trigger when a specific event occurs in the database

USE AdventureWorks2017
GO
IF OBJECT_ID ('Production.iProductNotification', 'TR') IS NOT NULL
DROP TRIGGER Production.iProductNotification
GO

CREATE TRIGGER iProductNotification ON Production.Product
FOR INSERT
AS
DECLARE @ProductInformation nvarchar(255);
SELECT
@ProductInformation = 'A new product, ' + Name + ', is now available for $' +
CAST(StandardCost AS nvarchar(20)) + '!'
FROM INSERTED i;

EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'Notifications',
@recipients = 'nobody@example.com',
@subject = 'New product information',
@body = @ProductInformation;
GO

Viewing messages, logs, and attachments


You use RDS stored procedures to view messages, event logs, and attachments.

To view all email messages

• Use the following SQL query.

SELECT * FROM msdb.dbo.rds_fn_sysmail_allitems();  --WHERE sent_status='sent' or 'failed' or 'unsent'
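
You can also filter on the sent_status values noted in the comment above, for example to list only
failed messages.

SELECT * FROM msdb.dbo.rds_fn_sysmail_allitems() WHERE sent_status = 'failed';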

To view all email event logs

• Use the following SQL query.

SELECT * FROM msdb.dbo.rds_fn_sysmail_event_log();

To view all email attachments

• Use the following SQL query.

SELECT * FROM msdb.dbo.rds_fn_sysmail_mailattachments();

Deleting messages
You use the rds_sysmail_delete_mailitems_sp stored procedure to delete messages.
Note
RDS automatically deletes mail table items when DBMail history data reaches 1 GB in size, with
a retention period of at least 24 hours.


If you want to keep mail items for a longer period, you can archive them. For more information,
see Create a SQL Server Agent job to archive Database Mail messages and event logs in the
Microsoft documentation.

To delete all email messages

• Use the following SQL statement.

DECLARE @GETDATE datetime
SET @GETDATE = GETDATE();
EXECUTE msdb.dbo.rds_sysmail_delete_mailitems_sp @sent_before = @GETDATE;
GO

To delete all email messages with a particular status

• Use the following SQL statement to delete all failed messages.

DECLARE @GETDATE datetime
SET @GETDATE = GETDATE();
EXECUTE msdb.dbo.rds_sysmail_delete_mailitems_sp @sent_status = 'failed';
GO
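
As a sketch of a longer retention window (the seven-day cutoff is an assumption, not an RDS
requirement), the following variation deletes only messages sent more than seven days ago.

DECLARE @cutoff datetime
SET @cutoff = DATEADD(day, -7, GETDATE());
EXECUTE msdb.dbo.rds_sysmail_delete_mailitems_sp @sent_before = @cutoff;
GO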

Starting the mail queue


You use the rds_sysmail_control stored procedure to start the Database Mail process.
Note
Enabling Database Mail automatically starts the mail queue.

To start the mail queue

• Use the following SQL statement.

EXECUTE msdb.dbo.rds_sysmail_control start;
GO

Stopping the mail queue


You use the rds_sysmail_control stored procedure to stop the Database Mail process.

To stop the mail queue

• Use the following SQL statement.

EXECUTE msdb.dbo.rds_sysmail_control stop;
GO

Working with file attachments


The following file attachment extensions aren't supported in Database Mail messages from RDS on SQL
Server: .ade, .adp, .apk, .appx, .appxbundle, .bat, .bak, .cab, .chm, .cmd, .com, .cpl, .dll, .dmg, .exe, .hta, .inf1, .ins, .isp, .is
and .wsh.


Database Mail uses the Microsoft Windows security context of the current user to control access to files.
Users who log in with SQL Server Authentication can't attach files using the @file_attachments
parameter with the sp_send_dbmail stored procedure. Windows doesn't allow SQL Server to provide
credentials from a remote computer to another remote computer. Therefore, Database Mail can't attach
files from a network share when the command is run from a computer other than the computer running
SQL Server.

However, you can use SQL Server Agent jobs to attach files. For more information on SQL Server Agent,
see Using SQL Server Agent (p. 1617) and SQL Server Agent in the Microsoft documentation.

Considerations for Multi-AZ deployments


When you configure Database Mail on a Multi-AZ DB instance, the configuration isn't automatically
propagated to the secondary. We recommend converting the Multi-AZ instance to a Single-AZ instance,
configuring Database Mail, and then converting the DB instance back to Multi-AZ. Then both the primary
and secondary nodes have the Database Mail configuration.

If you create a read replica from your Multi-AZ instance that has Database Mail configured, the replica
inherits the configuration, but without the password to the SMTP server. Update the Database Mail
account with the password.
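
The following is a minimal sketch of resetting the account's SMTP credentials with the Microsoft
sysmail_update_account_sp stored procedure. It reuses the account settings from the earlier example,
so replace the placeholder values with your own and verify the account settings afterward.

USE msdb
GO

EXECUTE msdb.dbo.sysmail_update_account_sp
@account_name = 'SES',
@description = 'Mail account for sending outgoing notifications.',
@email_address = 'nobody@example.com',
@display_name = 'Automated Mailer',
@mailserver_name = 'vpce-0a1b2c3d4e5f-01234567.email-smtp.us-west-2.vpce.amazonaws.com',
@port = 587,
@enable_ssl = 1,
@username = 'Smtp_Username',
@password = 'Smtp_Password';
GO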


Instance store support for the tempdb database on Amazon RDS for SQL Server
An instance store provides temporary block-level storage for your DB instance. This storage is located on
disks that are physically attached to the host computer. These disks have Non-Volatile Memory Express
(NVMe) instance storage that is based on solid-state drives (SSDs). This storage is optimized for low
latency, very high random I/O performance, and high sequential read throughput.

By placing tempdb data files and tempdb log files on the instance store, you can achieve lower read and
write latencies compared to standard storage based on Amazon EBS.
Note
SQL Server database files and database log files aren't placed on the instance store.

Enabling the instance store


When RDS provisions DB instances with one of the following instance classes, the tempdb database is
automatically placed onto the instance store:

• db.m5d
• db.r5d

To enable the instance store, do one of the following:

• Create a SQL Server DB instance using one of these instance types. For more information, see Creating
an Amazon RDS DB instance (p. 300).
• Modify an existing SQL Server DB instance to use one of them. For more information, see Modifying an
Amazon RDS DB instance (p. 401).

The instance store is available in all AWS Regions where one or more of these instance types are
supported. For more information on the db.m5d and db.r5d instance classes, see DB instance
classes (p. 11). For more information on the instance classes supported by Amazon RDS for SQL Server,
see DB instance class support for Microsoft SQL Server (p. 1358).

File location and size considerations


On instances without an instance store, RDS stores the tempdb data and log files in the D:\rdsdbdata
\DATA directory. Both files start at 8 MB by default.

On instances with an instance store, RDS stores the tempdb data and log files in the T:\rdsdbdata
\DATA directory.

When tempdb has only one data file (tempdb.mdf) and one log file (templog.ldf), templog.ldf
starts at 8 MB by default and tempdb.mdf starts at 80% or more of the instance's storage capacity.
Twenty percent of the storage capacity or 200 GB, whichever is less, is kept free to start. Multiple
tempdb data files split the 80% disk space evenly, while log files always have an 8-MB initial size.

For example, if you modify your DB instance class from db.m5.2xlarge to db.m5d.2xlarge, the size
of tempdb data files increases from 8 MB each to 234 GB in total.
Note
Besides the tempdb data and log files on the instance store (T:\rdsdbdata\DATA), you can
still create extra tempdb data and log files on the data volume (D:\rdsdbdata\DATA). Those
files always have an 8 MB initial size.
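
To check where the tempdb files currently reside and how large they are, you can query the standard
catalog view, as in the following sketch.

SELECT name, physical_name, size * 8 / 1024 AS size_mb  -- size is reported in 8-KB pages
FROM tempdb.sys.database_files;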


Backup considerations
You might need to retain backups for long periods, incurring costs over time. The tempdb data and log
blocks can change very often depending on the workload. This can greatly increase the DB snapshot size.

When tempdb is on the instance store, snapshots don't include temporary files. This means that
snapshot sizes are smaller and consume less of the free backup allocation compared to EBS-only storage.

Disk full errors


If you use all of the available space in the instance store, you might receive errors such as the following:

• The transaction log for database 'tempdb' is full due to 'ACTIVE_TRANSACTION'.


• Could not allocate space for object 'dbo.SORT temporary run storage: 140738941419520' in database
'tempdb' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files,
dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for
existing files in the filegroup.

You can do one or more of the following when the instance store is full:

• Adjust your workload or the way you use tempdb.


• Scale up to use a DB instance class with more NVMe storage.
• Stop using the instance store, and use an instance class with only EBS storage.
• Use a mixed mode by adding secondary data or log files for tempdb on the EBS volume, as shown in the sketch following this list.
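
The following is a minimal sketch of the mixed-mode option, adding a secondary tempdb data file on the
EBS data volume. The file name tempdb2.ndf and the growth settings are assumptions; adjust them for
your workload.

ALTER DATABASE tempdb
ADD FILE (NAME = tempdb2, FILENAME = 'D:\rdsdbdata\DATA\tempdb2.ndf', SIZE = 8MB, FILEGROWTH = 64MB);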

Removing the instance store


To remove the instance store, modify your SQL Server DB instance to use an instance type that doesn't
support instance store, such as db.m5 or db.r5.
Note
When you remove the instance store, the temporary files are moved to the D:\rdsdbdata
\DATA directory and reduced in size to 8 MB.


Using extended events with Amazon RDS for Microsoft SQL Server
You can use extended events in Microsoft SQL Server to capture debugging and troubleshooting
information for Amazon RDS for SQL Server. Extended events replace SQL Trace and Server Profiler,
which have been deprecated by Microsoft. Extended events are similar to profiler traces but with more
granular control on the events being traced. Extended events are supported for SQL Server versions
2014 and later on Amazon RDS. For more information, see Extended events overview in the Microsoft
documentation.

Extended events are turned on automatically for users with master user privileges in Amazon RDS for
SQL Server.

Topics
• Limitations and recommendations (p. 1491)
• Configuring extended events on RDS for SQL Server (p. 1491)
• Considerations for Multi-AZ deployments (p. 1492)
• Querying extended event files (p. 1493)

Limitations and recommendations


When using extended events on RDS for SQL Server, the following limitations apply:

• Extended events are supported only for the Enterprise and Standard Editions.
• You can't alter default extended event sessions.
• Make sure to set the session memory partition mode to NONE.
• Session event retention mode can be either ALLOW_SINGLE_EVENT_LOSS or
ALLOW_MULTIPLE_EVENT_LOSS.
• Event Tracing for Windows (ETW) targets aren't supported.
• Make sure that file targets are in the D:\rdsdbdata\log directory.
• For pair matching targets, set the respond_to_memory_pressure property to 1.
• Ring buffer target memory can't be greater than 4 MB.
• The following actions aren't supported:
• debug_break
• create_dump_all_threads
• create_dump_single_threads
• The rpc_completed event is supported on the following versions and later: 15.0.4083.2, 14.0.3370.1,
13.0.5865.1, 12.0.6433.1, 11.0.7507.2.

Configuring extended events on RDS for SQL Server


On RDS for SQL Server, you can configure the values of certain parameters of extended event sessions.
The following parameters are configurable:

• xe_session_max_memory – Specifies the maximum amount of memory for an event session.
• xe_session_max_event_size – Specifies the maximum memory size for large events.
• xe_session_max_dispatch_latency – Specifies the amount of time that events are held in the buffer
  before being dispatched to the event session.
• xe_file_target_size – Specifies the maximum size of the file target.
• xe_file_retention – Specifies the retention time in days for the .xel files generated by file
  targets.

Note
Setting xe_file_retention to zero causes .xel files to be removed automatically after the
lock on these files is released by SQL Server. The lock is released whenever an .xel file reaches
the size limit set in xe_file_target_size.

You can use the rdsadmin.dbo.rds_show_configuration stored procedure to show the current
values of these parameters. For example, use the following SQL statement to view the current setting of
xe_session_max_memory.

exec rdsadmin..rds_show_configuration 'xe_session_max_memory'

You can use the rdsadmin.dbo.rds_set_configuration stored procedure to modify them. For
example, use the following SQL statement to set xe_session_max_memory to 4 MB.

exec rdsadmin..rds_set_configuration 'xe_session_max_memory', 4
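
The following is a minimal sketch of creating and starting an event session within the constraints
listed earlier (file target under D:\rdsdbdata\log, memory partition mode NONE). The session name
xe_rds_demo and the traced event are assumptions.

CREATE EVENT SESSION xe_rds_demo ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.event_file (SET filename = N'D:\rdsdbdata\log\xe_rds_demo.xel')
WITH (EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, MEMORY_PARTITION_MODE = NONE);
GO

ALTER EVENT SESSION xe_rds_demo ON SERVER STATE = START;
GO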

Considerations for Multi-AZ deployments


When you create an extended event session on a primary DB instance, it doesn't propagate to the
standby replica. You can fail over and create the extended event session on the new primary DB instance.
Or you can remove and then re-add the Multi-AZ configuration to propagate the extended event session
to the standby replica. RDS stops all nondefault extended event sessions on the standby replica, so that
these sessions don't consume resources on the standby. Because of this, after a standby replica becomes
the primary DB instance, make sure to manually start the extended event sessions on the new primary.
Note
This approach applies to both Always On Availability Groups and Database Mirroring.

You can also use a SQL Server Agent job to track the standby replica and start the sessions if the standby
becomes the primary. For example, use the following query in your SQL Server Agent job step to restart
event sessions on a primary DB instance.

BEGIN
IF (DATABASEPROPERTYEX('rdsadmin','Updateability')='READ_WRITE'
AND DATABASEPROPERTYEX('rdsadmin','status')='ONLINE'
AND (DATABASEPROPERTYEX('rdsadmin','Collation') IS NOT NULL OR
DATABASEPROPERTYEX('rdsadmin','IsAutoClose')=1)
)
BEGIN
IF NOT EXISTS (SELECT 1 FROM sys.dm_xe_sessions WHERE name='xe1')
ALTER EVENT SESSION xe1 ON SERVER STATE=START
IF NOT EXISTS (SELECT 1 FROM sys.dm_xe_sessions WHERE name='xe2')
ALTER EVENT SESSION xe2 ON SERVER STATE=START
END
END

This query restarts the event sessions xe1 and xe2 on a primary DB instance if these sessions are in a
stopped state. You can also add a schedule with a convenient interval to this query.


Querying extended event files


You can either use SQL Server Management Studio or the sys.fn_xe_file_target_read_file
function to view data from extended events that use file targets. For more information on this function,
see sys.fn_xe_file_target_read_file (Transact-SQL) in the Microsoft documentation.

Extended event file targets can only write files to the D:\rdsdbdata\log directory on RDS for SQL
Server.

As an example, use the following SQL query to list the contents of all files of extended event sessions
whose names start with xe.

SELECT * FROM sys.fn_xe_file_target_read_file('d:\rdsdbdata\log\xe*', null,null,null);


Access to transaction log backups with RDS for SQL Server
With access to transaction log backups for RDS for SQL Server, you can list the transaction log backup
files for a database and copy them to a target Amazon S3 bucket. By copying transaction log backups in
an Amazon S3 bucket, you can use them in combination with full and differential database backups to
perform point in time database restores. You use RDS stored procedures to set up access to transaction
log backups, list available transaction log backups, and copy them to your Amazon S3 bucket.

Access to transaction log backups provides the following capabilities and benefits:

• List and view the metadata of available transaction log backups for a database on an RDS for SQL
Server DB instance.
• Copy available transaction log backups from RDS for SQL Server to a target Amazon S3 bucket.
• Perform point-in-time restores of databases without the need to restore an entire DB instance. For
more information on restoring a DB instance to a point in time, see Restoring a DB instance to a
specified time (p. 660).

Availability and support


Access to transaction log backups is supported in all AWS Regions. Access to transaction log backups is
available for all editions and versions of Microsoft SQL Server supported on Amazon RDS.

Requirements
The following requirements must be met before enabling access to transaction log backups:

• Automated backups must be enabled on the DB instance and the backup retention must be set to a
value of one or more days. For more information on enabling automated backups and configuring a
retention policy, see Enabling automated backups (p. 593).
• An Amazon S3 bucket must exist in the same account and Region as the source DB instance. Before
enabling access to transaction log backups, choose an existing Amazon S3 bucket or create a new
bucket to use for your transaction log backup files.
• An Amazon S3 bucket permissions policy must be configured as follows to allow Amazon RDS to copy
transaction log files into it:
1. Set the object account ownership property on the bucket to Bucket Owner Preferred.
2. Add the following policy. There will be no policy by default, so use the bucket Access Control Lists
(ACL) to edit the bucket policy and add it.

The following example uses an ARN to specify a resource. We recommend using the SourceArn
and SourceAccount global condition context keys in resource-based trust relationships to limit the
service's permissions to a specific resource. For more information on working with ARNs, see Amazon
resource names (ARNs) and Working with Amazon Resource Names (ARNs) in Amazon RDS (p. 471).

Example of an Amazon S3 permissions policy for access to transaction log backups

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Only allow writes to my bucket with bucket owner full control",

"Effect": "Allow",
"Principal": {
"Service": "backups.rds.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::{customer_bucket}/{customer_path}/*",
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control",
"aws:sourceAccount": "{customer_account}",
"aws:sourceArn": "{db_instance_arn}"
}
}
}
]
}

• An AWS Identity and Access Management (IAM) role to access the Amazon S3 bucket. If you already
have an IAM role, you can use that. You can choose to have a new IAM role created for you when you
add the SQLSERVER_BACKUP_RESTORE option by using the AWS Management Console. Alternatively,
you can create a new one manually. For more information on creating and configuring an IAM role
with SQLSERVER_BACKUP_RESTORE, see Manually creating an IAM role for native backup and
restore (p. 1422).
• The SQLSERVER_BACKUP_RESTORE option must be added to an option group on your DB instance.
For more information on adding the SQLSERVER_BACKUP_RESTORE option, see Support for native
backup and restore in SQL Server (p. 1525).
Note
If your DB instance has storage encryption enabled, the AWS KMS actions and key must
be provided in the IAM role used by the native backup and restore option group.

Optionally, if you intend to use the rds_restore_log stored procedure to perform point in time
database restores, we recommend using the same Amazon S3 path for the native backup and restore
option group and access to transaction log backups. This method ensures that when Amazon RDS
assumes the role from the option group to perform the restore log functions, it has access to retrieve
transaction log backups from the same Amazon S3 path.
• If the DB instance is encrypted, regardless of encryption type (AWS managed key or Customer
managed key), you must provide a Customer managed KMS key in the IAM role and in the
rds_tlog_backup_copy_to_S3 stored procedure.

Limitations and recommendations


Access to transaction log backups has the following limitations and recommendations:

• You can list and copy up to the last seven days of transaction log backups for any DB instance that has
backup retention configured between one and 35 days.
• The Amazon S3 bucket used for access to transaction log backups must exist in the same account and
Region as the source DB instance. Cross-account and cross-region copy is not supported.
• Only one Amazon S3 bucket can be configured as a target to copy transaction log backups into. You
can choose a new target Amazon S3 bucket with the rds_tlog_copy_setup stored procedure. For
more information on choosing a new target Amazon S3 bucket, see Setting up access to transaction
log backups (p. 1496).
• You cannot specify the KMS key when using the rds_tlog_backup_copy_to_S3 stored procedure if
your RDS instance is not enabled for storage encryption.
• Multi-account copying is not supported. The IAM role used for copying will only permit write access to
Amazon S3 buckets within the owner account of the DB instance.
• Only two concurrent tasks of any type may be run on an RDS for SQL Server DB instance.


• Only one copy task can run for a single database at a given time. If you want to copy transaction log
backups for multiple databases on the DB instance, use a separate copy task for each database.
• If you copy a transaction log backup that already exists with the same name in the Amazon S3 bucket,
the existing transaction log backup will be overwritten.
• You can only run the stored procedures that are provided with access to transaction log backups on the
primary DB instance. You can’t run these stored procedures on an RDS for SQL Server read replica or
on a secondary instance of a Multi-AZ DB cluster.
• If the RDS for SQL Server DB instance is rebooted while the rds_tlog_backup_copy_to_S3 stored
procedure is running, the task will automatically restart from the beginning when the DB instance is
back online. Any transaction log backups that had been copied to the Amazon S3 bucket while the task
was running before the reboot will be overwritten.
• The Microsoft SQL Server system databases and the RDSAdmin database cannot be configured for
access to transaction log backups.
• Copying to buckets encrypted by SSE-KMS isn't supported.

Setting up access to transaction log backups


To set up access to transaction log backups, complete the list of requirements in the
Requirements (p. 1494) section, and then run the rds_tlog_copy_setup stored procedure. The
procedure will enable the access to transaction log backups feature at the DB instance level. You don't
need to run it for each individual database on the DB instance.
Important
The database user must be granted the db_owner role within SQL Server on each database to
configure and use the access to transaction log backups feature.

Example usage:

exec msdb.dbo.rds_tlog_copy_setup
@target_s3_arn='arn:aws:s3:::mybucket/myfolder';

The following parameter is required:

• @target_s3_arn – The ARN of the target Amazon S3 bucket to copy transaction log backups files to.

Example of setting an Amazon S3 target bucket:

exec msdb.dbo.rds_tlog_copy_setup @target_s3_arn='arn:aws:s3:::accesstlogs-testbucket/mytestdb1';

To validate the configuration, call the rds_show_configuration stored procedure.

Example of validating the configuration:

exec rdsadmin.dbo.rds_show_configuration @name='target_s3_arn_for_tlog_copy';

To modify access to transaction log backups to point to a different Amazon S3 bucket, you can view the
current Amazon S3 bucket value and re-run the rds_tlog_copy_setup stored procedure using a new
value for the @target_s3_arn.


Example of viewing the existing Amazon S3 bucket configured for access to transaction log
backups

exec rdsadmin.dbo.rds_show_configuration @name='target_s3_arn_for_tlog_copy';

Example of updating to a new target Amazon S3 bucket

exec msdb.dbo.rds_tlog_copy_setup @target_s3_arn='arn:aws:s3:::mynewbucket/mynewfolder';

Listing available transaction log backups


With RDS for SQL Server, transaction log backups are automatically enabled for databases that use the
full recovery model on a DB instance with backup retention set to one or more days. By enabling
access to transaction log backups, up to seven days of those transaction log backups are made available
for you to copy into your Amazon S3 bucket.

After you have enabled access to transaction log backups, you can start using it to list and copy available
transaction log backup files.

Listing transaction log backups

To list all transaction log backups available for an individual database, call the
rds_fn_list_tlog_backup_metadata function. You can use an ORDER BY or a WHERE clause when
calling the function.

Example of listing and filtering available transaction log backup files

SELECT * from msdb.dbo.rds_fn_list_tlog_backup_metadata('mydatabasename');


SELECT * from msdb.dbo.rds_fn_list_tlog_backup_metadata('mydatabasename') WHERE
rds_backup_seq_id = 3507;
SELECT * from msdb.dbo.rds_fn_list_tlog_backup_metadata('mydatabasename') WHERE
backup_file_time_utc > '2022-09-15 20:44:01' ORDER BY backup_file_time_utc DESC;


The rds_fn_list_tlog_backup_metadata function returns the following output:

• db_name (sysname) – The database name provided to list the transaction log backups for.
• db_id (int) – The internal database identifier for the input parameter db_name.
• family_guid (uniqueidentifier) – The unique ID of the original database at creation. This value
  remains the same when the database is restored, even to a different database name.
• rds_backup_seq_id (int) – The ID that RDS uses internally to maintain a sequence number for each
  transaction log backup file.
• backup_file_epoch (bigint) – The epoch time that a transaction log backup file was generated.
• backup_file_time_utc (datetime) – The UTC time-converted value for the backup_file_epoch value.
• starting_lsn (numeric(25,0)) – The log sequence number of the first or oldest log record of a
  transaction log backup file.
• ending_lsn (numeric(25,0)) – The log sequence number of the last or next log record of a
  transaction log backup file.
• is_log_chain_broken (bit) – A boolean value indicating if the log chain is broken between the
  current transaction log backup file and the previous transaction log backup file.
• file_size_bytes (bigint) – The size of the transactional backup set in bytes.
• Error (varchar(4000)) – Error message if the rds_fn_list_tlog_backup_metadata function throws an
  exception. NULL if no exceptions.

Copying transaction log backups


To copy a set of available transaction log backups for an individual database to your Amazon S3 bucket,
call the rds_tlog_backup_copy_to_S3 stored procedure. The rds_tlog_backup_copy_to_S3
stored procedure will initiate a new task to copy transaction log backups.
Note
The rds_tlog_backup_copy_to_S3 stored procedure copies the transaction log backups
without validating the is_log_chain_broken attribute. For this reason, you should
manually confirm an unbroken log chain before running the rds_tlog_backup_copy_to_S3
stored procedure. For further explanation, see Validating the transaction log backup log
chain (p. 1503).
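
For example, the following query (a sketch that uses the documented metadata columns) lists any
backups with a broken log chain for a hypothetical database named mydatabasename.

SELECT rds_backup_seq_id, backup_file_time_utc, is_log_chain_broken
FROM msdb.dbo.rds_fn_list_tlog_backup_metadata('mydatabasename')
WHERE is_log_chain_broken = 1
ORDER BY backup_file_time_utc;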

Example usage of the rds_tlog_backup_copy_to_S3 stored procedure

exec msdb.dbo.rds_tlog_backup_copy_to_S3
@db_name='mydatabasename',
[@kms_key_arn='arn:aws:kms:region:account-id:key/key-id'],
[@backup_file_start_time='2022-09-01 01:00:15'],
[@backup_file_end_time='2022-09-01 21:30:45'],
[@starting_lsn=149000000112100001],
[@ending_lsn=149000000120400001],
[@rds_backup_starting_seq_id=5],
[@rds_backup_ending_seq_id=10];

The following input parameters are available:

• @db_name – The name of the database to copy transaction log backups for.
• @kms_key_arn – The ARN of the KMS key used to encrypt a storage-encrypted DB instance.
• @backup_file_start_time – The UTC timestamp as provided from the [backup_file_time_utc] column of
  the rds_fn_list_tlog_backup_metadata function.
• @backup_file_end_time – The UTC timestamp as provided from the [backup_file_time_utc] column of
  the rds_fn_list_tlog_backup_metadata function.
• @starting_lsn – The log sequence number (LSN) as provided from the [starting_lsn] column of the
  rds_fn_list_tlog_backup_metadata function.
• @ending_lsn – The log sequence number (LSN) as provided from the [ending_lsn] column of the
  rds_fn_list_tlog_backup_metadata function.
• @rds_backup_starting_seq_id – The sequence ID as provided from the [rds_backup_seq_id] column of
  the rds_fn_list_tlog_backup_metadata function.
• @rds_backup_ending_seq_id – The sequence ID as provided from the [rds_backup_seq_id] column of
  the rds_fn_list_tlog_backup_metadata function.

You can specify a set of either the time, LSN, or sequence ID parameters. Only one set of parameters is
required.

You can also specify just a single parameter in any of the sets. For example, by providing a value for only
the backup_file_end_time parameter, all available transaction log backup files prior to that time
within the seven-day limit will be copied to your Amazon S3 bucket.

Following are the valid input parameter combinations for the rds_tlog_backup_copy_to_S3 stored
procedure.

Parameters provided:

exec msdb.dbo.rds_tlog_backup_copy_to_S3
@db_name='testdb1',
@backup_file_start_time='2022-08-23 00:00:00',
@backup_file_end_time='2022-08-30 00:00:00';

Expected result: Copies transaction log backups from the last seven days that exist between the provided
range of backup_file_start_time and backup_file_end_time. In this example, the stored procedure will copy
transaction log backups that were generated between '2022-08-23 00:00:00' and '2022-08-30 00:00:00'.

Parameters provided:

exec msdb.dbo.rds_tlog_backup_copy_to_S3
@db_name='testdb1',
@backup_file_start_time='2022-08-23 00:00:00';

Expected result: Copies transaction log backups from the last seven days, starting from the provided
backup_file_start_time. In this example, the stored procedure will copy transaction log backups from
'2022-08-23 00:00:00' up to the latest transaction log backup.

Parameters provided:

exec msdb.dbo.rds_tlog_backup_copy_to_S3
@db_name='testdb1',
@backup_file_end_time='2022-08-30 00:00:00';

Expected result: Copies transaction log backups from the last seven days, up to the provided
backup_file_end_time. In this example, the stored procedure will copy transaction log backups from
'2022-08-23 00:00:00' up to '2022-08-30 00:00:00'.

Parameters provided:

exec msdb.dbo.rds_tlog_backup_copy_to_S3
@db_name='testdb1',
@starting_lsn=1490000000040007,
@ending_lsn=1490000000050009;

Expected result: Copies transaction log backups that are available from the last seven days and are
between the provided range of the starting_lsn and ending_lsn. In this example, the stored procedure will
copy transaction log backups from the last seven days with an LSN range between 1490000000040007 and
1490000000050009.

Parameters provided:

exec msdb.dbo.rds_tlog_backup_copy_to_S3
@db_name='testdb1',
@starting_lsn=1490000000040007;

Expected result: Copies transaction log backups that are available from the last seven days, beginning
from the provided starting_lsn. In this example, the stored procedure will copy transaction log backups
from LSN 1490000000040007 up to the latest transaction log backup.

Parameters provided:

exec msdb.dbo.rds_tlog_backup_copy_to_S3
@db_name='testdb1',
@ending_lsn=1490000000050009;

Expected result: Copies transaction log backups that are available from the last seven days, up to the
provided ending_lsn. In this example, the stored procedure will copy transaction log backups beginning
from the last seven days up to LSN 1490000000050009.

Parameters provided:

exec msdb.dbo.rds_tlog_backup_copy_to_S3
@db_name='testdb1',
@rds_backup_starting_seq_id=2000,
@rds_backup_ending_seq_id=5000;

Expected result: Copies transaction log backups that are available from the last seven days and exist
between the provided range of rds_backup_starting_seq_id and rds_backup_ending_seq_id. In this example,
the stored procedure will copy transaction log backups from the last seven days within the provided RDS
backup sequence ID range, starting from seq_id 2000 up to seq_id 5000.

Parameters provided:

exec msdb.dbo.rds_tlog_backup_copy_to_S3
@db_name='testdb1',
@rds_backup_starting_seq_id=2000;

Expected result: Copies transaction log backups that are available from the last seven days, beginning
from the provided rds_backup_starting_seq_id. In this example, the stored procedure will copy transaction
log backups beginning from seq_id 2000, up to the latest transaction log backup.

Parameters provided:

exec msdb.dbo.rds_tlog_backup_copy_to_S3
@db_name='testdb1',
@rds_backup_ending_seq_id=5000;

Expected result: Copies transaction log backups that are available from the last seven days, up to the
provided rds_backup_ending_seq_id. In this example, the stored procedure will copy transaction log
backups beginning from the last seven days, up to seq_id 5000.

Parameters provided:

exec msdb.dbo.rds_tlog_backup_copy_to_S3
@db_name='testdb1',
@rds_backup_starting_seq_id=2000,
@rds_backup_ending_seq_id=2000;

Expected result: Copies a single transaction log backup with the provided rds_backup_starting_seq_id, if
available within the last seven days. In this example, the stored procedure will copy a single transaction
log backup that has a seq_id of 2000, if it exists within the last seven days.

Validating the transaction log backup log chain


Databases configured for access to transaction log backups must have automated backup retention
enabled. Automated backup retention sets the databases on the DB instance to the FULL recovery
model. To support point in time restore for a database, avoid changing the database recovery model,
which can result in a broken log chain. We recommend keeping the database set to the FULL recovery
model.

To manually validate the log chain before copying transaction log backups, call the
rds_fn_list_tlog_backup_metadata function and review the values in the
is_log_chain_broken column. A value of "1" indicates the log chain was broken between the current
log backup and the previous log backup.

The following example shows a broken log chain in the output from the
rds_fn_list_tlog_backup_metadata function.

In a normal log chain, the log sequence number (LSN) value of first_lsn for a given rds_sequence_id
should match the value of last_lsn in the preceding rds_sequence_id. In this example, the rds_sequence_id
of 45 has a first_lsn value of 90987, which does not match the last_lsn value of 90985 for the preceding
rds_sequence_id 44.
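
As a rough sketch, you can script this check against the function output. The following query assumes
that the function accepts the database name as its argument and is invoked through the msdb.dbo schema,
like the stored procedures in this section, and that it exposes the rds_backup_seq_id, starting_lsn,
ending_lsn, and is_log_chain_broken columns described earlier; confirm the exact signature and column
names against the output on your own DB instance.

-- Illustrative only: list the available backup files for a database and flag any broken chain.
-- Replace 'mydatabasename' with the name of your database.
SELECT rds_backup_seq_id,
       starting_lsn,
       ending_lsn,
       is_log_chain_broken
FROM msdb.dbo.rds_fn_list_tlog_backup_metadata('mydatabasename')
ORDER BY rds_backup_seq_id;
-- Any row with is_log_chain_broken = 1 marks a break between that backup file and the
-- previous one; avoid copying a range that spans that boundary.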

For more information about SQL Server transaction log architecture and log sequence numbers, see
Transaction Log Logical Architecture in the Microsoft SQL Server documentation.

Amazon S3 bucket folder and file structure


Transaction log backups have the following standard structure and naming convention within an Amazon
S3 bucket:

• A new folder is created under the target_s3_arn path for each database with the naming structure
as {db_id}.{family_guid}.
• Within the folder, transaction log backups have a filename structure as {db_id}.{family_guid}.
{rds_backup_seq_id}.{backup_file_epoch}.


• You can view the details of family_guid, db_id, rds_backup_seq_id, and backup_file_epoch with the
rds_fn_list_tlog_backup_metadata function.

The following example shows the folder and file structure of a set of transaction log backups within an
Amazon S3 bucket.
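The values below are hypothetical and are shown only to illustrate the naming convention described
above; the actual folder and file names on your bucket come from your own db_id, family_guid,
rds_backup_seq_id, and backup_file_epoch values.

target_s3_arn/
    5.713a692f-7e49-42fb-a8a6-f4e0f4c3b8d7/
        5.713a692f-7e49-42fb-a8a6-f4e0f4c3b8d7.32.1661398312
        5.713a692f-7e49-42fb-a8a6-f4e0f4c3b8d7.33.1661401912
        5.713a692f-7e49-42fb-a8a6-f4e0f4c3b8d7.34.1661405512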

Tracking the status of tasks


To track the status of your copy tasks, call the rds_task_status stored procedure. If you don't provide
any parameters, the stored procedure returns the status of all tasks.

Example usage:

exec msdb.dbo.rds_task_status
@db_name='database_name',
@task_id=ID_number;

The following parameters are optional:

• @db_name – The name of the database to show the task status for.
• @task_id – The ID of the task to show the task status for.

Example of listing the status for a specific task ID:

exec msdb.dbo.rds_task_status @task_id=5;

Example of listing the status for a specific database and task:

exec msdb.dbo.rds_task_status @db_name='my_database', @task_id=5;

Example of listing all tasks and their status for a specific database:

exec msdb.dbo.rds_task_status @db_name='my_database';

Example of listing all tasks and their status on the current DB instance:

exec msdb.dbo.rds_task_status;

Canceling a task
To cancel a running task, call the rds_cancel_task stored procedure.

Example usage:

exec msdb.dbo.rds_cancel_task @task_id=ID_number;

The following parameter is required:

• @task_id – The ID of the task to cancel. You can view the task ID by calling the rds_task_status
stored procedure.

For more information on viewing and canceling running tasks, see Importing and exporting SQL Server
databases using native backup and restore (p. 1419).

Troubleshooting access to transaction log backups


The following are issues you might encounter when you use the stored procedures for access to
transaction log backups.

Stored procedure: rds_tlog_copy_setup
Error message: Backups are disabled on this DB instance. Enable DB instance backups with a retention of at least "1" and try again.
Issue: Automated backups are not enabled for the DB instance.
Troubleshooting suggestions: DB instance backup retention must be enabled with a retention of at least one day. For more information on enabling automated backups and configuring backup retention, see Backup retention period (p. 593).

Stored procedure: rds_tlog_copy_setup
Error message: Error running the rds_tlog_copy_setup stored procedure. Reconnect to the RDS endpoint and try again.
Issue: An internal error occurred.
Troubleshooting suggestions: Reconnect to the RDS endpoint and run the rds_tlog_copy_setup stored procedure again.

Stored procedure: rds_tlog_copy_setup
Error message: Running the rds_tlog_backup_copy_setup stored procedure inside a transaction is not supported. Verify that the session has no open transactions and try again.
Issue: The stored procedure was attempted within a transaction using BEGIN and END.
Troubleshooting suggestions: Avoid using BEGIN and END when running the rds_tlog_copy_setup stored procedure.

Stored procedure: rds_tlog_copy_setup
Error message: The S3 bucket name for the input parameter @target_s3_arn should contain at least one character other than a space.
Issue: An incorrect value was provided for the input parameter @target_s3_arn.
Troubleshooting suggestions: Ensure the input parameter @target_s3_arn specifies the complete Amazon S3 bucket ARN.

Stored procedure: rds_tlog_copy_setup
Error message: The SQLSERVER_BACKUP_RESTORE option isn't enabled or is in the process of being enabled. Enable the option or try again later.
Issue: The SQLSERVER_BACKUP_RESTORE option is not enabled on the DB instance, or was just enabled and is pending internal activation.
Troubleshooting suggestions: Enable the SQLSERVER_BACKUP_RESTORE option as specified in the Requirements section. Wait a few minutes and run the rds_tlog_copy_setup stored procedure again.

Stored procedure: rds_tlog_copy_setup
Error message: The target S3 arn for the input parameter @target_s3_arn can't be empty or null.
Issue: A NULL value was provided for the input parameter @target_s3_arn, or the value wasn't provided.
Troubleshooting suggestions: Ensure the input parameter @target_s3_arn specifies the complete Amazon S3 bucket ARN.

Stored procedure: rds_tlog_copy_setup
Error message: The target S3 arn for the input parameter @target_s3_arn must begin with arn:aws.
Issue: The input parameter @target_s3_arn was provided without arn:aws at the front.
Troubleshooting suggestions: Ensure the input parameter @target_s3_arn specifies the complete Amazon S3 bucket ARN.

Stored procedure: rds_tlog_copy_setup
Error message: The target S3 ARN is already set to the provided value.
Issue: The rds_tlog_copy_setup stored procedure previously ran and was configured with an Amazon S3 bucket ARN.
Troubleshooting suggestions: To modify the Amazon S3 bucket value for access to transaction log backups, provide a different target S3 ARN.

Stored procedure: rds_tlog_copy_setup
Error message: Unable to generate credentials for enabling Access to Transaction Log Backups. Confirm the S3 path ARN provided with rds_tlog_copy_setup, and try again later.
Issue: There was an unspecified error while generating credentials to enable access to transaction log backups.
Troubleshooting suggestions: Review your setup configuration and try again.

Stored procedure: rds_tlog_copy_setup
Error message: You cannot run the rds_tlog_copy_setup stored procedure while there are pending tasks. Wait for the pending tasks to complete and try again.
Issue: Only two tasks may run at any time. There are pending tasks awaiting completion.
Troubleshooting suggestions: View pending tasks and wait for them to complete. For more information on monitoring task status, see Tracking the status of tasks (p. 1504).

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: A T-log backup file copy task has already been issued for database: %s with task Id: %d, please try again later.
Issue: Only one copy task may run at any time for a given database. There is a pending copy task awaiting completion.
Troubleshooting suggestions: View pending tasks and wait for them to complete. For more information on monitoring task status, see Tracking the status of tasks (p. 1504).

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: At least one of these three parameter sets must be provided. SET-1: (@backup_file_start_time, @backup_file_end_time) | SET-2: (@starting_lsn, @ending_lsn) | SET-3: (@rds_backup_starting_seq_id, @rds_backup_ending_seq_id)
Issue: None of the three parameter sets were provided, or a provided parameter set is missing a required parameter.
Troubleshooting suggestions: You can specify either the time, LSN, or sequence ID parameters. One set from these three sets of parameters is required. For more information on required parameters, see Copying transaction log backups (p. 1499).

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: Backups are disabled on your instance. Please enable backups and try again in some time.
Issue: Automated backups are not enabled for the DB instance.
Troubleshooting suggestions: For more information on enabling automated backups and configuring backup retention, see Backup retention period (p. 593).

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: Cannot find the given database %s.
Issue: The value provided for input parameter @db_name does not match a database name on the DB instance.
Troubleshooting suggestions: Use the correct database name. To list all databases by name, run SELECT * from sys.databases.

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: Cannot run the rds_tlog_backup_copy_to_S3 stored procedure for SQL Server system databases or the rdsadmin database.
Issue: The value provided for input parameter @db_name matches a SQL Server system database name or the RDSAdmin database.
Troubleshooting suggestions: The following databases are not allowed to be used with access to transaction log backups: master, model, msdb, tempdb, RDSAdmin.

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: Database name for the input parameter @db_name can't be empty or null.
Issue: The value provided for input parameter @db_name was empty or NULL.
Troubleshooting suggestions: Use the correct database name. To list all databases by name, run SELECT * from sys.databases.

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: DB instance backup retention period must be set to at least 1 to run the rds_tlog_backup_copy_setup stored procedure.
Issue: Automated backups are not enabled for the DB instance.
Troubleshooting suggestions: For more information on enabling automated backups and configuring backup retention, see Backup retention period (p. 593).

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: Error running the stored procedure rds_tlog_backup_copy_to_S3. Reconnect to the RDS endpoint and try again.
Issue: An internal error occurred.
Troubleshooting suggestions: Reconnect to the RDS endpoint and run the rds_tlog_backup_copy_to_S3 stored procedure again.

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: Only one of these three parameter sets can be provided. SET-1: (@backup_file_start_time, @backup_file_end_time) | SET-2: (@starting_lsn, @ending_lsn) | SET-3: (@rds_backup_starting_seq_id, @rds_backup_ending_seq_id)
Issue: Multiple parameter sets were provided.
Troubleshooting suggestions: You can specify either the time, LSN, or sequence ID parameters. One set from these three sets of parameters is required. For more information on required parameters, see Copying transaction log backups (p. 1499).

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: Running the rds_tlog_backup_copy_to_S3 stored procedure inside a transaction is not supported. Verify that the session has no open transactions and try again.
Issue: The stored procedure was attempted within a transaction using BEGIN and END.
Troubleshooting suggestions: Avoid using BEGIN and END when running the rds_tlog_backup_copy_to_S3 stored procedure.

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: The provided parameters fall outside of the transaction backup log retention period. To list available transaction log backup files, run the rds_fn_list_tlog_backup_metadata function.
Issue: There are no available transactional log backups for the provided input parameters that fit in the copy retention window.
Troubleshooting suggestions: Try again with a valid set of parameters. For more information on required parameters, see Copying transaction log backups (p. 1499).

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: There was a permissions error in processing the request. Ensure the bucket is in the same Account and Region as the DB Instance, and confirm the S3 bucket policy permissions against the template in the public documentation.
Issue: There was an issue detected with the provided S3 bucket or its policy permissions.
Troubleshooting suggestions: Confirm that your setup for access to transaction log backups is correct. For more information on setup requirements for your S3 bucket, see Requirements (p. 1494).

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: Running the rds_tlog_backup_copy_to_S3 stored procedure on an RDS read replica instance isn't permitted.
Issue: The stored procedure was attempted on an RDS read replica instance.
Troubleshooting suggestions: Connect to the RDS primary DB instance to run the rds_tlog_backup_copy_to_S3 stored procedure.

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: The LSN for the input parameter @starting_lsn must be less than @ending_lsn.
Issue: The value provided for input parameter @starting_lsn was greater than the value provided for input parameter @ending_lsn.
Troubleshooting suggestions: Ensure the value provided for input parameter @starting_lsn is less than the value provided for input parameter @ending_lsn.

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: The rds_tlog_backup_copy_to_S3 stored procedure can only be performed by the members of db_owner role in the source database.
Issue: The db_owner role has not been granted for the account attempting to run the rds_tlog_backup_copy_to_S3 stored procedure on the provided db_name.
Troubleshooting suggestions: Ensure the account running the stored procedure has been granted the db_owner role for the provided db_name.

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: The sequence ID for the input parameter @rds_backup_starting_seq_id must be less than or equal to @rds_backup_ending_seq_id.
Issue: The value provided for input parameter @rds_backup_starting_seq_id was greater than the value provided for input parameter @rds_backup_ending_seq_id.
Troubleshooting suggestions: Ensure the value provided for input parameter @rds_backup_starting_seq_id is less than or equal to the value provided for input parameter @rds_backup_ending_seq_id.

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: The SQLSERVER_BACKUP_RESTORE option isn't enabled or is in the process of being enabled. Enable the option or try again later.
Issue: The SQLSERVER_BACKUP_RESTORE option is not enabled on the DB instance, or was just enabled and is pending internal activation.
Troubleshooting suggestions: Enable the SQLSERVER_BACKUP_RESTORE option as specified in the Requirements section. Wait a few minutes and run the rds_tlog_backup_copy_to_S3 stored procedure again.

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: The start time for the input parameter @backup_file_start_time must be less than @backup_file_end_time.
Issue: The value provided for input parameter @backup_file_start_time was greater than the value provided for input parameter @backup_file_end_time.
Troubleshooting suggestions: Ensure the value provided for input parameter @backup_file_start_time is less than the value provided for input parameter @backup_file_end_time.

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: We were unable to process the request due to a lack of access. Please check your setup and permissions for the feature.
Issue: There may be an issue with the Amazon S3 bucket permissions, or the Amazon S3 bucket provided is in another account or Region.
Troubleshooting suggestions: Ensure the Amazon S3 bucket policy allows RDS access. Ensure the Amazon S3 bucket is in the same account and Region as the DB instance.

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: You cannot provide a KMS Key ARN as input parameter to the stored procedure for instances that are not storage-encrypted.
Issue: When storage encryption is not enabled on the DB instance, the input parameter @kms_key_arn should not be provided.
Troubleshooting suggestions: Do not provide an input parameter for @kms_key_arn.

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: You must provide a KMS Key ARN as input parameter to the stored procedure for storage encrypted instances.
Issue: When storage encryption is enabled on the DB instance, the input parameter @kms_key_arn must be provided.
Troubleshooting suggestions: Provide an input parameter for @kms_key_arn with a value that matches the ARN of the KMS key used to encrypt the storage of the DB instance.

Stored procedure: rds_tlog_backup_copy_to_S3
Error message: You must run the rds_tlog_copy_setup stored procedure and set the @target_s3_arn, before running the rds_tlog_backup_copy_to_S3 stored procedure.
Issue: The access to transaction log backups setup procedure was not completed before attempting to run the rds_tlog_backup_copy_to_S3 stored procedure.
Troubleshooting suggestions: Run the rds_tlog_copy_setup stored procedure before running the rds_tlog_backup_copy_to_S3 stored procedure. For more information on running the setup procedure for access to transaction log backups, see Setting up access to transaction log backups (p. 1496).


Options for the Microsoft SQL Server database engine

In this section, you can find descriptions for options that are available for Amazon RDS instances running
the Microsoft SQL Server DB engine. To enable these options, you add them to an option group, and
then associate the option group with your DB instance. For more information, see Working with option
groups (p. 331).

If you're looking for optional features that aren't added through RDS option groups (such as SSL,
Microsoft Windows Authentication, and Amazon S3 integration), see Additional features for Microsoft
SQL Server on Amazon RDS (p. 1455).

Amazon RDS supports the following options for Microsoft SQL Server DB instances.

• Linked Servers with Oracle OLEDB (p. 1517) – Option ID: OLEDB_ORACLE. Engine editions: SQL Server
Enterprise Edition, SQL Server Standard Edition.

• Native backup and restore (p. 1525) – Option ID: SQLSERVER_BACKUP_RESTORE. Engine editions: SQL Server
Enterprise Edition, SQL Server Standard Edition, SQL Server Web Edition, SQL Server Express Edition.

• Transparent Data Encryption (p. 1528) – Option ID: TRANSPARENT_DATA_ENCRYPTION (RDS console) or TDE
(AWS CLI and RDS API). Engine editions: SQL Server 2014–2019 Enterprise Edition, SQL Server 2019
Standard Edition.

• SQL Server Audit (p. 1536) – Option ID: SQLSERVER_AUDIT. Engine editions: In RDS, starting with SQL
Server 2014, all editions of SQL Server support server-level audits, and Enterprise Edition also supports
database-level audits. Starting with SQL Server 2016 (13.x) SP1, all editions support both server-level
and database-level audits. For more information, see SQL Server Audit (database engine) in the SQL Server
documentation.

• SQL Server Analysis Services (p. 1543) – Option ID: SSAS. Engine editions: SQL Server Enterprise
Edition, SQL Server Standard Edition.

• SQL Server Integration Services (p. 1562) – Option ID: SSIS. Engine editions: SQL Server Enterprise
Edition, SQL Server Standard Edition.

• SQL Server Reporting Services (p. 1577) – Option ID: SSRS. Engine editions: SQL Server Enterprise
Edition, SQL Server Standard Edition.

• Microsoft Distributed Transaction Coordinator (p. 1590) – Option ID: MSDTC. Engine editions: In RDS,
starting with SQL Server 2014, all editions of SQL Server support distributed transactions.

Listing the available options for SQL Server versions and editions

You can use the describe-option-group-options AWS CLI command to list the available options
for SQL Server versions and editions, and the settings for those options.

The following example shows the options and option settings for SQL Server 2019 Enterprise Edition.
The --engine-name option is required.

aws rds describe-option-group-options --engine-name sqlserver-ee --major-engine-version 15.00

The output resembles the following:

{
"OptionGroupOptions": [
{
"Name": "MSDTC",
"Description": "Microsoft Distributed Transaction Coordinator",
"EngineName": "sqlserver-ee",
"MajorEngineVersion": "15.00",
"MinimumRequiredMinorEngineVersion": "4043.16.v1",
"PortRequired": true,
"DefaultPort": 5000,
"OptionsDependedOn": [],
"OptionsConflictsWith": [],
"Persistent": false,
"Permanent": false,


"RequiresAutoMinorEngineVersionUpgrade": false,
"VpcOnly": false,
"OptionGroupOptionSettings": [
{
"SettingName": "ENABLE_SNA_LU",
"SettingDescription": "Enable support for SNA LU protocol",
"DefaultValue": "true",
"ApplyType": "DYNAMIC",
"AllowedValues": "true,false",
"IsModifiable": true,
"IsRequired": false,
"MinimumEngineVersionPerAllowedValue": []
},
...

{
"Name": "TDE",
"Description": "SQL Server - Transparent Data Encryption",
"EngineName": "sqlserver-ee",
"MajorEngineVersion": "15.00",
"MinimumRequiredMinorEngineVersion": "4043.16.v1",
"PortRequired": false,
"OptionsDependedOn": [],
"OptionsConflictsWith": [],
"Persistent": true,
"Permanent": false,
"RequiresAutoMinorEngineVersionUpgrade": false,
"VpcOnly": false,
"OptionGroupOptionSettings": []
}
]
}


Support for Linked Servers with Oracle OLEDB in Amazon RDS for SQL Server

Linked servers with the Oracle Provider for OLEDB on RDS for SQL Server lets you access external data
sources on an Oracle database. You can read data from remote Oracle data sources and run commands
against remote Oracle database servers outside of your RDS for SQL Server DB instance. Using linked
servers with Oracle OLEDB, you can:

• Directly access data sources other than SQL Server


• Query against diverse Oracle data sources with the same query without moving the data
• Issue distributed queries, updates, commands, and transactions on data sources across an enterprise
ecosystem
• Integrate connections to an Oracle database from within the Microsoft Business Intelligence suite
(SSIS, SSRS, SSAS)
• Migrate from an Oracle database to RDS for SQL Server

You can activate one or more linked servers for Oracle on either an existing or new RDS for SQL Server
DB instance. Then you can integrate external Oracle data sources with your DB instance.

Contents
• Supported versions and Regions (p. 1517)
• Limitations and recommendations (p. 1517)
• Activating linked servers with Oracle (p. 1518)
• Creating the option group for OLEDB_ORACLE (p. 1518)
• Adding the OLEDB_ORACLE option to the option group (p. 1519)
• Associating the option group with your DB instance (p. 1520)
• Modifying OLEDB provider properties (p. 1521)
• Modifying OLEDB driver properties (p. 1522)
• Deactivating linked servers with Oracle (p. 1523)

Supported versions and Regions


RDS for SQL Server supports linked servers with Oracle OLEDB in all Regions for SQL Server Standard
and Enterprise Editions on the following versions:

• SQL Server 2019, all versions


• SQL Server 2017, all versions

Linked servers with Oracle OLEDB is supported for the following Oracle Database versions:

• Oracle Database 21c, all versions


• Oracle Database 19c, all versions
• Oracle Database 18c, all versions

Limitations and recommendations


Keep in mind the following limitations and recommendations that apply to linked servers with Oracle
OLEDB:


• Allow network traffic by adding the applicable TCP port in the security group for each RDS for SQL
Server DB instance. For example, if you’re configuring a linked server between an EC2 Oracle DB
instance and an RDS for SQL Server DB instance, then you must allow traffic from the IP address of
the EC2 Oracle DB instance. You also must allow traffic on the port that SQL Server is using to listen
for database communication. For more information on security groups, see Controlling access with
security groups (p. 2680).
• Reboot the RDS for SQL Server DB instance after turning on, turning off, or modifying the
OLEDB_ORACLE option in your option group. The option group status displays pending_reboot for
these events, and the reboot is required for the change to take effect.
• Only simple authentication is supported with a user name and password for the Oracle data source.
• Open Database Connectivity (ODBC) drivers are not supported. Only the latest version of the OLEDB
driver is supported.
• Distributed transactions (XA) are supported. To activate distributed transactions, turn on the MSDTC
option in the Option Group for your DB instance and make sure XA transactions are turned on. For
more information, see Support for Microsoft Distributed Transaction Coordinator in RDS for SQL
Server (p. 1590).
• Creating data source names (DSNs) to use as a shortcut for a connection string is not supported.
• OLEDB driver tracing is not supported. You can use SQL Server Extended Events to trace OLEDB
events. For more information, see Set up Extended Events in RDS for SQL Server.
• Access to the catalogs folder for an Oracle linked server is not supported using SQL Server
Management Studio (SSMS).

Activating linked servers with Oracle


Activate linked servers with Oracle by adding the OLEDB_ORACLE option to your RDS for SQL Server DB
instance. Use the following process:

1. Create a new option group, or choose an existing option group.


2. Add the OLEDB_ORACLE option to the option group.
3. Choose a version of the OLEDB driver to use.
4. Associate the option group with the DB instance.
5. Reboot the DB instance.

Creating the option group for OLEDB_ORACLE


To work with linked servers with Oracle, create an option group or modify an option group that
corresponds to the SQL Server edition and version of the DB instance that you plan to use. To complete
this procedure, use the AWS Management Console or the AWS CLI.

Console

The following procedure creates an option group for SQL Server Standard Edition 2019.

To create the option group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose Create group.
4. In the Create option group window, do the following:


a. For Name, enter a name for the option group that is unique within your AWS account, such as
oracle-oledb-se-2019. The name can contain only letters, digits, and hyphens.
b. For Description, enter a brief description of the option group, such as OLEDB_ORACLE option
group for SQL Server SE 2019. The description is used for display purposes.
c. For Engine, choose sqlserver-se.
d. For Major engine version, choose 15.00.
5. Choose Create.

CLI

The following procedure creates an option group for SQL Server Standard Edition 2019.

To create the option group

• Run one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds create-option-group \


--option-group-name oracle-oledb-se-2019 \
--engine-name sqlserver-se \
--major-engine-version 15.00 \
--option-group-description "OLEDB_ORACLE option group for SQL Server SE 2019"

For Windows:

aws rds create-option-group ^


--option-group-name oracle-oledb-se-2019 ^
--engine-name sqlserver-se ^
--major-engine-version 15.00 ^
--option-group-description "OLEDB_ORACLE option group for SQL Server SE 2019"

Adding the OLEDB_ORACLE option to the option group


Next, use the AWS Management Console or the AWS CLI to add the OLEDB_ORACLE option to your
option group.

Console

To add the OLEDB_ORACLE option

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group that you just created, which is oracle-oledb-se-2019 in this example.
4. Choose Add option.
5. Under Option details, choose OLEDB_ORACLE for Option name.
6. Under Scheduling, choose whether to add the option immediately or at the next maintenance
window.
7. Choose Add option.


CLI

To add the OLEDB_ORACLE option

• Add the OLEDB_ORACLE option to the option group.

Example
For Linux, macOS, or Unix:

aws rds add-option-to-option-group \


--option-group-name oracle-oledb-se-2019 \
--options OptionName=OLEDB_ORACLE \
--apply-immediately

For Windows:

aws rds add-option-to-option-group ^


--option-group-name oracle-oledb-se-2019 ^
--options OptionName=OLEDB_ORACLE ^
--apply-immediately

Associating the option group with your DB instance


To associate the OLEDB_ORACLE option group and parameter group with your DB instance, use the AWS
Management Console or the AWS CLI.

Console

To finish activating linked servers for Oracle, associate your OLEDB_ORACLE option group with a new or
existing DB instance:

• For a new DB instance, associate them when you launch the instance. For more information, see
Creating an Amazon RDS DB instance (p. 300).
• For an existing DB instance, associate them by modifying the instance. For more information, see
Modifying an Amazon RDS DB instance (p. 401).

CLI

You can associate the OLEDB_ORACLE option group and parameter group with a new or existing DB
instance.

To create an instance with the OLEDB_ORACLE option group and parameter group

• Specify the same DB engine type and major version that you used when creating the option group.

Example
For Linux, macOS, or Unix:

aws rds create-db-instance \


--db-instance-identifier mytestsqlserveroracleoledbinstance \
--db-instance-class db.m5.2xlarge \
--engine sqlserver-se \
--engine-version 15.0.4236.7.v1 \
--allocated-storage 100 \
--manage-master-user-password \


--master-username admin \
--storage-type gp2 \
--license-model li \
--domain-iam-role-name my-directory-iam-role \
--domain my-domain-id \
--option-group-name oracle-oledb-se-2019 \
--db-parameter-group-name my-parameter-group-name

For Windows:

aws rds create-db-instance ^


--db-instance-identifier mytestsqlserveroracleoledbinstance ^
--db-instance-class db.m5.2xlarge ^
--engine sqlserver-se ^
--engine-version 15.0.4236.7.v1 ^
--allocated-storage 100 ^
--manage-master-user-password ^
--master-username admin ^
--storage-type gp2 ^
--license-model li ^
--domain-iam-role-name my-directory-iam-role ^
--domain my-domain-id ^
--option-group-name oracle-oledb-se-2019 ^
--db-parameter-group-name my-parameter-group-name

To modify an instance and associate the OLEDB_ORACLE option group

• Run one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds modify-db-instance \


--db-instance-identifier mytestsqlserveroracleoledbinstance \
--option-group-name oracle-oledb-se-2019 \
--db-parameter-group-name my-parameter-group-name \
--apply-immediately

For Windows:

aws rds modify-db-instance ^


--db-instance-identifier mytestsqlserveroracleoledbinstance ^
--option-group-name oracle-oledb-se-2019 ^
--db-parameter-group-name my-parameter-group-name ^
--apply-immediately

Modifying OLEDB provider properties


You can view and change the properties of the OLEDB provider. Only the master user can perform this
task. All linked servers for Oracle that are created on the DB instance use the same properties of that
OLEDB provider. Call the sp_MSset_oledb_prop stored procedure to change the properties of the
OLEDB provider.

To change the OLEDB provider properties


USE [master]
GO
EXEC sp_MSset_oledb_prop N'OraOLEDB.Oracle', N'AllowInProcess', 1
EXEC sp_MSset_oledb_prop N'OraOLEDB.Oracle', N'DynamicParameters', 0
GO

The following properties can be modified. For the recommended values, 1 = On and 0 = Off.

• Dynamic parameter – Recommended value: 1. Allows SQL placeholders (represented by '?') in
parameterized queries.

• Nested queries – Recommended value: 1. Allows nested SELECT statements in the FROM clause, such as
sub-queries.

• Level zero only – Recommended value: 0. Only base-level OLEDB interfaces are called against the
provider.

• Allow inprocess – Recommended value: 1. If turned on, Microsoft SQL Server allows the provider to be
instantiated as an in-process server. Set this property to 1 to use Oracle linked servers.

• Non transacted updates – Recommended value: 0. If non-zero, SQL Server allows updates.

• Index as access path – Recommended value: False. If non-zero, SQL Server attempts to use indexes of
the provider to fetch data.

• Disallow adhoc access – Recommended value: False. If set, SQL Server does not allow running
pass-through queries against the OLEDB provider. While this option can be checked, it is sometimes
appropriate to run pass-through queries.

• Supports LIKE operator – Recommended value: 1. Indicates that the provider supports queries using the
LIKE keyword.

Modifying OLEDB driver properties


You can view and change the properties of the OLEDB driver when creating a linked server for Oracle.
Only the master user can perform this task. Driver properties define how the OLEDB driver handles
data when working with a remote Oracle data source. Driver properties are specific to each Oracle linked
server created on the DB instance. Call the master.dbo.sp_addlinkedserver stored procedure to
change the properties of the OLEDB driver.

Example: To create a linked server and change the OLEDB driver FetchSize property

EXEC master.dbo.sp_addlinkedserver
@server = N'Oracle_link2',
@srvproduct=N'Oracle',
@provider=N'OraOLEDB.Oracle',
@datasrc=N'my-oracle-test.cnetsipka.us-west-2.rds.amazonaws.com:1521/ORCL',
@provstr='FetchSize=200'
GO


EXEC master.dbo.sp_addlinkedsrvlogin
@rmtsrvname=N'Oracle_link2',
@useself=N'False',
@locallogin=NULL,
@rmtuser=N'master',
@rmtpassword='Test#1234'
GO

Note
Specify a password other than the prompt shown here as a security best practice.
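
After the linked server and login are created, you can verify connectivity with a pass-through query.
The following is a minimal sketch that assumes the Oracle_link2 linked server created above and an
Oracle data dictionary view (all_tables) that the remote login is allowed to read; adjust the remote
query to match your own Oracle schema.

-- Illustrative pass-through query against the linked server created above.
-- Requires the "Allow inprocess" provider property to be turned on, as described earlier.
SELECT *
FROM OPENQUERY(Oracle_link2, 'SELECT owner, table_name FROM all_tables WHERE ROWNUM <= 10');
GO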

Deactivating linked servers with Oracle


To deactivate linked servers with Oracle, remove the OLEDB_ORACLE option from its option group.
Important
Removing the option doesn't delete the existing linked server configurations on the DB instance.
You must manually drop them to remove them from the DB instance.
You can reactivate the OLEDB_ORACLE option after removal to reuse the linked server
configurations that were previously configured on the DB instance.

Console

The following procedure removes the OLEDB_ORACLE option.

To remove the OLEDB_ORACLE option from its option group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group with the OLEDB_ORACLE option (oracle-oledb-se-2019 in the previous
examples).
4. Choose Delete option.
5. Under Deletion options, choose OLEDB_ORACLE for Options to delete.
6. Under Apply immediately, choose Yes to delete the option immediately, or No to delete it during
the next maintenance window.
7. Choose Delete.

CLI

The following procedure removes the OLEDB_ORACLE option.

To remove the OLEDB_ORACLE option from its option group

• Run one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds remove-option-from-option-group \


--option-group-name oracle-oledb-se-2019 \
--options OLEDB_ORACLE \
--apply-immediately

For Windows:


aws rds remove-option-from-option-group ^


--option-group-name oracle-oledb-se-2019 ^
--options OLEDB_ORACLE ^
--apply-immediately


Support for native backup and restore in SQL Server


By using native backup and restore for SQL Server databases, you can create a differential or full
backup of your on-premises database and store the backup files on Amazon S3. You can then restore
to an existing Amazon RDS DB instance running SQL Server. You can also back up an RDS for SQL
Server database, store it on Amazon S3, and restore it in other locations. In addition, you can restore
the backup to an on-premises server, or a different Amazon RDS DB instance running SQL Server.
For more information, see Importing and exporting SQL Server databases using native backup and
restore (p. 1419).

Amazon RDS supports native backup and restore for Microsoft SQL Server databases by using differential
and full backup files (.bak files).

Adding the native backup and restore option


The general process for adding the native backup and restore option to a DB instance is the following:

1. Create a new option group, or copy or modify an existing option group.


2. Add the SQLSERVER_BACKUP_RESTORE option to the option group.
3. Associate an AWS Identity and Access Management (IAM) role with the option. The IAM role must have
access to an S3 bucket to store the database backups.

That is, the option must have as its option setting a valid Amazon Resource Name (ARN) in the format
arn:aws:iam::account-id:role/role-name. For more information, see Amazon Resource
Names (ARNs) in the AWS General Reference.

The IAM role must also have a trust relationship and a permissions policy attached. The trust
relationship allows RDS to assume the role, and the permissions policy defines the actions that the
role can perform. For more information, see Manually creating an IAM role for native backup and
restore (p. 1422).
4. Associate the option group with the DB instance.

After you add the native backup and restore option, you don't need to restart your DB instance. As soon
as the option group is active, you can begin backing up and restoring immediately.
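
For example, once the option group is active, you can push a full backup of a database to Amazon S3 by
calling the native backup stored procedure. The following is a minimal sketch; the database name, bucket
name, and file name are placeholders, and the complete syntax for rds_backup_database and
rds_restore_database is described in Importing and exporting SQL Server databases using native backup
and restore (p. 1419).

-- Illustrative full backup to S3 (placeholder database, bucket, and file names).
exec msdb.dbo.rds_backup_database
    @source_db_name='mydatabase',
    @s3_arn_to_backup_to='arn:aws:s3:::mybucket/backups/mydatabase.bak',
    @overwrite_s3_backup_file=1;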

Console

To add the native backup and restore option

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Create a new option group or use an existing option group. For information on how to create a
custom DB option group, see Creating an option group (p. 332).

To use an existing option group, skip to the next step.


4. Add the SQLSERVER_BACKUP_RESTORE option to the option group. For more information about
adding options, see Adding an option to an option group (p. 335).
5. Do one of the following:

• To use an existing IAM role and Amazon S3 settings, choose an existing IAM role for IAM Role. If
you use an existing IAM role, RDS uses the Amazon S3 settings configured for this role.
• To create a new role and configure Amazon S3 settings, do the following:
1. For IAM role, choose Create a new role.


2. For S3 bucket, choose an S3 bucket from the list.


3. For S3 prefix (optional), specify a prefix to use for the files stored in your Amazon S3 bucket.

This prefix can include a file path but doesn't have to. If you provide a prefix, RDS attaches that
prefix to all backup files. RDS then uses the prefix during a restore to identify related files and
ignore irrelevant files. For example, you might use the S3 bucket for purposes besides holding
backup files. In this case, you can use the prefix to have RDS perform native backup and restore
only on a particular folder and its subfolders.

If you leave the prefix blank, then RDS doesn't use a prefix to identify backup files or files to
restore. As a result, during a multiple-file restore, RDS attempts to restore every file in every
folder of the S3 bucket.
4. Choose the Enable encryption check box to encrypt the backup file. Leave the check box
cleared (the default) to have the backup file unencrypted.

If you chose Enable encryption, choose an encryption key for AWS KMS key. For more
information about encryption keys, see Getting started in the AWS Key Management Service
Developer Guide.
6. Choose Add option.
7. Apply the option group to a new or existing DB instance:

• For a new DB instance, apply the option group when you launch the instance. For more
information, see Creating an Amazon RDS DB instance (p. 300).
• For an existing DB instance, apply the option group by modifying the instance and attaching the
new option group. For more information, see Modifying an Amazon RDS DB instance (p. 401).

CLI

This procedure makes the following assumptions:

• You're adding the SQLSERVER_BACKUP_RESTORE option to an option group that already exists. For
more information about adding options, see Adding an option to an option group (p. 335).
• You're associating the option with an IAM role that already exists and has access to an S3 bucket to
store the backups.
• You're applying the option group to a DB instance that already exists. For more information, see
Modifying an Amazon RDS DB instance (p. 401).

To add the native backup and restore option

1. Add the SQLSERVER_BACKUP_RESTORE option to the option group.

Example

For Linux, macOS, or Unix:

aws rds add-option-to-option-group \


--apply-immediately \
--option-group-name mybackupgroup \
--options "OptionName=SQLSERVER_BACKUP_RESTORE, \
OptionSettings=[{Name=IAM_ROLE_ARN,Value=arn:aws:iam::account-id:role/role-name}]"

For Windows:

aws rds add-option-to-option-group ^
--option-group-name mybackupgroup ^
--options "[{\"OptionName\": \"SQLSERVER_BACKUP_RESTORE\", ^
\"OptionSettings\": [{\"Name\": \"IAM_ROLE_ARN\", ^
\"Value\": \"arn:aws:iam::account-id:role/role-name\"}]}]" ^
--apply-immediately

Note
When using the Windows command prompt, you must escape double quotes (") in JSON
code by prefixing them with a backslash (\).
2. Apply the option group to the DB instance.

Example

For Linux, macOS, or Unix:

aws rds modify-db-instance \


--db-instance-identifier mydbinstance \
--option-group-name mybackupgroup \
--apply-immediately

For Windows:

aws rds modify-db-instance ^


--db-instance-identifier mydbinstance ^
--option-group-name mybackupgroup ^
--apply-immediately

Modifying native backup and restore option settings


After you enable the native backup and restore option, you can modify the settings for the option. For
more information about how to modify option settings, see Modifying an option setting (p. 340).

Removing the native backup and restore option


You can turn off native backup and restore by removing the option from your DB instance. After you
remove the native backup and restore option, you don't need to restart your DB instance.

To remove the native backup and restore option from a DB instance, do one of the following:

• Remove the option from the option group it belongs to. This change affects all DB instances that use
the option group. For more information, see Removing an option from an option group (p. 343).
• Modify the DB instance and specify a different option group that doesn't include the native backup
and restore option. This change affects a single DB instance. You can specify the default (empty)
option group, or a different custom option group. For more information, see Modifying an Amazon
RDS DB instance (p. 401).


Support for Transparent Data Encryption in SQL Server

Amazon RDS supports using Transparent Data Encryption (TDE) to encrypt stored data on your DB
instances running Microsoft SQL Server. TDE automatically encrypts data before it is written to storage,
and automatically decrypts data when the data is read from storage.

Amazon RDS supports TDE for the following SQL Server versions and editions:

• SQL Server 2019 Standard and Enterprise Editions


• SQL Server 2017 Enterprise Edition
• SQL Server 2016 Enterprise Edition
• SQL Server 2014 Enterprise Edition

Transparent Data Encryption for SQL Server provides encryption key management by using a two-tier
key architecture. A certificate, which is generated from the database master key, is used to protect the
data encryption keys. The database encryption key performs the actual encryption and decryption of
data on the user database. Amazon RDS backs up and manages the database master key and the TDE
certificate.

Transparent Data Encryption is used in scenarios where you need to encrypt sensitive data. For example,
you might want to provide data files and backups to a third party, or address security-related regulatory
compliance issues. You can't encrypt the system databases for SQL Server, such as the model or master
databases.

A detailed discussion of Transparent Data Encryption is beyond the scope of this guide, but make sure
that you understand the security strengths and weaknesses of each encryption algorithm and key. For
information about Transparent Data Encryption for SQL Server, see Transparent Data Encryption (TDE) in
the Microsoft documentation.

Topics
• Turning on TDE for RDS for SQL Server (p. 1528)
• Encrypting data on RDS for SQL Server (p. 1529)
• Backing up and restoring TDE certificates on RDS for SQL Server (p. 1530)
• Backing up and restoring TDE certificates for on-premises databases (p. 1533)
• Turning off TDE for RDS for SQL Server (p. 1535)

Turning on TDE for RDS for SQL Server


To turn on Transparent Data Encryption for an RDS for SQL Server DB instance, specify the TDE option in
an RDS option group that's associated with that DB instance:

1. Determine whether your DB instance is already associated with an option group that has the TDE
option. To view the option group that a DB instance is associated with, use the RDS console, the
describe-db-instances AWS CLI command, or the API operation DescribeDBInstances.
2. If the DB instance isn't associated with an option group that has TDE turned on, you have two choices.
You can create an option group and add the TDE option, or you can modify the associated option
group to add it.
Note
In the RDS console, the option is named TRANSPARENT_DATA_ENCRYPTION. In the AWS CLI
and RDS API, it's named TDE.


For information about creating or modifying an option group, see Working with option
groups (p. 331). For information about adding an option to an option group, see Adding an option to
an option group (p. 335).
3. Associate the DB instance with the option group that has the TDE option. For information about
associating a DB instance with an option group, see Modifying an Amazon RDS DB instance (p. 401).

Option group considerations


The TDE option is a persistent option. You can't remove it from an option group unless all DB instances
and backups are no longer associated with the option group. After you add the TDE option to an option
group, the option group can be associated only with DB instances that use TDE. For more information
about persistent options in an option group, see Option groups overview (p. 331).

Because the TDE option is a persistent option, you can have a conflict between the option group and an
associated DB instance. You can have a conflict in the following situations:

• The current option group has the TDE option, and you replace it with an option group that doesn't
have the TDE option.
• You restore from a DB snapshot to a new DB instance that doesn't have an option group that contains
the TDE option. For more information about this scenario, see Option group considerations (p. 624).

SQL Server performance considerations


Using Transparent Data Encryption can affect the performance of a SQL Server DB instance.

Performance for unencrypted databases can also be degraded if the databases are on a DB instance
that has at least one encrypted database. As a result, we recommend that you keep encrypted and
unencrypted databases on separate DB instances.

Encrypting data on RDS for SQL Server


When the TDE option is added to an option group, Amazon RDS generates a certificate that's used in
the encryption process. You can then use the certificate to run SQL statements that encrypt data in a
database on the DB instance.

The following example uses the RDS-created certificate called RDSTDECertificateName to encrypt a
database called myDatabase.

---------- Turning on TDE -------------

-- Find an RDS TDE certificate to use


USE [master]
GO
SELECT name FROM sys.certificates WHERE name LIKE 'RDSTDECertificate%'
GO

USE [myDatabase]
GO
-- Create a database encryption key (DEK) using one of the certificates from the previous step
CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE [RDSTDECertificateName]
GO

-- Turn on encryption for the database


ALTER DATABASE [myDatabase] SET ENCRYPTION ON
GO


-- Verify that the database is encrypted


USE [master]
GO
SELECT name FROM sys.databases WHERE is_encrypted = 1
GO
SELECT db_name(database_id) as DatabaseName, * FROM sys.dm_database_encryption_keys
GO

The time that it takes to encrypt a SQL Server database using TDE depends on several factors. These
include the size of the DB instance, whether the instance uses Provisioned IOPS storage, the amount of
data, and other factors.
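
While encryption is running, you can monitor progress by polling the same dynamic management view that
the verification query above reads. This is a minimal sketch; in sys.dm_database_encryption_keys, an
encryption_state of 2 means encryption is in progress and 3 means the database is encrypted.

-- Monitor TDE encryption progress for databases on the DB instance
SELECT db_name(database_id) AS DatabaseName,
       encryption_state,    -- 2 = encryption in progress, 3 = encrypted
       percent_complete
FROM sys.dm_database_encryption_keys
GO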

Backing up and restoring TDE certificates on RDS for SQL Server


RDS for SQL Server provides stored procedures for backing up, restoring, and dropping TDE certificates.
RDS for SQL Server also provides a function for viewing restored user TDE certificates.

User TDE certificates are used to restore databases to RDS for SQL Server that are on-premises and have
TDE turned on. These certificates have the prefix UserTDECertificate_. After restoring databases,
and before making them available to use, RDS modifies the databases that have TDE turned on to use
RDS-generated TDE certificates. These certificates have the prefix RDSTDECertificate.

User TDE certificates remain on the RDS for SQL Server DB instance, unless you drop them using the
rds_drop_tde_certificate stored procedure. For more information, see Dropping restored TDE
certificates (p. 1533).

You can use a user TDE certificate to restore other databases from the source DB instance. The databases
to restore must use the same TDE certificate and have TDE turned on. You don't have to import (restore)
the same certificate again.

Topics
• Prerequisites (p. 1530)
• Limitations (p. 1531)
• Backing up a TDE certificate (p. 1531)
• Restoring a TDE certificate (p. 1532)
• Viewing restored TDE certificates (p. 1533)
• Dropping restored TDE certificates (p. 1533)

Prerequisites
Before you can back up or restore TDE certificates on RDS for SQL Server, make sure to perform the
following tasks. The first three are described in Setting up for native backup and restore (p. 1421).

1. Create Amazon S3 buckets for storing files to back up and restore.

We recommend that you use separate buckets for database backups and for TDE certificate backups.
2. Create an IAM role for backing up and restoring files.

The IAM role must be both a user and an administrator for the AWS KMS key.

In addition to the permissions required for SQL Server native backup and restore, the IAM role also
requires the following permissions:
• s3:GetBucketACL, s3:GetBucketLocation, and s3:ListBucket on the S3 bucket resource
• s3:ListAllMyBuckets on the * resource


3. Add the SQLSERVER_BACKUP_RESTORE option to an option group on your DB instance.

This is in addition to the TRANSPARENT_DATA_ENCRYPTION (TDE) option.


4. Make sure that you have a symmetric encryption KMS key. You have the following options:
• If you have an existing KMS key in your account, you can use it. No further action is necessary.
• If you don't have an existing symmetric encryption KMS key in your account, create a KMS key by
following the instructions in Creating keys in the AWS Key Management Service Developer Guide.
5. Enable Amazon S3 integration to transfer files between the DB instance and Amazon S3.

For more information on enabling Amazon S3 integration, see Integrating an Amazon RDS for SQL
Server DB instance with Amazon S3 (p. 1464).

Limitations
Using stored procedures to back up and restore TDE certificates has the following limitations:

• Both the SQLSERVER_BACKUP_RESTORE and TRANSPARENT_DATA_ENCRYPTION (TDE) options must


be added to the option group that you associated with your DB instance.
• TDE certificate backup and restore aren't supported on Multi-AZ DB instances.
• Canceling TDE certificate backup and restore tasks isn't supported.
• You can't use a user TDE certificate for TDE encryption of any other database on your RDS for SQL
Server DB instance. You can use it to restore only other databases from the source DB instance that
have TDE turned on and that use the same TDE certificate.
• You can drop only user TDE certificates.
• The maximum number of user TDE certificates supported on RDS is 10. If the number exceeds 10, drop
unused TDE certificates and try again.
• The certificate name can't be empty or null.
• When restoring a certificate, the certificate name can't include the keyword RDSTDECERTIFICATE,
and must start with the UserTDECertificate_ prefix.
• The @certificate_name parameter can include only the following characters: a-z, 0-9, @, $, #, and
underscore (_).
• The file extension for @certificate_file_s3_arn must be .cer (case-insensitive).
• The file extension for @private_key_file_s3_arn must be .pvk (case-insensitive).
• The S3 metadata for the private key file must include the x-amz-meta-rds-tde-pwd tag. For more
information, see Backing up and restoring TDE certificates for on-premises databases (p. 1533).

Backing up a TDE certificate


To back up TDE certificates, use the rds_backup_tde_certificate stored procedure. It has the
following syntax.

EXECUTE msdb.dbo.rds_backup_tde_certificate
@certificate_name='UserTDECertificate_certificate_name | RDSTDECertificatetimestamp',
@certificate_file_s3_arn='arn:aws:s3:::bucket_name/certificate_file_name.cer',
@private_key_file_s3_arn='arn:aws:s3:::bucket_name/key_file_name.pvk',
@kms_password_key_arn='arn:aws:kms:region:account-id:key/key-id',
[@overwrite_s3_files=0|1];

The following parameters are required:

• @certificate_name – The name of the TDE certificate to back up.


• @certificate_file_s3_arn – The destination Amazon Resource Name (ARN) for the certificate
backup file in Amazon S3.
• @private_key_file_s3_arn – The destination S3 ARN of the private key file that secures the TDE
certificate.
• @kms_password_key_arn – The ARN of the symmetric KMS key used to encrypt the private key
password.

The following parameter is optional:

• @overwrite_s3_files – Indicates whether to overwrite the existing certificate and private key files
in S3:
• 0 – Doesn't overwrite the existing files. This value is the default.

Setting @overwrite_s3_files to 0 returns an error if a file already exists.


• 1 – Overwrites an existing file that has the specified name, even if it isn't a backup file.

Example of backing up a TDE certificate

EXECUTE msdb.dbo.rds_backup_tde_certificate
@certificate_name='RDSTDECertificate20211115T185333',
@certificate_file_s3_arn='arn:aws:s3:::TDE_certs/mycertfile.cer',
@private_key_file_s3_arn='arn:aws:s3:::TDE_certs/mykeyfile.pvk',
@kms_password_key_arn='arn:aws:kms:us-west-2:123456789012:key/AKIAIOSFODNN7EXAMPLE',
@overwrite_s3_files=1;

Restoring a TDE certificate


You use the rds_restore_tde_certificate stored procedure to restore (import) user TDE
certificates. It has the following syntax.

EXECUTE msdb.dbo.rds_restore_tde_certificate
@certificate_name='UserTDECertificate_certificate_name',
@certificate_file_s3_arn='arn:aws:s3:::bucket_name/certificate_file_name.cer',
@private_key_file_s3_arn='arn:aws:s3:::bucket_name/key_file_name.pvk',
@kms_password_key_arn='arn:aws:kms:region:account-id:key/key-id';

The following parameters are required:

• @certificate_name – The name of the TDE certificate to restore. The name must start with the
UserTDECertificate_ prefix.
• @certificate_file_s3_arn – The S3 ARN of the backup file used to restore the TDE certificate.
• @private_key_file_s3_arn – The S3 ARN of the private key backup file of the TDE certificate to
be restored.
• @kms_password_key_arn – The ARN of the symmetric KMS key used to encrypt the private key
password.

Example of restoring a TDE certificate

EXECUTE msdb.dbo.rds_restore_tde_certificate
@certificate_name='UserTDECertificate_myTDEcertificate',
@certificate_file_s3_arn='arn:aws:s3:::TDE_certs/mycertfile.cer',
@private_key_file_s3_arn='arn:aws:s3:::TDE_certs/mykeyfile.pvk',
@kms_password_key_arn='arn:aws:kms:us-west-2:123456789012:key/AKIAIOSFODNN7EXAMPLE';


Viewing restored TDE certificates


You use the rds_fn_list_user_tde_certificates function to view restored (imported) user TDE
certificates. It has the following syntax.

SELECT * FROM msdb.dbo.rds_fn_list_user_tde_certificates();

The output resembles the following. Not all columns are shown here.

name                          UserTDECertificate_tde_cert
certificate_id                343
principal_id                  1
pvt_key_encryption_type_desc  ENCRYPTED_BY_MASTER_KEY
issuer_name                   AnyCompany Shipping
cert_serial_number            79 3e 57 a3 69 fd 1d 9e 47 2c 32 67 1d 9c ca af
thumbprint                    0x6BB218B34110388680BFE1BA2D86C695096485B5
subject                       AnyCompany Shipping
start_date                    2022-04-05 19:49:45.0000000
expiry_date                   2023-04-05 19:49:45.0000000
pvt_key_last_backup_date      NULL

Dropping restored TDE certificates


To drop restored (imported) user TDE certificates that you aren't using, use the
rds_drop_tde_certificate stored procedure. It has the following syntax.

EXECUTE msdb.dbo.rds_drop_tde_certificate
@certificate_name='UserTDECertificate_certificate_name';

The following parameter is required:

• @certificate_name – The name of the TDE certificate to drop.

You can drop only restored (imported) TDE certificates. You can't drop RDS-created certificates.

Example of dropping a TDE certificate

EXECUTE msdb.dbo.rds_drop_tde_certificate
@certificate_name='UserTDECertificate_myTDEcertificate';

Backing up and restoring TDE certificates for on-premises databases

You can back up TDE certificates for on-premises databases, then later restore them to RDS for SQL
Server. You can also restore an RDS for SQL Server TDE certificate to an on-premises DB instance.

The following procedure backs up a TDE certificate and private key. The private key is encrypted using a
data key generated from your symmetric encryption KMS key.

To back up an on-premises TDE certificate

1. Generate the data key using the AWS CLI generate-data-key command.

aws kms generate-data-key \
--key-id my_KMS_key_ID \
--key-spec AES_256

The output resembles the following.

{
"CiphertextBlob": "AQIDAHimL2NEoAlOY6Bn7LJfnxi/OZe9kTQo/
XQXduug1rmerwGiL7g5ux4av9GfZLxYTDATAAAAfjB8BgkqhkiG9w0B
BwagbzBtAgEAMGgGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMyCxLMi7GRZgKqD65AgEQgDtjvZLJo2cQ31Vetngzm2ybHDc
2RezQy3sAS6ZHrCjfnfn0c65bFdhsXxjSMnudIY7AKw==",
"Plaintext": "U/fpGtmzGCYBi8A2+0/9qcRQRK2zmG/aOn939ZnKi/0=",
"KeyId": "arn:aws:kms:us-west-2:123456789012:key/1234abcd-00ee-99ff-88dd-aa11bb22cc33"
}

You use the plain text output in the next step as the private key password.
2. Back up your TDE certificate as shown in the following example.

BACKUP CERTIFICATE myOnPremTDEcertificate TO FILE = 'D:\tde-cert-backup.cer'


WITH PRIVATE KEY (
FILE = 'C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\DATA\cert-backup-key.pvk',
ENCRYPTION BY PASSWORD = 'U/fpGtmzGCYBi8A2+0/9qcRQRK2zmG/aOn939ZnKi/0=');

3. Save the certificate backup file to your Amazon S3 certificate bucket.


4. Save the private key backup file to your S3 certificate bucket, with the following tag in the file's
metadata:

• Key – x-amz-meta-rds-tde-pwd
• Value – The CiphertextBlob value from generating the data key, as in the following example.

AQIDAHimL2NEoAlOY6Bn7LJfnxi/OZe9kTQo/
XQXduug1rmerwGiL7g5ux4av9GfZLxYTDATAAAAfjB8BgkqhkiG9w0B
BwagbzBtAgEAMGgGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMyCxLMi7GRZgKqD65AgEQgDtjvZLJo2cQ31Vetngzm2ybH
2RezQy3sAS6ZHrCjfnfn0c65bFdhsXxjSMnudIY7AKw==
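
One way to attach this metadata is to upload the private key backup file with the AWS CLI, as in the
following sketch. The bucket name matches the earlier examples, and the metadata value is a placeholder
for your full CiphertextBlob string; Amazon S3 stores the key rds-tde-pwd as x-amz-meta-rds-tde-pwd.

# Upload the private key backup file and set the required metadata tag
aws s3 cp cert-backup-key.pvk s3://TDE_certs/cert-backup-key.pvk \
--metadata '{"rds-tde-pwd":"your_CiphertextBlob_value"}'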

The following procedure restores an RDS for SQL Server TDE certificate to an on-premises DB instance.
You copy and restore the TDE certificate on your destination DB instance using the certificate backup,
corresponding private key file, and data key. The restored certificate is encrypted by the database master
key of the new server.

To restore a TDE certificate

1. Copy the TDE certificate backup file and private key file from Amazon S3 to the destination instance.
For more information on copying files from Amazon S3, see Transferring files between RDS for SQL
Server and Amazon S3 (p. 1471).
2. Use your KMS key to decrypt the output cipher text to retrieve the plain text of the data key. The
cipher text is located in the S3 metadata of the private key backup file.

aws kms decrypt \
--key-id my_KMS_key_ID \
--ciphertext-blob fileb://exampleCiphertextFile \
--output text \
--query Plaintext | base64 -d

You use the plain text output in the next step as the private key password.
3. Use the following SQL command to restore your TDE certificate.


CREATE CERTIFICATE myOnPremTDEcertificate FROM FILE='D:\tde-cert-backup.cer'


WITH PRIVATE KEY (FILE = N'D:\tde-cert-key.pvk',
DECRYPTION BY PASSWORD = 'plain_text_output');

For more information on KMS decryption, see decrypt in the KMS section of the AWS CLI Command
Reference.

After the TDE certificate is restored on the destination DB instance, you can restore encrypted databases
with that certificate.
Note
You can use the same TDE certificate to encrypt multiple SQL Server databases on the source
DB instance. To migrate multiple databases to a destination instance, copy the TDE certificate
associated with them to the destination instance only once.

Turning off TDE for RDS for SQL Server


To turn off TDE for an RDS for SQL Server DB instance, first make sure that there are no encrypted
objects left on the DB instance. To do so, either decrypt the objects or drop them. If any encrypted
objects exist on the DB instance, you can't turn off TDE for the DB instance. When you use the console to
remove the TDE option from an option group, the console indicates that it's processing. In addition, an
error event is created if the option group is associated with an encrypted DB instance or DB snapshot.

The following example removes the TDE encryption from a database called customerDatabase.

------------- Removing TDE ----------------

USE [customerDatabase]
GO

-- Turn off encryption of the database


ALTER DATABASE [customerDatabase]
SET ENCRYPTION OFF
GO

-- Wait until the encryption state of the database becomes 1. The state is 5 (Decryption in progress) for a while.
SELECT db_name(database_id) as DatabaseName, * FROM sys.dm_database_encryption_keys
GO

-- Drop the DEK used for encryption


DROP DATABASE ENCRYPTION KEY
GO

-- Alter to SIMPLE Recovery mode so that your encrypted log gets truncated
USE [master]
GO
ALTER DATABASE [customerDatabase] SET RECOVERY SIMPLE
GO

When all objects are decrypted, you have two options:

1. You can modify the DB instance to be associated with an option group without the TDE option.
2. You can remove the TDE option from the option group.
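
For the second option, you can use the AWS CLI, as in the following sketch; the option group name is a
placeholder.

aws rds remove-option-from-option-group \
--option-group-name my-tde-option-group \
--options TRANSPARENT_DATA_ENCRYPTION \
--apply-immediately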


SQL Server Audit


In Amazon RDS, you can audit Microsoft SQL Server databases by using the built-in SQL Server auditing
mechanism. You can create audits and audit specifications in the same way that you create them for on-
premises database servers.

RDS uploads the completed audit logs to your S3 bucket, using the IAM role that you provide. If you
enable retention, RDS keeps your audit logs on your DB instance for the configured period of time.

For more information, see SQL Server Audit (database engine) in the Microsoft SQL Server
documentation.

SQL Server Audit with Database Activity Streams


You can use Database Activity Streams for RDS to integrate SQL Server Audit events with database
activity monitoring tools from Imperva, McAfee, and IBM. For more information about auditing with
Database Activity Streams for RDS SQL Server, see Auditing in Microsoft SQL Server (p. 945).

Topics
• Support for SQL Server Audit (p. 1536)
• Adding SQL Server Audit to the DB instance options (p. 1537)
• Using SQL Server Audit (p. 1538)
• Viewing audit logs (p. 1538)
• Using SQL Server Audit with Multi-AZ instances (p. 1539)
• Configuring an S3 bucket (p. 1539)
• Manually creating an IAM role for SQL Server Audit (p. 1540)

Support for SQL Server Audit


In Amazon RDS, starting with SQL Server 2014, all editions of SQL Server support server-level audits,
and the Enterprise edition also supports database-level audits. Starting with SQL Server 2016 (13.x) SP1,
all editions support both server-level and database-level audits. For more information, see SQL Server
Audit (database engine) in the SQL Server documentation.

RDS supports configuring the following option settings for SQL Server Audit.

Option setting: IAM_ROLE_ARN
Valid values: A valid Amazon Resource Name (ARN) in the format arn:aws:iam::account-id:role/role-name.
Description: The ARN of the IAM role that grants access to the S3 bucket where you want to store your
audit logs. For more information, see Amazon Resource Names (ARNs) in the AWS General Reference.

Option setting: S3_BUCKET_ARN
Valid values: A valid ARN in the format arn:aws:s3:::bucket-name or arn:aws:s3:::bucket-name/key-prefix.
Description: The ARN for the S3 bucket where you want to store your audit logs.

Option setting: ENABLE_COMPRESSION
Valid values: true or false
Description: Controls audit log compression. By default, compression is enabled (set to true).

Option setting: RETENTION_TIME
Valid values: 0 to 840
Description: The retention time (in hours) that SQL Server audit records are kept on your RDS instance.
By default, retention is disabled.

RDS supports SQL Server Audit in all AWS Regions except Middle East (Bahrain).

Adding SQL Server Audit to the DB instance options


Enabling SQL Server Audit requires two steps: enabling the option on the DB instance, and enabling
the feature inside SQL Server. The process for adding the SQL Server Audit option to a DB instance is as
follows:

1. Create a new option group, or copy or modify an existing option group.


2. Add and configure all required options.
3. Associate the option group with the DB instance.

After you add the SQL Server Audit option, you don't need to restart your DB instance. As soon as the
option group is active, you can create audits and store audit logs in your S3 bucket.

To add and configure SQL Server Audit on a DB instance's option group

1. Choose one of the following:

• Use an existing option group.


• Create a custom DB option group and use that option group. For more information, see Creating
an option group (p. 332).
2. Add the SQLSERVER_AUDIT option to the option group, and configure the option settings. For more
information about adding options, see Adding an option to an option group (p. 335).

• For IAM role, if you already have an IAM role with the required policies, you can choose that role.
To create a new IAM role, choose Create a New Role. For information about the required policies,
see Manually creating an IAM role for SQL Server Audit (p. 1540).
• For Select S3 destination, if you already have an S3 bucket that you want to use, choose it. To
create an S3 bucket, choose Create a New S3 Bucket.
• For Enable Compression, leave this option chosen to compress audit files. Compression is enabled
by default. To disable compression, clear Enable Compression.
• For Audit log retention, to keep audit records on the DB instance, choose this option. Specify a
retention time in hours. The maximum retention time is 35 days.
3. Apply the option group to a new or existing DB instance. Choose one of the following:

• If you are creating a new DB instance, apply the option group when you launch the instance.
• On an existing DB instance, apply the option group by modifying the instance and then attaching
the new option group. For more information, see Modifying an Amazon RDS DB instance (p. 401).
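
If you use the AWS CLI instead of the console, the following sketch adds the SQLSERVER_AUDIT option
with example settings; the option group name, IAM role ARN, and S3 bucket ARN are placeholders.

aws rds add-option-to-option-group \
--option-group-name my-sqlaudit-option-group \
--options "OptionName=SQLSERVER_AUDIT,OptionSettings=[{Name=IAM_ROLE_ARN,Value=arn:aws:iam::123456789012:role/my-sqlaudit-role},{Name=S3_BUCKET_ARN,Value=arn:aws:s3:::my-sqlaudit-bucket/audit-prefix},{Name=ENABLE_COMPRESSION,Value=true},{Name=RETENTION_TIME,Value=24}]" \
--apply-immediately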

Modifying the SQL Server Audit option


After you enable the SQL Server Audit option, you can modify the settings. For information about how
to modify option settings, see Modifying an option setting (p. 340).


Removing SQL Server Audit from the DB instance options


You can turn off the SQL Server Audit feature by disabling audits and then deleting the option.

To remove auditing

1. Disable all of the audit settings inside SQL Server. To learn where audits are running, query the SQL
Server security catalog views. For more information, see Security catalog views in the Microsoft SQL
Server documentation.
2. Delete the SQL Server Audit option from the DB instance. Choose one of the following:

• Delete the SQL Server Audit option from the option group that the DB instance uses. This change
affects all DB instances that use the same option group. For more information, see Removing an
option from an option group (p. 343).
• Modify the DB instance, and then choose an option group without the SQL Server Audit option.
This change affects only the DB instance that you modify. You can specify the default (empty)
option group, or a different custom option group. For more information, see Modifying an
Amazon RDS DB instance (p. 401).
3. After you delete the SQL Server Audit option from the DB instance, you don't need to restart the
instance. Remove unneeded audit files from your S3 bucket.
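
For step 1, the following T-SQL sketch shows one way to find and turn off server audits; the audit name
is a placeholder, and any audit specifications that reference the audit must be disabled and dropped
first.

-- List server audits and whether they are currently enabled
SELECT name, is_state_enabled FROM sys.server_audits;
GO
-- Turn off and drop an audit (repeat for each audit that you created)
ALTER SERVER AUDIT [MyServerAudit] WITH (STATE = OFF);
GO
DROP SERVER AUDIT [MyServerAudit];
GO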

Using SQL Server Audit


You can control server audits, server audit specifications, and database audit specifications the same way
that you control them for on-premises database servers.

Creating audits
You create server audits in the same way that you create them for on-premises database servers. For
information about how to create server audits, see CREATE SERVER AUDIT in the Microsoft SQL Server
documentation.

To avoid errors, adhere to the following limitations:

• Don't exceed the maximum of 50 server audits per instance.
• Instruct SQL Server to write data to a binary file.
• Don't use RDS_ as a prefix in the server audit name.
• For FILEPATH, specify D:\rdsdbdata\SQLAudit.
• For MAXSIZE, specify a size between 2 MB and 50 MB.
• Don't configure MAX_ROLLOVER_FILES or MAX_FILES.
• Don't configure SQL Server to shut down the DB instance if it fails to write the audit record.
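
The following sketch creates and enables a server audit that stays within these limits; the audit name
and MAXSIZE value are examples only.

-- Write binary audit files under D:\rdsdbdata\SQLAudit with a 2-50 MB size limit,
-- and continue on failure instead of shutting down the DB instance
CREATE SERVER AUDIT [MyServerAudit]
TO FILE (FILEPATH = 'D:\rdsdbdata\SQLAudit', MAXSIZE = 50 MB)
WITH (ON_FAILURE = CONTINUE);
GO
ALTER SERVER AUDIT [MyServerAudit] WITH (STATE = ON);
GO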

Creating audit specifications


You create server audit specifications and database audit specifications the same way that you create
them for on-premises database servers. For information about creating audit specifications, see CREATE
SERVER AUDIT SPECIFICATION and CREATE DATABASE AUDIT SPECIFICATION in the Microsoft SQL
Server documentation.

To avoid errors, don't use RDS_ as a prefix in the name of the database audit specification or server audit
specification.

Viewing audit logs


Your audit logs are stored in D:\rdsdbdata\SQLAudit.


After SQL Server finishes writing to an audit log file—when the file reaches its size limit—Amazon RDS
uploads the file to your S3 bucket. If retention is enabled, Amazon RDS moves the file into the retention
folder: D:\rdsdbdata\SQLAudit\transmitted.

For information about configuring retention, see Adding SQL Server Audit to the DB instance
options (p. 1537).

Audit records are kept on the DB instance until the audit log file is uploaded. You can view the audit
records by running the following command.

SELECT *
FROM msdb.dbo.rds_fn_get_audit_file
('D:\rdsdbdata\SQLAudit\*.sqlaudit'
, default
, default )

You can use the same command to view audit records in your retention folder by changing the filter to
D:\rdsdbdata\SQLAudit\transmitted\*.sqlaudit.

SELECT *
FROM msdb.dbo.rds_fn_get_audit_file
('D:\rdsdbdata\SQLAudit\transmitted\*.sqlaudit'
, default
, default )

Using SQL Server Audit with Multi-AZ instances


For Multi-AZ instances, the process for sending audit log files to Amazon S3 is similar to the process for
Single-AZ instances. However, there are some important differences:

• Database audit specification objects are replicated to all nodes.


• Server audits and server audit specifications aren't replicated to secondary nodes. Instead, you have to
create or modify them manually.

To capture server audits or a server audit specification from both nodes:

1. Create a server audit or a server audit specification on the primary node.


2. Fail over to the secondary node and create a server audit or a server audit specification with the same
name and GUID on the secondary node. Use the AUDIT_GUID parameter to specify the GUID.
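
For step 2, the following sketch shows the idea; the audit name and GUID are placeholders. Query
sys.server_audits on the primary node first to get the GUID of the existing audit, then reuse it when
you re-create the audit on the secondary node.

-- On the primary node: note the audit_guid of the existing audit
SELECT name, audit_guid FROM sys.server_audits;
GO
-- On the secondary node (after failover): re-create the audit with the same GUID
CREATE SERVER AUDIT [MyServerAudit]
TO FILE (FILEPATH = 'D:\rdsdbdata\SQLAudit', MAXSIZE = 50 MB)
WITH (AUDIT_GUID = '12345678-1234-1234-1234-123456789012', ON_FAILURE = CONTINUE);
GO
ALTER SERVER AUDIT [MyServerAudit] WITH (STATE = ON);
GO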

Configuring an S3 bucket
The audit log files are automatically uploaded from the DB instance to your S3 bucket. The following
restrictions apply to the S3 bucket that you use as a target for audit files:

• It must be in the same AWS Region as the DB instance.


• It must not be open to the public.
• It can't use S3 Object Lock.
• The bucket owner must also be the IAM role owner.

The target key that is used to store the data follows this naming schema:
bucket-name/key-prefix/instance-name/audit-name/node_file-name.ext


Note
You set both the bucket name and the key prefix values with the (S3_BUCKET_ARN) option
setting.

The schema is composed of the following elements:

• bucket-name – The name of your S3 bucket.


• key-prefix – The custom key prefix you want to use for audit logs.
• instance-name – The name of your Amazon RDS instance.
• audit-name – The name of the audit.
• node – The identifier of the node that is the source of the audit logs (node1 or node2). There is one
node for a Single-AZ instance and two replication nodes for a Multi-AZ instance. These are not primary
and secondary nodes, because the roles of primary and secondary change over time. Instead, the node
identifier is a simple label.
• node1 – The first replication node (Single-AZ has one node only).
• node2 – The second replication node (Multi-AZ has two nodes).
• file-name – The target file name. The file name is taken as-is from SQL Server.
• ext – The extension of the file (zip or sqlaudit):
• zip – If compression is enabled (default).
• sqlaudit – If compression is disabled.
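
As a purely illustrative example, a compressed audit log for an audit named MyServerAudit on a
Single-AZ instance named myrdsinstance might be stored under a key like the following; the file-name
part is taken as-is from SQL Server and will differ in practice.

my-sqlaudit-bucket/audit-prefix/myrdsinstance/MyServerAudit/node1_file-name.zip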

Manually creating an IAM role for SQL Server Audit


Typically, when you create a new option, the AWS Management Console creates the IAM role and the
IAM trust policy for you. However, you can manually create a new IAM role to use with SQL Server
Audits, so that you can customize it with any additional requirements you might have. To do this, you
create an IAM role and delegate permissions so that the Amazon RDS service can use your Amazon S3
bucket. When you create this IAM role, you attach trust and permissions policies. The trust policy allows
Amazon RDS to assume this role. The permission policy defines the actions that this role can do. For
more information, see Creating a role to delegate permissions to an AWS service in the AWS Identity and
Access Management User Guide.

You can use the examples in this section to create the trust relationships and permissions policies you
need.

The following example shows a trust relationship for SQL Server Audit. It uses the service principal
rds.amazonaws.com to allow RDS to write to the S3 bucket. A service principal is an identifier that is
used to grant permissions to a service. Anytime you allow access to rds.amazonaws.com in this way,
you are allowing RDS to perform an action on your behalf. For more information about service principals,
see AWS JSON policy elements: Principal.

Example trust relationship for SQL Server Audit

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "rds.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}


We recommend using the aws:SourceArn and aws:SourceAccount global condition context keys in
resource-based trust relationships to limit the service's permissions to a specific resource. This is the most
effective way to protect against the confused deputy problem.

You might use both global condition context keys and have the aws:SourceArn value contain the
account ID. In this case, the aws:SourceAccount value and the account in the aws:SourceArn value
must use the same account ID when used in the same statement.

• Use aws:SourceArn if you want cross-service access for a single resource.


• Use aws:SourceAccount if you want to allow any resource in that account to be associated with the
cross-service use.

In the trust relationship, make sure to use the aws:SourceArn global condition context key with the full
Amazon Resource Name (ARN) of the resources accessing the role. For SQL Server Audit, make sure to
include both the DB option group and the DB instances, as shown in the following example.

Example trust relationship with global condition context key for SQL Server Audit

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "rds.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"aws:SourceArn": [
"arn:aws:rds:Region:my_account_ID:db:db_instance_identifier",
"arn:aws:rds:Region:my_account_ID:og:option_group_name"
]
}
}
}
]
}

In the following example of a permissions policy for SQL Server Audit, we specify an ARN for the Amazon
S3 bucket. You can use ARNs to identify a specific account, user, or role that you want to grant access to. For
more information about using ARNs, see Amazon resource names (ARNs).

Example permissions policy for SQL Server Audit

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketACL",
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::bucket_name"
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:ListMultipartUploadParts",
"s3:AbortMultipartUpload"
],
"Resource": "arn:aws:s3:::bucket_name/key_prefix/*"
}
]
}

Note
The s3:ListAllMyBuckets action is required for verifying that the same AWS account owns
both the S3 bucket and the SQL Server DB instance. The action lists the names of the buckets in
the account.
S3 bucket namespaces are global. If you accidentally delete your bucket, another user can create
a bucket with the same name in a different account. Then the SQL Server Audit data is written
to the new bucket.


Support for SQL Server Analysis Services in Amazon RDS for SQL Server

Microsoft SQL Server Analysis Services (SSAS) is part of the Microsoft Business Intelligence (MSBI)
suite. SSAS is an online analytical processing (OLAP) and data mining tool that is installed within SQL
Server. You use SSAS to analyze data to help make business decisions. SSAS differs from the SQL Server
relational database because SSAS is optimized for queries and calculations common in a business
intelligence environment.

You can turn on SSAS for existing or new DB instances. It's installed on the same DB instance as your
database engine. For more information on SSAS, see the Microsoft Analysis services documentation.

Amazon RDS supports SSAS for SQL Server Standard and Enterprise Editions on the following versions:

• Tabular mode:
• SQL Server 2019, version 15.00.4043.16.v1 and higher
• SQL Server 2017, version 14.00.3223.3.v1 and higher
• SQL Server 2016, version 13.00.5426.0.v1 and higher
• Multidimensional mode:
• SQL Server 2017, version 14.00.3381.3.v1 and higher
• SQL Server 2016, version 13.00.5882.1.v1 and higher

Contents
• Limitations (p. 1543)
• Turning on SSAS (p. 1544)
• Creating an option group for SSAS (p. 1544)
• Adding the SSAS option to the option group (p. 1545)
• Associating the option group with your DB instance (p. 1547)
• Allowing inbound access to your VPC security group (p. 1548)
• Enabling Amazon S3 integration (p. 1548)
• Deploying SSAS projects on Amazon RDS (p. 1549)
• Monitoring the status of a deployment task (p. 1549)
• Using SSAS on Amazon RDS (p. 1551)
• Setting up a Windows-authenticated user for SSAS (p. 1551)
• Adding a domain user as a database administrator (p. 1552)
• Creating an SSAS proxy (p. 1553)
• Scheduling SSAS database processing using SQL Server Agent (p. 1554)
• Revoking SSAS access from the proxy (p. 1555)
• Backing up an SSAS database (p. 1556)
• Restoring an SSAS database (p. 1556)
• Restoring a DB instance to a specified time (p. 1557)
• Changing the SSAS mode (p. 1557)
• Turning off SSAS (p. 1558)
• Troubleshooting SSAS issues (p. 1559)

Limitations
The following limitations apply to using SSAS on RDS for SQL Server:


• RDS for SQL Server supports running SSAS in Tabular or Multidimensional mode. For more
information, see Comparing tabular and multidimensional solutions in the Microsoft documentation.
• You can only use one SSAS mode at a time. Before changing modes, make sure to delete all of the
SSAS databases.

For more information, see Changing the SSAS mode (p. 1557).
• Multidimensional mode isn't supported on SQL Server 2019.
• Multi-AZ instances aren't supported.
• Instances must use self-managed Active Directory or AWS Directory Service for Microsoft Active
Directory for SSAS authentication. For more information, see Working with Active Directory with RDS
for SQL Server (p. 1387).
• Users aren't given SSAS server administrator access, but they can be granted database-level
administrator access.
• The only supported port for accessing SSAS is 2383.
• You can't deploy projects directly. We provide an RDS stored procedure to do this. For more
information, see Deploying SSAS projects on Amazon RDS (p. 1549).
• Processing during deployment isn't supported.
• Using .xmla files for deployment isn't supported.
• SSAS project input files and database backup output files can only be in the D:\S3 folder on the DB
instance.

Turning on SSAS
Use the following process to turn on SSAS for your DB instance:

1. Create a new option group, or choose an existing option group.


2. Add the SSAS option to the option group.
3. Associate the option group with the DB instance.
4. Allow inbound access to the virtual private cloud (VPC) security group for the SSAS listener port.
5. Turn on Amazon S3 integration.

Creating an option group for SSAS


Use the AWS Management Console or the AWS CLI to create an option group that corresponds to the
SQL Server engine and version of the DB instance that you plan to use.
Note
You can also use an existing option group if it's for the correct SQL Server engine and version.

Console

The following console procedure creates an option group for SQL Server Standard Edition 2017.

To create the option group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose Create group.


4. In the Create option group pane, do the following:

a. For Name, enter a name for the option group that is unique within your AWS account, such as
ssas-se-2017. The name can contain only letters, digits, and hyphens.
b. For Description, enter a brief description of the option group, such as SSAS option group
for SQL Server SE 2017. The description is used for display purposes.
c. For Engine, choose sqlserver-se.
d. For Major engine version, choose 14.00.
5. Choose Create.

CLI

The following CLI example creates an option group for SQL Server Standard Edition 2017.

To create the option group

• Use one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds create-option-group \


--option-group-name ssas-se-2017 \
--engine-name sqlserver-se \
--major-engine-version 14.00 \
--option-group-description "SSAS option group for SQL Server SE 2017"

For Windows:

aws rds create-option-group ^


--option-group-name ssas-se-2017 ^
--engine-name sqlserver-se ^
--major-engine-version 14.00 ^
--option-group-description "SSAS option group for SQL Server SE 2017"

Adding the SSAS option to the option group


Next, use the AWS Management Console or the AWS CLI to add the SSAS option to the option group.

Console

To add the SSAS option

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group that you just created.
4. Choose Add option.
5. Under Option details, choose SSAS for Option name.
6. Under Option settings, do the following:

a. For Max memory, enter a value in the range 10–80.


Max memory specifies the upper threshold above which SSAS begins releasing memory more
aggressively to make room for requests that are running, and also new high-priority requests.
The number is a percentage of the total memory of the DB instance. The allowed values are 10–
80, and the default is 45.
b. For Mode, choose the SSAS server mode, Tabular or Multidimensional.

If you don't see the Mode option setting, it means that Multidimensional mode isn't supported
in your AWS Region. For more information, see Limitations (p. 1543).

Tabular is the default.


c. For Security groups, choose the VPC security group to associate with the option.

Note
The port for accessing SSAS, 2383, is prepopulated.
7. Under Scheduling, choose whether to add the option immediately or at the next maintenance
window.
8. Choose Add option.

CLI

To add the SSAS option

1. Create a JSON file, for example ssas-option.json, with the following parameters:

• OptionGroupName – The name of the option group that you created or chose previously (ssas-se-2017
in the following example).
• Port – The port that you use to access SSAS. The only supported port is 2383.
• VpcSecurityGroupMemberships – Memberships for VPC security groups for your RDS DB
instance.
• MAX_MEMORY – The upper threshold above which SSAS should begin releasing memory more
aggressively to make room for requests that are running, and also new high-priority requests. The
number is a percentage of the total memory of the DB instance. The allowed values are 10–80,
and the default is 45.
• MODE – The SSAS server mode, either Tabular or Multidimensional. Tabular is the default.

If you receive an error that the MODE option setting isn't valid, it means that Multidimensional
mode isn't supported in your AWS Region. For more information, see Limitations (p. 1543).

The following is an example of a JSON file with SSAS option settings.

{
"OptionGroupName": "ssas-se-2017",
"OptionsToInclude": [
{
"OptionName": "SSAS",
"Port": 2383,
"VpcSecurityGroupMemberships": ["sg-0abcdef123"],
"OptionSettings": [{"Name":"MAX_MEMORY","Value":"60"},
{"Name":"MODE","Value":"Multidimensional"}]
}],
"ApplyImmediately": true
}

2. Add the SSAS option to the option group.


Example

For Linux, macOS, or Unix:

aws rds add-option-to-option-group \


--cli-input-json file://ssas-option.json \
--apply-immediately

For Windows:

aws rds add-option-to-option-group ^


--cli-input-json file://ssas-option.json ^
--apply-immediately

Associating the option group with your DB instance


You can use the console or the CLI to associate the option group with your DB instance.

Console

Associate your option group with a new or existing DB instance:

• For a new DB instance, associate the option group with the DB instance when you launch the instance.
For more information, see Creating an Amazon RDS DB instance (p. 300).
• For an existing DB instance, modify the instance and associate the new option group with it. For more
information, see Modifying an Amazon RDS DB instance (p. 401).
Note
If you use an existing instance, it must already have an Active Directory domain and AWS
Identity and Access Management (IAM) role associated with it. If you create a new instance,
specify an existing Active Directory domain and IAM role. For more information, see Working
with Active Directory with RDS for SQL Server (p. 1387).

CLI

You can associate your option group with a new or existing DB instance.
Note
If you use an existing instance, it must already have an Active Directory domain and IAM role
associated with it. If you create a new instance, specify an existing Active Directory domain
and IAM role. For more information, see Working with Active Directory with RDS for SQL
Server (p. 1387).

To create a DB instance that uses the option group

• Specify the same DB engine type and major version that you used when creating the option group.

Example

For Linux, macOS, or Unix:

aws rds create-db-instance \


--db-instance-identifier myssasinstance \
--db-instance-class db.m5.2xlarge \
--engine sqlserver-se \
--engine-version 14.00.3223.3.v1 \
--allocated-storage 100 \
--manage-master-user-password \
--master-username admin \
--storage-type gp2 \
--license-model li \
--domain-iam-role-name my-directory-iam-role \
--domain my-domain-id \
--option-group-name ssas-se-2017

For Windows:

aws rds create-db-instance ^


--db-instance-identifier myssasinstance ^
--db-instance-class db.m5.2xlarge ^
--engine sqlserver-se ^
--engine-version 14.00.3223.3.v1 ^
--allocated-storage 100 ^
--manage-master-user-password ^
--master-username admin ^
--storage-type gp2 ^
--license-model li ^
--domain-iam-role-name my-directory-iam-role ^
--domain my-domain-id ^
--option-group-name ssas-se-2017

To modify a DB instance to associate the option group

• Use one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds modify-db-instance \


--db-instance-identifier myssasinstance \
--option-group-name ssas-se-2017 \
--apply-immediately

For Windows:

aws rds modify-db-instance ^


--db-instance-identifier myssasinstance ^
--option-group-name ssas-se-2017 ^
--apply-immediately

Allowing inbound access to your VPC security group


Create an inbound rule for the specified SSAS listener port in the VPC security group associated with
your DB instance. For more information about setting up security groups, see Provide access to your DB
instance in your VPC by creating a security group (p. 177).
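
The following AWS CLI sketch adds such a rule; the security group ID and the client CIDR range are
placeholders.

# Allow inbound TCP traffic to the SSAS listener port 2383
aws ec2 authorize-security-group-ingress \
--group-id sg-0abcdef123 \
--protocol tcp \
--port 2383 \
--cidr 10.0.0.0/16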

Enabling Amazon S3 integration


To download model configuration files to your host for deployment, use Amazon S3 integration. For
more information, see Integrating an Amazon RDS for SQL Server DB instance with Amazon S3 (p. 1464).


Deploying SSAS projects on Amazon RDS


On RDS, you can't deploy SSAS projects directly by using SQL Server Management Studio (SSMS). To
deploy projects, use an RDS stored procedure.
Note
Using .xmla files for deployment isn't supported.

Before you deploy projects, make sure of the following:

• Amazon S3 integration is turned on. For more information, see Integrating an Amazon RDS for SQL
Server DB instance with Amazon S3 (p. 1464).
• The Processing Option configuration setting is set to Do Not Process. This setting means that
no processing happens after deployment.
• You have both the myssasproject.asdatabase and myssasproject.deploymentoptions files.
They're automatically generated when you build the SSAS project.

To deploy an SSAS project on RDS

1. Download the .asdatabase (SSAS model) file from your S3 bucket to your DB instance, as shown
in the following example. For more information on the download parameters, see Downloading files
from an Amazon S3 bucket to a SQL Server DB instance (p. 1471).

exec msdb.dbo.rds_download_from_s3
@s3_arn_of_file='arn:aws:s3:::bucket_name/myssasproject.asdatabase',
[@rds_file_path='D:\S3\myssasproject.asdatabase'],
[@overwrite_file=1];

2. Download the .deploymentoptions file from your S3 bucket to your DB instance.

exec msdb.dbo.rds_download_from_s3
@s3_arn_of_file='arn:aws:s3:::bucket_name/myssasproject.deploymentoptions',
[@rds_file_path='D:\S3\myssasproject.deploymentoptions'],
[@overwrite_file=1];

3. Deploy the project.

exec msdb.dbo.rds_msbi_task
@task_type='SSAS_DEPLOY_PROJECT',
@file_path='D:\S3\myssasproject.asdatabase';

Monitoring the status of a deployment task


To track the status of your deployment (or download) task, call the rds_fn_task_status function. It
takes two parameters. The first parameter should always be NULL because it doesn't apply to SSAS. The
second parameter accepts a task ID.

To see a list of all tasks, set the first parameter to NULL and the second parameter to 0, as shown in the
following example.

SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,0);

To get a specific task, set the first parameter to NULL and the second parameter to the task ID, as shown
in the following example.


SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,42);

The rds_fn_task_status function returns the following information.

task_id – The ID of the task.

task_type – For SSAS, tasks can have the following task types:
• SSAS_DEPLOY_PROJECT
• SSAS_ADD_DB_ADMIN_MEMBER
• SSAS_BACKUP_DB
• SSAS_RESTORE_DB

database_name – Not applicable to SSAS tasks.

% complete – The progress of the task as a percentage.

duration (mins) – The amount of time spent on the task, in minutes.

lifecycle – The status of the task. Possible statuses are the following:
• CREATED – After you call one of the SSAS stored procedures, a task is created and the status is set
to CREATED.
• IN_PROGRESS – After a task starts, the status is set to IN_PROGRESS. It can take up to five minutes
for the status to change from CREATED to IN_PROGRESS.
• SUCCESS – After a task completes, the status is set to SUCCESS.
• ERROR – If a task fails, the status is set to ERROR. For more information about the error, see the
task_info column.
• CANCEL_REQUESTED – After you call rds_cancel_task, the status of the task is set to
CANCEL_REQUESTED.
• CANCELLED – After a task is successfully canceled, the status of the task is set to CANCELLED.

task_info – Additional information about the task. If an error occurs during processing, this column
contains information about the error. For more information, see Troubleshooting SSAS issues (p. 1559).

last_updated – The date and time that the task status was last updated.

created_at – The date and time that the task was created.

S3_object_arn – Not applicable to SSAS tasks.

overwrite_S3_backup_file – Not applicable to SSAS tasks.

KMS_master_key_arn – Not applicable to SSAS tasks.

filepath – Not applicable to SSAS tasks.

overwrite_file – Not applicable to SSAS tasks.

task_metadata – Metadata associated with the SSAS task.
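
If you need to cancel a running SSAS task, the rds_cancel_task procedure referenced above takes the
task ID, as in the following minimal sketch.

exec msdb.dbo.rds_cancel_task @task_id = 42;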

Using SSAS on Amazon RDS


After deploying the SSAS project, you can process the OLAP database directly in SSMS.

To use SSAS on RDS

1. In SSMS, connect to SSAS using the user name and password for the Active Directory domain.
2. Expand Databases. The newly deployed SSAS database appears.
3. Locate the connection string, and update the user name and password to give access to the source
SQL database. Doing this is required for processing SSAS objects.

a. For Tabular mode, do the following:

1. Expand the Connections tab.


2. Open the context (right-click) menu for the connection object, and then choose Properties.
3. Update the user name and password in the connection string.
b. For Multidimensional mode, do the following:

1. Expand the Data Sources tab.


2. Open the context (right-click) menu for the data source object, and then choose Properties.
3. Update the user name and password in the connection string.
4. Open the context (right-click) menu for the SSAS database that you created and choose Process
Database.

Depending on the size of the input data, the processing operation might take several minutes to
complete.

Topics
• Setting up a Windows-authenticated user for SSAS (p. 1551)
• Adding a domain user as a database administrator (p. 1552)
• Creating an SSAS proxy (p. 1553)
• Scheduling SSAS database processing using SQL Server Agent (p. 1554)
• Revoking SSAS access from the proxy (p. 1555)

Setting up a Windows-authenticated user for SSAS


The main administrator user (sometimes called the master user) can use the following code example to
set up a Windows-authenticated login and grant the required procedure permissions. Doing this grants
permissions to the domain user to run SSAS customer tasks, use S3 file transfer procedures, create
credentials, and work with the SQL Server Agent proxy. For more information, see Credentials (database
engine) and Create a SQL Server Agent proxy in the Microsoft documentation.

You can grant some or all of the following permissions as needed to Windows-authenticated users.

Example

-- Create a server-level domain user login, if it doesn't already exist


USE [master]
GO
CREATE LOGIN [mydomain\user_name] FROM WINDOWS
GO

-- Create domain user, if it doesn't already exist


USE [msdb]
GO
CREATE USER [mydomain\user_name] FOR LOGIN [mydomain\user_name]
GO

-- Grant necessary privileges to the domain user


USE [master]
GO
GRANT ALTER ANY CREDENTIAL TO [mydomain\user_name]
GO

USE [msdb]
GO
GRANT EXEC ON msdb.dbo.rds_msbi_task TO [mydomain\user_name] with grant option
GRANT SELECT ON msdb.dbo.rds_fn_task_status TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_task_status TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_cancel_task TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_download_from_s3 TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_upload_to_s3 TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_delete_from_filesystem TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_gather_file_details TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_add_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_update_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_grant_login_to_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_revoke_login_from_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_delete_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_enum_login_for_proxy to [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_enum_proxy_for_subsystem TO [mydomain\user_name] with grant
option
GRANT EXEC ON msdb.dbo.rds_sqlagent_proxy TO [mydomain\user_name] with grant option
ALTER ROLE [SQLAgentUserRole] ADD MEMBER [mydomain\user_name]
GO

Adding a domain user as a database administrator


You can add a domain user as an SSAS database administrator in the following ways:

• A database administrator can use SSMS to create a role with admin privileges, then add users to that
role.
• You can use the following stored procedure.

exec msdb.dbo.rds_msbi_task
@task_type='SSAS_ADD_DB_ADMIN_MEMBER',
@database_name='myssasdb',
@ssas_role_name='exampleRole',
@ssas_role_member='domain_name\domain_user_name';

The following parameters are required:


• @task_type – The type of the MSBI task, in this case SSAS_ADD_DB_ADMIN_MEMBER.


• @database_name – The name of the SSAS database to which you're granting administrator
privileges.
• @ssas_role_name – The SSAS database administrator role name. If the role doesn't already exist,
it's created.
• @ssas_role_member – The SSAS database user that you're adding to the administrator role.

Creating an SSAS proxy


To be able to schedule SSAS database processing using SQL Server Agent, create an SSAS credential and
an SSAS proxy. Run these procedures as a Windows-authenticated user.

To create the SSAS credential

• Create the credential for the proxy. To do this, you can use SSMS or the following SQL statement.

USE [master]
GO
CREATE CREDENTIAL [SSAS_Credential] WITH IDENTITY = N'mydomain\user_name', SECRET =
N'mysecret'
GO

Note
IDENTITY must be a domain-authenticated login. Replace mysecret with the password for
the domain-authenticated login.

To create the SSAS proxy

1. Use the following SQL statement to create the proxy.

USE [msdb]
GO
EXEC msdb.dbo.sp_add_proxy
@proxy_name=N'SSAS_Proxy',@credential_name=N'SSAS_Credential',@description=N''
GO

2. Use the following SQL statement to grant access to the proxy to other users.

USE [msdb]
GO
EXEC msdb.dbo.sp_grant_login_to_proxy
@proxy_name=N'SSAS_Proxy',@login_name=N'mydomain\user_name'
GO

3. Use the following SQL statement to give the SSAS subsystem access to the proxy.

USE [msdb]
GO
EXEC msdb.dbo.rds_sqlagent_proxy
@task_type='GRANT_SUBSYSTEM_ACCESS',@proxy_name='SSAS_Proxy',@proxy_subsystem='SSAS'
GO

To view the proxy and grants on the proxy

1. Use the following SQL statement to view the grantees of the proxy.


USE [msdb]
GO
EXEC sp_help_proxy
GO

2. Use the following SQL statement to view the subsystem grants.

USE [msdb]
GO
EXEC msdb.dbo.sp_enum_proxy_for_subsystem
GO

Scheduling SSAS database processing using SQL Server Agent


After you create the credential and proxy and grant SSAS access to the proxy, you can create a SQL
Server Agent job to schedule SSAS database processing.

To schedule SSAS database processing

• Use SSMS or T-SQL for creating the SQL Server Agent job. The following example uses T-SQL. You
can further configure its job schedule through SSMS or T-SQL.

• The @command parameter outlines the XML for Analysis (XMLA) command to be run by the SQL
Server Agent job. This example configures SSAS Multidimensional database processing.
• The @server parameter outlines the target SSAS server name of the SQL Server Agent job.

To call the SSAS service within the same RDS DB instance where the SQL Server Agent job resides,
use localhost:2383.

To call the SSAS service from outside the RDS DB instance, use the RDS endpoint. You can also
use the Kerberos Active Directory (AD) endpoint (your-DB-instance-name.your-AD-domain-
name) if the RDS DB instances are joined by the same domain. For external DB instances, make
sure to properly configure the VPC security group associated with the RDS DB instance for a secure
connection.

You can further edit the query to support various XMLA operations. Make edits either by directly
modifying the T-SQL query or by using the SSMS UI following SQL Server Agent job creation.

USE [msdb]
GO
DECLARE @jobId BINARY(16)
EXEC msdb.dbo.sp_add_job @job_name=N'SSAS_Job',
@enabled=1,
@notify_level_eventlog=0,
@notify_level_email=0,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@category_name=N'[Uncategorized (Local)]',
@job_id = @jobId OUTPUT
GO
EXEC msdb.dbo.sp_add_jobserver
@job_name=N'SSAS_Job',
@server_name = N'(local)'
GO
EXEC msdb.dbo.sp_add_jobstep @job_name=N'SSAS_Job', @step_name=N'Process_SSAS_Object',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'ANALYSISCOMMAND',
@command=N'<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2"
        xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2"
        xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100"
        xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200"
        xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200"
        xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300"
        xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300"
        xmlns:ddl400="http://schemas.microsoft.com/analysisservices/2012/engine/400"
        xmlns:ddl400_400="http://schemas.microsoft.com/analysisservices/2012/engine/400/400"
        xmlns:ddl500="http://schemas.microsoft.com/analysisservices/2013/engine/500"
        xmlns:ddl500_500="http://schemas.microsoft.com/analysisservices/2013/engine/500/500">
      <Object>
        <DatabaseID>Your_SSAS_Database_ID</DatabaseID>
      </Object>
      <Type>ProcessFull</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
  </Parallel>
</Batch>',
@server=N'localhost:2383',
@database_name=N'master',
@flags=0,
@proxy_name=N'SSAS_Proxy'
GO

Revoking SSAS access from the proxy


You can revoke access to the SSAS subsystem and delete the SSAS proxy using the following stored
procedures.

To revoke access and delete the proxy

1. Revoke subsystem access.

USE [msdb]
GO
EXEC msdb.dbo.rds_sqlagent_proxy
@task_type='REVOKE_SUBSYSTEM_ACCESS',@proxy_name='SSAS_Proxy',@proxy_subsystem='SSAS'
GO

2. Revoke the grants on the proxy.

USE [msdb]
GO
EXEC msdb.dbo.sp_revoke_login_from_proxy
@proxy_name=N'SSAS_Proxy',@name=N'mydomain\user_name'
GO

3. Delete the proxy.

USE [msdb]
GO
EXEC dbo.sp_delete_proxy @proxy_name = N'SSAS_Proxy'
GO

Backing up an SSAS database


You can create SSAS database backup files only in the D:\S3 folder on the DB instance. To move the
backup files to your S3 bucket, use Amazon S3.

You can back up an SSAS database as follows:

• A domain user with the admin role for a particular database can use SSMS to back up the database to
the D:\S3 folder.

For more information, see Adding a domain user as a database administrator (p. 1552).
• You can use the following stored procedure. This stored procedure doesn't support encryption.

exec msdb.dbo.rds_msbi_task
@task_type='SSAS_BACKUP_DB',
@database_name='myssasdb',
@file_path='D:\S3\ssas_db_backup.abf',
[@ssas_apply_compression=1],
[@ssas_overwrite_file=1];

The following parameters are required:


• @task_type – The type of the MSBI task, in this case SSAS_BACKUP_DB.
• @database_name – The name of the SSAS database that you're backing up.
• @file_path – The path for the SSAS backup file. The .abf extension is required.

The following parameters are optional:


• @ssas_apply_compression – Whether to apply SSAS backup compression. Valid values are 1
(Yes) and 0 (No).
• @ssas_overwrite_file – Whether to overwrite the SSAS backup file. Valid values are 1 (Yes) and
0 (No).

Restoring an SSAS database


Use the following stored procedure to restore an SSAS database from a backup.

You can't restore a database if there is an existing SSAS database with the same name. The stored
procedure for restoring doesn't support encrypted backup files.

exec msdb.dbo.rds_msbi_task
@task_type='SSAS_RESTORE_DB',
@database_name='mynewssasdb',
@file_path='D:\S3\ssas_db_backup.abf';

The following parameters are required:


• @task_type – The type of the MSBI task, in this case SSAS_RESTORE_DB.


• @database_name – The name of the new SSAS database that you're restoring to.
• @file_path – The path to the SSAS backup file.

Restoring a DB instance to a specified time


Point-in-time recovery (PITR) doesn't apply to SSAS databases. If you do PITR, only the SSAS data in the
last snapshot before the requested time is available on the restored instance.

To have up-to-date SSAS databases on a restored DB instance

1. Back up your SSAS databases to the D:\S3 folder on the source instance.
2. Transfer the backup files to the S3 bucket.
3. Transfer the backup files from the S3 bucket to the D:\S3 folder on the restored instance.
4. Run the stored procedure to restore the SSAS databases onto the restored instance.

You can also reprocess the SSAS project to restore the databases.

Changing the SSAS mode


You can change the mode in which SSAS runs, either Tabular or Multidimensional. To change the mode,
use the AWS Management Console or the AWS CLI to modify the options settings in the SSAS option.
Important
You can only use one SSAS mode at a time. Make sure to delete all of the SSAS databases before
changing the mode, or you receive an error.

Console

The following Amazon RDS console procedure changes the SSAS mode to Tabular and sets the
MAX_MEMORY parameter to 70 percent.

To modify the SSAS option

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group with the SSAS option that you want to modify (ssas-se-2017 in the
previous examples).
4. Choose Modify option.
5. Change the option settings:

a. For Max memory, enter 70.


b. For Mode, choose Tabular.
6. Choose Modify option.

AWS CLI

The following AWS CLI example changes the SSAS mode to Tabular and sets the MAX_MEMORY parameter
to 70 percent.

For the CLI command to work, make sure to include all of the required parameters, even if you're not
modifying them.


To modify the SSAS option

• Use one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds add-option-to-option-group \
--option-group-name ssas-se-2017 \
--options "OptionName=SSAS,VpcSecurityGroupMemberships=sg-12345e67,OptionSettings=[{Name=MAX_MEMORY,Value=70},{Name=MODE,Value=Tabular}]" \
--apply-immediately

For Windows:

aws rds add-option-to-option-group ^
--option-group-name ssas-se-2017 ^
--options "OptionName=SSAS,VpcSecurityGroupMemberships=sg-12345e67,OptionSettings=[{Name=MAX_MEMORY,Value=70},{Name=MODE,Value=Tabular}]" ^
--apply-immediately

Turning off SSAS
To turn off SSAS, remove the SSAS option from its option group.
Important
Before you remove the SSAS option, delete your SSAS databases.
We highly recommend that you back up your SSAS databases before deleting them and
removing the SSAS option.

Console

To remove the SSAS option from its option group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group with the SSAS option that you want to remove (ssas-se-2017 in the
previous examples).
4. Choose Delete option.
5. Under Deletion options, choose SSAS for Options to delete.
6. Under Apply immediately, choose Yes to delete the option immediately, or No to delete it at the
next maintenance window.
7. Choose Delete.

AWS CLI

To remove the SSAS option from its option group

• Use one of the following commands.


Example

For Linux, macOS, or Unix:

aws rds remove-option-from-option-group \


--option-group-name ssas-se-2017 \
--options SSAS \
--apply-immediately

For Windows:

aws rds remove-option-from-option-group ^


--option-group-name ssas-se-2017 ^
--options SSAS ^
--apply-immediately

Troubleshooting SSAS issues


You might encounter the following issues when using SSAS.

Issue: Unable to configure the SSAS option. The requested SSAS mode is new_mode, but the
current DB instance has number current_mode databases. Delete the existing databases
before switching to new_mode mode. To regain access to current_mode mode for database
deletion, either update the current DB option group, or attach a new option group with %s
as the MODE option setting value for the SSAS option.
Type: RDS event
Troubleshooting suggestions: You can't change the SSAS mode if you still have SSAS
databases that use the current mode. Delete the SSAS databases, then try again.

Issue: Unable to remove the SSAS option because there are number existing mode databases.
The SSAS option can't be removed until all SSAS databases are deleted. Add the SSAS option
again, delete all SSAS databases, and try again.
Type: RDS event
Troubleshooting suggestions: You can't turn off SSAS if you still have SSAS databases.
Delete the SSAS databases, then try again.

Issue: The SSAS option isn't enabled or is in the process of being enabled. Try again
later.
Type: RDS stored procedure
Troubleshooting suggestions: You can't run SSAS stored procedures when the option is
turned off, or when it's being turned on.

Issue: The SSAS option is configured incorrectly. Make sure that the option group
membership status is "in-sync", and review the RDS event logs for relevant SSAS
configuration error messages. Following these investigations, try again. If errors
continue to occur, contact AWS Support.
Type: RDS stored procedure
Troubleshooting suggestions: You can't run SSAS stored procedures when your option group
membership isn't in the in-sync status. This puts the SSAS option in an incorrect
configuration state.

If your option group membership status changes to failed due to SSAS option modification,
there are two possible reasons:

1. The SSAS option was removed without the SSAS databases being deleted.
2. The SSAS mode was updated from Tabular to Multidimensional, or from Multidimensional
to Tabular, without the existing SSAS databases being deleted.

Reconfigure the SSAS option, because RDS allows only one SSAS mode at a time, and doesn't
support SSAS option removal with SSAS databases present.

Check the RDS event logs for configuration errors for your SSAS instance, and resolve the
issues accordingly.

Issue: Deployment failed. The change can only be deployed on a server running in
deployment_file_mode mode. The current server mode is current_mode.
Type: RDS stored procedure
Troubleshooting suggestions: You can't deploy a Tabular database to a Multidimensional
server, or a Multidimensional database to a Tabular server. Make sure that you're using
files with the correct mode, and verify that the MODE option setting is set to the
appropriate value.

Issue: The restore failed. The backup file can only be restored on a server running in
restore_file_mode mode. The current server mode is current_mode.
Type: RDS stored procedure
Troubleshooting suggestions: You can't restore a Tabular database to a Multidimensional
server, or a Multidimensional database to a Tabular server. Make sure that you're using
files with the correct mode, and verify that the MODE option setting is set to the
appropriate value.

Issue: The restore failed. The backup file and the RDS DB instance versions are
incompatible.
Type: RDS stored procedure
Troubleshooting suggestions: You can't restore an SSAS database with a version that is
incompatible with the SQL Server instance version. For more information, see Compatibility
levels for tabular models and Compatibility level of a multidimensional database in the
Microsoft documentation.

Issue: The restore failed. The backup file specified in the restore operation is damaged
or is not an SSAS backup file. Make sure that @rds_file_path is correctly formatted.
Type: RDS stored procedure
Troubleshooting suggestions: You can't restore an SSAS database with a damaged file. Make
sure that the file isn't damaged or corrupted. This error can also be raised when
@rds_file_path isn't correctly formatted (for example, it has double backslashes as in
D:\S3\\incorrect_format.abf).

Issue: The restore failed. The restored database name can't contain any reserved words or
invalid characters: . , ; ' ` : / \\ * | ? \" & % $ ! + = ( ) [ ] { } < >, or be longer
than 100 characters.
Type: RDS stored procedure
Troubleshooting suggestions: The restored database name can't contain any reserved words
or characters that aren't valid, or be longer than 100 characters. For SSAS object naming
conventions, see Object naming rules in the Microsoft documentation.

Issue: An invalid role name was provided. The role name can't contain any reserved
strings.
Type: RDS stored procedure
Troubleshooting suggestions: The role name can't contain any reserved strings. For SSAS
object naming conventions, see Object naming rules in the Microsoft documentation.

Issue: An invalid role name was provided. The role name can't contain any of the following
reserved characters: . , ; ' ` : / \\ * | ? \" & % $ ! + = ( ) [ ] { } < >
Type: RDS stored procedure
Troubleshooting suggestions: The role name can't contain any reserved characters. For SSAS
object naming conventions, see Object naming rules in the Microsoft documentation.


Support for SQL Server Integration Services in Amazon RDS for SQL Server
Microsoft SQL Server Integration Services (SSIS) is a component that you can use to perform a broad
range of data migration tasks. SSIS is a platform for data integration and workflow applications. It
features a data warehousing tool used for data extraction, transformation, and loading (ETL). You can
also use this tool to automate maintenance of SQL Server databases and updates to multidimensional
cube data.

SSIS projects are organized into packages saved as XML-based .dtsx files. Packages can contain control
flows and data flows. You use data flows to represent ETL operations. After deployment, packages are
stored in SQL Server in the SSISDB database. SSISDB is an online transaction processing (OLTP) database
in the full recovery mode.

Amazon RDS for SQL Server supports running SSIS directly on an RDS DB instance. You can enable SSIS
on an existing or new DB instance. SSIS is installed on the same DB instance as your database engine.

RDS supports SSIS for SQL Server Standard and Enterprise Editions on the following versions:

• SQL Server 2019, version 15.00.4043.16.v1 and higher


• SQL Server 2017, version 14.00.3223.3.v1 and higher
• SQL Server 2016, version 13.00.5426.0.v1 and higher

Contents
• Limitations and recommendations (p. 1562)
• Enabling SSIS (p. 1564)
• Creating the option group for SSIS (p. 1564)
• Adding the SSIS option to the option group (p. 1565)
• Creating the parameter group for SSIS (p. 1566)
• Modifying the parameter for SSIS (p. 1567)
• Associating the option group and parameter group with your DB instance (p. 1567)
• Enabling S3 integration (p. 1569)
• Administrative permissions on SSISDB (p. 1569)
• Setting up a Windows-authenticated user for SSIS (p. 1569)
• Deploying an SSIS project (p. 1570)
• Monitoring the status of a deployment task (p. 1571)
• Using SSIS (p. 1572)
• Setting database connection managers for SSIS projects (p. 1573)
• Creating an SSIS proxy (p. 1573)
• Scheduling an SSIS package using SQL Server Agent (p. 1574)
• Revoking SSIS access from the proxy (p. 1575)
• Disabling SSIS (p. 1575)
• Dropping the SSISDB database (p. 1576)

Limitations and recommendations


The following limitations and recommendations apply to running SSIS on RDS for SQL Server:

• The DB instance must have an associated parameter group with the clr enabled parameter set to 1.
For more information, see Modifying the parameter for SSIS (p. 1567).
Note
If you enable the clr enabled parameter on SQL Server 2017 or 2019, you can't use the
common language runtime (CLR) on your DB instance. For more information, see Features not
supported and features with limited support (p. 1367).
• The following control flow tasks are supported:
• Analysis Services Execute DDL Task
• Analysis Services Processing Task
• Bulk Insert Task
• Check Database Integrity Task
• Data Flow Task
• Data Mining Query Task
• Data Profiling Task
• Execute Package Task
• Execute SQL Server Agent Job Task
• Execute SQL Task
• Execute T-SQL Statement Task
• Notify Operator Task
• Rebuild Index Task
• Reorganize Index Task
• Shrink Database Task
• Transfer Database Task
• Transfer Jobs Task
• Transfer Logins Task
• Transfer SQL Server Objects Task
• Update Statistics Task
• Only project deployment is supported.
• Running SSIS packages by using SQL Server Agent is supported.
• SSIS log records can be inserted only into user-created databases.
• Use only the D:\S3 folder for working with files. Files placed in any other directory are deleted. Be
aware of a few other file location details:
• Place SSIS project input and output files in the D:\S3 folder.
• For the Data Flow Task, change the location for BLOBTempStoragePath and
BufferTempStoragePath to a file inside the D:\S3 folder. The file path must start with D:\S3\.
• Ensure that all parameters, variables, and expressions used for file connections point to the D:\S3
folder.
• On Multi-AZ instances, files created by SSIS in the D:\S3 folder are deleted after a failover. For more
information, see Multi-AZ limitations for S3 integration (p. 1476).
• Upload the files created by SSIS in the D:\S3 folder to your Amazon S3 bucket to make them
durable.
• Import Column and Export Column transformations and the Script component on the Data Flow Task
aren't supported.
• You can't enable dump on running SSIS packages, and you can't add data taps on SSIS packages.
• The SSIS Scale Out feature isn't supported.
• You can't deploy projects directly. We provide RDS stored procedures to do this. For more information,
see Deploying an SSIS project (p. 1570).

• Build SSIS project (.ispac) files with the DoNotSavePasswords protection mode for deploying on
RDS.
• SSIS isn't supported on Always On instances with read replicas.
• You can't back up the SSISDB database that is associated with the SSIS option.
• Importing and restoring the SSISDB database from other instances of SSIS isn't supported.
• You can connect to other SQL Server DB instances or to an Oracle data source. Connecting to other
database engines, such as MySQL or PostgreSQL, isn't supported for SSIS on RDS for SQL Server.
For more information on connecting to an Oracle data source, see Linked Servers with Oracle
OLEDB (p. 1517).

Enabling SSIS
You enable SSIS by adding the SSIS option to your DB instance. Use the following process:

1. Create a new option group, or choose an existing option group.


2. Add the SSIS option to the option group.
3. Create a new parameter group, or choose an existing parameter group.
4. Modify the parameter group to set the clr enabled parameter to 1.
5. Associate the option group and parameter group with the DB instance.
6. Enable Amazon S3 integration.

Note
If a database with the name SSISDB or a reserved SSIS login already exists on the DB instance,
you can't enable SSIS on the instance.

Creating the option group for SSIS


To work with SSIS, create an option group or modify an option group that corresponds to the SQL
Server edition and version of the DB instance that you plan to use. To do this, use the AWS Management
Console or the AWS CLI.

Console

The following procedure creates an option group for SQL Server Standard Edition 2016.

To create the option group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose Create group.
4. In the Create option group window, do the following:

a. For Name, enter a name for the option group that is unique within your AWS account, such as
ssis-se-2016. The name can contain only letters, digits, and hyphens.
b. For Description, enter a brief description of the option group, such as SSIS option group
for SQL Server SE 2016. The description is used for display purposes.
c. For Engine, choose sqlserver-se.
d. For Major engine version, choose 13.00.
5. Choose Create.


CLI

The following procedure creates an option group for SQL Server Standard Edition 2016.

To create the option group

• Run one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds create-option-group \


--option-group-name ssis-se-2016 \
--engine-name sqlserver-se \
--major-engine-version 13.00 \
--option-group-description "SSIS option group for SQL Server SE 2016"

For Windows:

aws rds create-option-group ^


--option-group-name ssis-se-2016 ^
--engine-name sqlserver-se ^
--major-engine-version 13.00 ^
--option-group-description "SSIS option group for SQL Server SE 2016"

Adding the SSIS option to the option group


Next, use the AWS Management Console or the AWS CLI to add the SSIS option to your option group.

Console

To add the SSIS option

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group that you just created, ssis-se-2016 in this example.
4. Choose Add option.
5. Under Option details, choose SSIS for Option name.
6. Under Scheduling, choose whether to add the option immediately or at the next maintenance
window.
7. Choose Add option.

CLI

To add the SSIS option

• Add the SSIS option to the option group.

Example

For Linux, macOS, or Unix:

aws rds add-option-to-option-group \


--option-group-name ssis-se-2016 \
--options OptionName=SSIS \
--apply-immediately

For Windows:

aws rds add-option-to-option-group ^


--option-group-name ssis-se-2016 ^
--options OptionName=SSIS ^
--apply-immediately

Creating the parameter group for SSIS


Create or modify a parameter group for the clr enabled parameter that corresponds to the SQL
Server edition and version of the DB instance that you plan to use for SSIS.

Console

The following procedure creates a parameter group for SQL Server Standard Edition 2016.

To create the parameter group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. Choose Create parameter group.
4. In the Create parameter group pane, do the following:

a. For Parameter group family, choose sqlserver-se-13.0.


b. For Group name, enter an identifier for the parameter group, such as ssis-sqlserver-
se-13.
c. For Description, enter clr enabled parameter group.
5. Choose Create.

CLI

The following procedure creates a parameter group for SQL Server Standard Edition 2016.

To create the parameter group

• Run one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds create-db-parameter-group \


--db-parameter-group-name ssis-sqlserver-se-13 \
--db-parameter-group-family "sqlserver-se-13.0" \
--description "clr enabled parameter group"

For Windows:

aws rds create-db-parameter-group ^


--db-parameter-group-name ssis-sqlserver-se-13 ^
--db-parameter-group-family "sqlserver-se-13.0" ^
--description "clr enabled parameter group"

Modifying the parameter for SSIS


Modify the clr enabled parameter in the parameter group that corresponds to the SQL Server edition
and version of your DB instance. For SSIS, set the clr enabled parameter to 1.

Console

The following procedure modifies the parameter group that you created for SQL Server Standard Edition
2016.

To modify the parameter group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. Choose the parameter group, such as ssis-sqlserver-se-13.
4. Under Parameters, filter the parameter list for clr.
5. Choose clr enabled.
6. Choose Edit parameters.
7. From Values, choose 1.
8. Choose Save changes.

CLI

The following procedure modifies the parameter group that you created for SQL Server Standard Edition
2016.

To modify the parameter group

• Run one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds modify-db-parameter-group \


--db-parameter-group-name ssis-sqlserver-se-13 \
--parameters "ParameterName='clr enabled',ParameterValue=1,ApplyMethod=immediate"

For Windows:

aws rds modify-db-parameter-group ^


--db-parameter-group-name ssis-sqlserver-se-13 ^
--parameters "ParameterName='clr enabled',ParameterValue=1,ApplyMethod=immediate"

Associating the option group and parameter group with your DB instance
To associate the SSIS option group and parameter group with your DB instance, use the AWS
Management Console or the AWS CLI.


Note
If you use an existing instance, it must already have an Active Directory domain and AWS
Identity and Access Management (IAM) role associated with it. If you create a new instance,
specify an existing Active Directory domain and IAM role. For more information, see Working
with Active Directory with RDS for SQL Server (p. 1387).

Console

To finish enabling SSIS, associate your SSIS option group and parameter group with a new or existing DB
instance:

• For a new DB instance, associate them when you launch the instance. For more information, see
Creating an Amazon RDS DB instance (p. 300).
• For an existing DB instance, associate them by modifying the instance. For more information, see
Modifying an Amazon RDS DB instance (p. 401).

CLI

You can associate the SSIS option group and parameter group with a new or existing DB instance.

To create an instance with the SSIS option group and parameter group

• Specify the same DB engine type and major version as you used when creating the option group.

Example

For Linux, macOS, or Unix:

aws rds create-db-instance \


--db-instance-identifier myssisinstance \
--db-instance-class db.m5.2xlarge \
--engine sqlserver-se \
--engine-version 13.00.5426.0.v1 \
--allocated-storage 100 \
--manage-master-user-password \
--master-username admin \
--storage-type gp2 \
--license-model li \
--domain-iam-role-name my-directory-iam-role \
--domain my-domain-id \
--option-group-name ssis-se-2016 \
--db-parameter-group-name ssis-sqlserver-se-13

For Windows:

aws rds create-db-instance ^


--db-instance-identifier myssisinstance ^
--db-instance-class db.m5.2xlarge ^
--engine sqlserver-se ^
--engine-version 13.00.5426.0.v1 ^
--allocated-storage 100 ^
--manage-master-user-password ^
--master-username admin ^
--storage-type gp2 ^
--license-model li ^
--domain-iam-role-name my-directory-iam-role ^
--domain my-domain-id ^
--option-group-name ssis-se-2016 ^
--db-parameter-group-name ssis-sqlserver-se-13


To modify an instance and associate the SSIS option group and parameter group

• Run one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds modify-db-instance \


--db-instance-identifier myssisinstance \
--option-group-name ssis-se-2016 \
--db-parameter-group-name ssis-sqlserver-se-13 \
--apply-immediately

For Windows:

aws rds modify-db-instance ^


--db-instance-identifier myssisinstance ^
--option-group-name ssis-se-2016 ^
--db-parameter-group-name ssis-sqlserver-se-13 ^
--apply-immediately

Enabling S3 integration
To download SSIS project (.ispac) files to your host for deployment, use S3 file integration. For more
information, see Integrating an Amazon RDS for SQL Server DB instance with Amazon S3 (p. 1464).

Administrative permissions on SSISDB


When the instance is created or modified with the SSIS option, the result is an SSISDB database with
the ssis_admin and ssis_logreader roles granted to the master user. The master user has the following
privileges in SSISDB:

• alter on ssis_admin role


• alter on ssis_logreader role
• alter any user

Because the master user is a SQL-authenticated user, you can't use the master user for executing SSIS
packages. The master user can use these privileges to create new SSISDB users and add them to the
ssis_admin and ssis_logreader roles. Doing this is useful for giving access to your domain users for using
SSIS.

Setting up a Windows-authenticated user for SSIS


The master user can use the following code example to set up a Windows-authenticated login in SSISDB
and grant the required procedure permissions. Doing this grants permissions to the domain user to
deploy and run SSIS packages, use S3 file transfer procedures, create credentials, and work with the SQL
Server Agent proxy. For more information, see Credentials (database engine) and Create a SQL Server
Agent proxy in the Microsoft documentation.
Note
You can grant some or all of the following permissions as needed to Windows-authenticated
users.


Example

-- Create a server-level SQL login for the domain user, if it doesn't already exist
USE [master]
GO
CREATE LOGIN [mydomain\user_name] FROM WINDOWS
GO

-- Create a database-level account for the domain user, if it doesn't already exist
USE [SSISDB]
GO
CREATE USER [mydomain\user_name] FOR LOGIN [mydomain\user_name]

-- Add SSIS role membership to the domain user


ALTER ROLE [ssis_admin] ADD MEMBER [mydomain\user_name]
ALTER ROLE [ssis_logreader] ADD MEMBER [mydomain\user_name]
GO

-- Add MSDB role membership to the domain user


USE [msdb]
GO
CREATE USER [mydomain\user_name] FOR LOGIN [mydomain\user_name]

-- Grant MSDB stored procedure privileges to the domain user


GRANT EXEC ON msdb.dbo.rds_msbi_task TO [mydomain\user_name] with grant option
GRANT SELECT ON msdb.dbo.rds_fn_task_status TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_task_status TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_cancel_task TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_download_from_s3 TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_upload_to_s3 TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_delete_from_filesystem TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_gather_file_details TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_add_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_update_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_grant_login_to_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_revoke_login_from_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_delete_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_enum_login_for_proxy to [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_enum_proxy_for_subsystem TO [mydomain\user_name] with grant
option
GRANT EXEC ON msdb.dbo.rds_sqlagent_proxy TO [mydomain\user_name] WITH GRANT OPTION

-- Add the SQLAgentUserRole privilege to the domain user


USE [msdb]
GO
ALTER ROLE [SQLAgentUserRole] ADD MEMBER [mydomain\user_name]
GO

-- Grant the ALTER ANY CREDENTIAL privilege to the domain user


USE [master]
GO
GRANT ALTER ANY CREDENTIAL TO [mydomain\user_name]
GO

Deploying an SSIS project


On RDS, you can't deploy SSIS projects directly by using SQL Server Management Studio (SSMS) or
SSIS procedures. To download project files from Amazon S3 and then deploy them, use RDS stored
procedures.

To run the stored procedures, log in as any user that you granted permissions for running the stored
procedures. For more information, see Setting up a Windows-authenticated user for SSIS (p. 1569).


To deploy the SSIS project

1. Download the project (.ispac) file.

exec msdb.dbo.rds_download_from_s3
@s3_arn_of_file='arn:aws:s3:::bucket_name/ssisproject.ispac',
[@rds_file_path='D:\S3\ssisproject.ispac'],
[@overwrite_file=1];

2. Submit the deployment task, making sure of the following:

• The folder is present in the SSIS catalog.


• The project name matches the project name that you used while developing the SSIS project.

exec msdb.dbo.rds_msbi_task
@task_type='SSIS_DEPLOY_PROJECT',
@folder_name='DEMO',
@project_name='ssisproject',
@file_path='D:\S3\ssisproject.ispac';

Monitoring the status of a deployment task


To track the status of your deployment task, call the rds_fn_task_status function. It takes two
parameters. The first parameter should always be NULL because it doesn't apply to SSIS. The second
parameter accepts a task ID.

To see a list of all tasks, set the first parameter to NULL and the second parameter to 0, as shown in the
following example.

SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,0);

To get a specific task, set the first parameter to NULL and the second parameter to the task ID, as shown
in the following example.

SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,42);

The rds_fn_task_status function returns the following information.

Output parameter Description

task_id The ID of the task.

task_type SSIS_DEPLOY_PROJECT

database_name Not applicable to SSIS tasks.

% complete The progress of the task as a percentage.

duration (mins) The amount of time spent on the task, in minutes.

lifecycle The status of the task. Possible statuses are the
following:

• CREATED – After you call the
msdb.dbo.rds_msbi_task stored procedure,
a task is created and the status is set to
CREATED.
• IN_PROGRESS – After a task starts, the status
is set to IN_PROGRESS. It can take up to five
minutes for the status to change from CREATED
to IN_PROGRESS.
• SUCCESS – After a task completes, the status is
set to SUCCESS.
• ERROR – If a task fails, the status is set to
ERROR. For more information about the error,
see the task_info column.
• CANCEL_REQUESTED – After you call
rds_cancel_task, the status of the task is set
to CANCEL_REQUESTED.
• CANCELLED – After a task is successfully
canceled, the status of the task is set to
CANCELLED.

task_info Additional information about the task. If an error


occurs during processing, this column contains
information about the error.

last_updated The date and time that the task status was last
updated.

created_at The date and time that the task was created.

S3_object_arn Not applicable to SSIS tasks.

overwrite_S3_backup_file Not applicable to SSIS tasks.

KMS_master_key_arn Not applicable to SSIS tasks.

filepath Not applicable to SSIS tasks.

overwrite_file Not applicable to SSIS tasks.

task_metadata Metadata associated with the SSIS task.
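
The CANCEL_REQUESTED and CANCELLED statuses refer to the rds_cancel_task stored procedure. As a minimal
sketch, you can cancel a deployment task that is still in progress by passing its task ID (the task ID 42
here is hypothetical).

exec msdb.dbo.rds_cancel_task @task_id = 42;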

Using SSIS
After deploying the SSIS project into the SSIS catalog, you can run packages directly from SSMS or
schedule them by using SQL Server Agent. You must use a Windows-authenticated login for executing
SSIS packages. For more information, see Setting up a Windows-authenticated user for SSIS (p. 1569).
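
For example, you can start a deployed package from SSMS with the SSISDB catalog stored procedures while
connected as a Windows-authenticated user. The following is a sketch only; the folder and project names
match the earlier deployment example, and the package name is hypothetical.

DECLARE @execution_id BIGINT;
-- Create an execution for the deployed package, then start it
EXEC SSISDB.catalog.create_execution
@folder_name = N'DEMO',
@project_name = N'ssisproject',
@package_name = N'package.dtsx',
@use32bitruntime = 0,
@execution_id = @execution_id OUTPUT;
EXEC SSISDB.catalog.start_execution @execution_id;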

Topics
• Setting database connection managers for SSIS projects (p. 1573)
• Creating an SSIS proxy (p. 1573)
• Scheduling an SSIS package using SQL Server Agent (p. 1574)
• Revoking SSIS access from the proxy (p. 1575)


Setting database connection managers for SSIS projects


When you use a connection manager, you can use these types of authentication:

• For local database connections using AWS Managed Active Directory, you can use
SQL authentication or Windows authentication. For Windows authentication, use
DB_instance_name.fully_qualified_domain_name as the server name of the connection string.

An example is myssisinstance.corp-ad.example.com, where myssisinstance is the DB


instance name and corp-ad.example.com is the fully qualified domain name.
• For remote connections, always use SQL authentication.
• For local database connections using self-managed Active Directory, you can use SQL authentication or
Windows authentication. For Windows authentication, use . or LocalHost as the server name of the
connection string.

Creating an SSIS proxy


To be able to schedule SSIS packages using SQL Server Agent, create an SSIS credential and an SSIS
proxy. Run these procedures as a Windows-authenticated user.

To create the SSIS credential

• Create the credential for the proxy. To do this, you can use SSMS or the following SQL statement.

USE [master]
GO
CREATE CREDENTIAL [SSIS_Credential] WITH IDENTITY = N'mydomain\user_name', SECRET =
N'mysecret'
GO

Note
IDENTITY must be a domain-authenticated login. Replace mysecret with the password for
the domain-authenticated login.
Whenever the SSISDB primary host is changed, alter the SSIS proxy credentials to allow the
new host to access them.

To create the SSIS proxy

1. Use the following SQL statement to create the proxy.

USE [msdb]
GO
EXEC msdb.dbo.sp_add_proxy
@proxy_name=N'SSIS_Proxy',@credential_name=N'SSIS_Credential',@description=N''
GO

2. Use the following SQL statement to grant access to the proxy to other users.

USE [msdb]
GO
EXEC msdb.dbo.sp_grant_login_to_proxy
@proxy_name=N'SSIS_Proxy',@login_name=N'mydomain\user_name'
GO

3. Use the following SQL statement to give the SSIS subsystem access to the proxy.

USE [msdb]


GO
EXEC msdb.dbo.rds_sqlagent_proxy
@task_type='GRANT_SUBSYSTEM_ACCESS',@proxy_name='SSIS_Proxy',@proxy_subsystem='SSIS'
GO

To view the proxy and grants on the proxy

1. Use the following SQL statement to view the grantees of the proxy.

USE [msdb]
GO
EXEC sp_help_proxy
GO

2. Use the following SQL statement to view the subsystem grants.

USE [msdb]
GO
EXEC msdb.dbo.sp_enum_proxy_for_subsystem
GO

Scheduling an SSIS package using SQL Server Agent


After you create the credential and proxy and grant SSIS access to the proxy, you can create a SQL Server
Agent job to schedule the SSIS package.

To schedule the SSIS package

• You can use SSMS or T-SQL for creating the SQL Server Agent job. The following example uses T-
SQL.

USE [msdb]
GO
DECLARE @jobId BINARY(16)
EXEC msdb.dbo.sp_add_job @job_name=N'MYSSISJob',
@enabled=1,
@notify_level_eventlog=0,
@notify_level_email=2,
@notify_level_page=2,
@delete_level=0,
@category_name=N'[Uncategorized (Local)]',
@job_id = @jobId OUTPUT
GO
EXEC msdb.dbo.sp_add_jobserver @job_name=N'MYSSISJob',@server_name=N'(local)'
GO
EXEC msdb.dbo.sp_add_jobstep @job_name=N'MYSSISJob',@step_name=N'ExecuteSSISPackage',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=1,
@on_fail_action=2,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0,
@subsystem=N'SSIS',
@command=N'/ISSERVER "\"\SSISDB\MySSISFolder\MySSISProject\MySSISPackage.dtsx\"" /
SERVER "\"my-rds-ssis-instance.corp-ad.company.com/\""
/Par "\"$ServerOption::LOGGING_LEVEL(Int16)\"";1 /Par
"\"$ServerOption::SYNCHRONIZED(Boolean)\"";True /CALLERINFO SQLAGENT /REPORTING E',
@database_name=N'master',


@flags=0,
@proxy_name=N'SSIS_Proxy'
GO
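
The preceding statements create the job and its step but don't attach a schedule. As a minimal sketch, you
might add a daily schedule with the msdb sp_add_jobschedule procedure; the schedule name and start time are
hypothetical.

USE [msdb]
GO
EXEC msdb.dbo.sp_add_jobschedule @job_name=N'MYSSISJob',
@name=N'DailySSISRun',
@freq_type=4, -- daily
@freq_interval=1, -- every 1 day
@active_start_time=020000 -- 02:00:00
GO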

Revoking SSIS access from the proxy


You can revoke access to the SSIS subsystem and delete the SSIS proxy using the following stored
procedures.

To revoke access and delete the proxy

1. Revoke subsystem access.

USE [msdb]
GO
EXEC msdb.dbo.rds_sqlagent_proxy
@task_type='REVOKE_SUBSYSTEM_ACCESS',@proxy_name='SSIS_Proxy',@proxy_subsystem='SSIS'
GO

2. Revoke the grants on the proxy.

USE [msdb]
GO
EXEC msdb.dbo.sp_revoke_login_from_proxy
@proxy_name=N'SSIS_Proxy',@name=N'mydomain\user_name'
GO

3. Delete the proxy.

USE [msdb]
GO
EXEC dbo.sp_delete_proxy @proxy_name = N'SSIS_Proxy'
GO

Disabling SSIS
To disable SSIS, remove the SSIS option from its option group.
Important
Removing the option doesn't delete the SSISDB database, so you can safely remove the option
without losing the SSIS projects.
You can re-enable the SSIS option after removal to reuse the SSIS projects that were previously
deployed to the SSIS catalog.

Console

The following procedure removes the SSIS option.

To remove the SSIS option from its option group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group with the SSIS option (ssis-se-2016 in the previous examples).
4. Choose Delete option.
5. Under Deletion options, choose SSIS for Options to delete.


6. Under Apply immediately, choose Yes to delete the option immediately, or No to delete it at the
next maintenance window.
7. Choose Delete.

CLI

The following procedure removes the SSIS option.

To remove the SSIS option from its option group

• Run one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds remove-option-from-option-group \


--option-group-name ssis-se-2016 \
--options SSIS \
--apply-immediately

For Windows:

aws rds remove-option-from-option-group ^


--option-group-name ssis-se-2016 ^
--options SSIS ^
--apply-immediately

Dropping the SSISDB database


After removing the SSIS option, the SSISDB database isn't deleted. To drop the SSISDB database, use the
rds_drop_ssis_database stored procedure after removing the SSIS option.

To drop the SSIS database

• Use the following stored procedure.

USE [msdb]
GO
EXEC dbo.rds_drop_ssis_database
GO

After dropping the SSISDB database, if you re-enable the SSIS option you get a fresh SSISDB catalog.


Support for SQL Server Reporting Services in Amazon RDS for SQL Server
Microsoft SQL Server Reporting Services (SSRS) is a server-based application used for report generation
and distribution. It's part of a suite of SQL Server services that also includes SQL Server Analysis Services
(SSAS) and SQL Server Integration Services (SSIS). SSRS is a service built on top of SQL Server. You can
use it to collect data from various data sources and present it in a way that's easily understandable and
ready for analysis.

Amazon RDS for SQL Server supports running SSRS directly on RDS DB instances. You can use SSRS with
existing or new DB instances.

RDS supports SSRS for SQL Server Standard and Enterprise Editions on the following versions:

• SQL Server 2019, version 15.00.4043.16.v1 and higher


• SQL Server 2017, version 14.00.3223.3.v1 and higher
• SQL Server 2016, version 13.00.5820.21.v1 and higher

Contents
• Limitations and recommendations (p. 1577)
• Turning on SSRS (p. 1578)
• Creating an option group for SSRS (p. 1578)
• Adding the SSRS option to your option group (p. 1579)
• Associating your option group with your DB instance (p. 1581)
• Allowing inbound access to your VPC security group (p. 1583)
• Report server databases (p. 1583)
• SSRS log files (p. 1583)
• Accessing the SSRS web portal (p. 1583)
• Using SSL on RDS (p. 1583)
• Granting access to domain users (p. 1584)
• Accessing the web portal (p. 1583)
• Deploying reports to SSRS (p. 1584)
• Configuring the report data source (p. 1585)
• Using SSRS Email to send reports (p. 1585)
• Revoking system-level permissions (p. 1586)
• Monitoring the status of a task (p. 1587)
• Turning off SSRS (p. 1588)
• Deleting the SSRS databases (p. 1589)

Limitations and recommendations


The following limitations and recommendations apply to running SSRS on RDS for SQL Server:

• You can't use SSRS on DB instances that have read replicas.


• Instances must use self-managed Active Directory or AWS Directory Service for Microsoft Active
Directory for SSRS web portal and web server authentication. For more information, see Working with
Active Directory with RDS for SQL Server (p. 1387).
• You can't back up the reporting server databases that are created with the SSRS option.
• Importing and restoring report server databases from other instances of SSRS isn't supported.


Make sure to use the databases that are created when the SSRS option is added to the RDS DB
instance. For more information, see Report server databases (p. 1583).
• You can't configure SSRS to listen on the default SSL port (443). The allowed values are 1150–49511,
except 1234, 1434, 3260, 3343, 3389, and 47001.
• Subscriptions through a Microsoft Windows file share aren't supported.
• Using Reporting Services Configuration Manager isn't supported.
• Creating and modifying roles isn't supported.
• Modifying report server properties isn't supported.
• System administrator and system user roles aren't granted.
• You can't edit system-level role assignments through the web portal.

Turning on SSRS
Use the following process to turn on SSRS for your DB instance:

1. Create a new option group, or choose an existing option group.


2. Add the SSRS option to the option group.
3. Associate the option group with the DB instance.
4. Allow inbound access to the virtual private cloud (VPC) security group for the SSRS listener port.

Creating an option group for SSRS


To work with SSRS, create an option group that corresponds to the SQL Server engine and version of the
DB instance that you plan to use. To do this, use the AWS Management Console or the AWS CLI.
Note
You can also use an existing option group if it's for the correct SQL Server engine and version.

Console

The following procedure creates an option group for SQL Server Standard Edition 2017.

To create the option group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose Create group.
4. In the Create option group pane, do the following:

a. For Name, enter a name for the option group that is unique within your AWS account, such as
ssrs-se-2017. The name can contain only letters, digits, and hyphens.
b. For Description, enter a brief description of the option group, such as SSRS option group
for SQL Server SE 2017. The description is used for display purposes.
c. For Engine, choose sqlserver-se.
d. For Major engine version, choose 14.00.
5. Choose Create.

CLI

The following procedure creates an option group for SQL Server Standard Edition 2017.


To create the option group

• Run one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds create-option-group \


--option-group-name ssrs-se-2017 \
--engine-name sqlserver-se \
--major-engine-version 14.00 \
--option-group-description "SSRS option group for SQL Server SE 2017"

For Windows:

aws rds create-option-group ^


--option-group-name ssrs-se-2017 ^
--engine-name sqlserver-se ^
--major-engine-version 14.00 ^
--option-group-description "SSRS option group for SQL Server SE 2017"

Adding the SSRS option to your option group


Next, use the AWS Management Console or the AWS CLI to add the SSRS option to your option group.

Console

To add the SSRS option

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group that you just created, then choose Add option.
4. Under Option details, choose SSRS for Option name.
5. Under Option settings, do the following:

a. Enter the port for the SSRS service to listen on. The default is 8443. For a list of allowed values,
see Limitations and recommendations (p. 1577).
b. Enter a value for Max memory.

Max memory specifies the upper threshold above which no new memory allocation requests are
granted to report server applications. The number is a percentage of the total memory of the
DB instance. The allowed values are 10–80.
c. For Security groups, choose the VPC security group to associate with the option. Use the same
security group that is associated with your DB instance.
6. To use SSRS Email to send reports, choose the Configure email delivery options check box under
Email delivery in reporting services, and then do the following:

a. For Sender email address, enter the email address to use in the From field of messages sent by
SSRS Email.

Specify a user account that has permission to send mail from the SMTP server.
b. For SMTP server, specify the SMTP server or gateway to use.


It can be an IP address, the NetBIOS name of a computer on your corporate intranet, or a fully
qualified domain name.
c. For SMTP port, enter the port to use to connect to the mail server. The default is 25.
d. To use authentication:

i. Select the Use authentication check box.


ii. For Secret Amazon Resource Name (ARN) enter the AWS Secrets Manager ARN for the user
credentials.

Use the following format:

arn:aws:secretsmanager:Region:AccountId:secret:SecretName-6RandomCharacters

For example:

arn:aws:secretsmanager:us-west-2:123456789012:secret:MySecret-a1b2c3

For more information on creating the secret, see Using SSRS Email to send
reports (p. 1585).
e. Select the Use Secure Sockets Layer (SSL) check box to encrypt email messages using SSL.
7. Under Scheduling, choose whether to add the option immediately or at the next maintenance
window.
8. Choose Add option.

CLI

To add the SSRS option

1. Create a JSON file, for example ssrs-option.json.

a. Set the following required parameters:

• OptionGroupName – The name of option group that you created or chose previously (ssrs-
se-2017 in the following example).
• Port – The port for the SSRS service to listen on. The default is 8443. For a list of allowed
values, see Limitations and recommendations (p. 1577).
• VpcSecurityGroupMemberships – VPC security group memberships for your RDS DB
instance.
• MAX_MEMORY – The upper threshold above which no new memory allocation requests are
granted to report server applications. The number is a percentage of the total memory of the
DB instance. The allowed values are 10–80.
b. (Optional) Set the following parameters to use SSRS Email:

• SMTP_ENABLE_EMAIL – Set to true to use SSRS Email. The default is false.


• SMTP_SENDER_EMAIL_ADDRESS – The email address to use in the From field of messages
sent by SSRS Email. Specify a user account that has permission to send mail from the SMTP
server.
• SMTP_SERVER – The SMTP server or gateway to use. It can be an IP address, the NetBIOS
name of a computer on your corporate intranet, or a fully qualified domain name.
• SMTP_PORT – The port to use to connect to the mail server. The default is 25.
• SMTP_USE_SSL – Set to true to encrypt email messages using SSL. The default is true.
• SMTP_EMAIL_CREDENTIALS_SECRET_ARN – The Secrets Manager ARN that holds the user
credentials. Use the following format:

arn:aws:secretsmanager:Region:AccountId:secret:SecretName-6RandomCharacters

For more information on creating the secret, see Using SSRS Email to send reports (p. 1585).
• SMTP_USE_ANONYMOUS_AUTHENTICATION – Set to true and don't include
SMTP_EMAIL_CREDENTIALS_SECRET_ARN if you don't want to use authentication.

The default is false when SMTP_ENABLE_EMAIL is true.

The following example includes the SSRS Email parameters, using the secret ARN.

{
"OptionGroupName": "ssrs-se-2017",
"OptionsToInclude": [
{
"OptionName": "SSRS",
"Port": 8443,
"VpcSecurityGroupMemberships": ["sg-0abcdef123"],
"OptionSettings": [
{"Name": "MAX_MEMORY","Value": "60"},
{"Name": "SMTP_ENABLE_EMAIL","Value": "true"}
{"Name": "SMTP_SENDER_EMAIL_ADDRESS","Value": "[email protected]"},
{"Name": "SMTP_SERVER","Value": "email-smtp.us-west-2.amazonaws.com"},
{"Name": "SMTP_PORT","Value": "25"},
{"Name": "SMTP_USE_SSL","Value": "true"},
{"Name": "SMTP_EMAIL_CREDENTIALS_SECRET_ARN","Value":
"arn:aws:secretsmanager:us-west-2:123456789012:secret:MySecret-a1b2c3"}
]
}],
"ApplyImmediately": true
}

2. Add the SSRS option to the option group.

Example

For Linux, macOS, or Unix:

aws rds add-option-to-option-group \


--cli-input-json file://ssrs-option.json \
--apply-immediately

For Windows:

aws rds add-option-to-option-group ^


--cli-input-json file://ssrs-option.json ^
--apply-immediately

Associating your option group with your DB instance


Use the AWS Management Console or the AWS CLI to associate your option group with your DB instance.

If you use an existing DB instance, it must already have an Active Directory domain and AWS Identity and
Access Management (IAM) role associated with it. If you create a new instance, specify an existing Active
Directory domain and IAM role. For more information, see Working with Active Directory with RDS for
SQL Server (p. 1387).


Console

You can associate your option group with a new or existing DB instance:

• For a new DB instance, associate the option group when you launch the instance. For more
information, see Creating an Amazon RDS DB instance (p. 300).
• For an existing DB instance, modify the instance and associate the new option group. For more
information, see Modifying an Amazon RDS DB instance (p. 401).

CLI

You can associate your option group with a new or existing DB instance.

To create a DB instance that uses your option group

• Specify the same DB engine type and major version as you used when creating the option group.

Example

For Linux, macOS, or Unix:

aws rds create-db-instance \


--db-instance-identifier myssrsinstance \
--db-instance-class db.m5.2xlarge \
--engine sqlserver-se \
--engine-version 14.00.3223.3.v1 \
--allocated-storage 100 \
--manage-master-user-password \
--master-username admin \
--storage-type gp2 \
--license-model li \
--domain-iam-role-name my-directory-iam-role \
--domain my-domain-id \
--option-group-name ssrs-se-2017

For Windows:

aws rds create-db-instance ^


--db-instance-identifier myssrsinstance ^
--db-instance-class db.m5.2xlarge ^
--engine sqlserver-se ^
--engine-version 14.00.3223.3.v1 ^
--allocated-storage 100 ^
--manage-master-user-password ^
--master-username admin ^
--storage-type gp2 ^
--license-model li ^
--domain-iam-role-name my-directory-iam-role ^
--domain my-domain-id ^
--option-group-name ssrs-se-2017

To modify a DB instance to use your option group

• Run one of the following commands.

Example

For Linux, macOS, or Unix:


aws rds modify-db-instance \


--db-instance-identifier myssrsinstance \
--option-group-name ssrs-se-2017 \
--apply-immediately

For Windows:

aws rds modify-db-instance ^


--db-instance-identifier myssrsinstance ^
--option-group-name ssrs-se-2017 ^
--apply-immediately

Allowing inbound access to your VPC security group


To allow inbound access to the VPC security group associated with your DB instance, create an inbound
rule for the specified SSRS listener port. For more information about setting up security groups, see
Provide access to your DB instance in your VPC by creating a security group (p. 177).

Report server databases


When your DB instance is associated with the SSRS option, two new databases are created on your
DB instance: rdsadmin_ReportServer and rdsadmin_ReportServerTempDB. These databases act as the
ReportServer and ReportServerTempDB databases. SSRS stores its data in the ReportServer database
and caches its data in the ReportServerTempDB database.

RDS owns and manages these databases, so database operations on them such as ALTER and DROP
aren't permitted. However, you can perform read operations on the rdsadmin_ReportServer database.
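
For example, you can list the items deployed to the report server by querying the catalog table in the
rdsadmin_ReportServer database. This is a minimal sketch that assumes the standard report server schema;
adjust the column list as needed.

USE [rdsadmin_ReportServer]
GO
-- List folders, reports, and data sources registered in the report server catalog
SELECT Name, Path, CreationDate
FROM dbo.Catalog;
GO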

SSRS log files


You can access ReportServerService_timestamp.log files. These report server logs can be found in the
D:\rdsdbdata\Log\SSRS directory. (The D:\rdsdbdata\Log directory is also the parent directory
for error logs and SQL Server Agent logs.)

For existing SSRS instances, restarting the SSRS service might be necessary to access report server logs.
You can restart the service by updating the SSRS option.

For more information, see Working with Microsoft SQL Server logs (p. 1619).

Accessing the SSRS web portal


Use the following process to access the SSRS web portal:

1. Turn on Secure Sockets Layer (SSL).


2. Grant access to domain users.
3. Access the web portal using a browser and the domain user credentials.

Using SSL on RDS


SSRS uses the HTTPS SSL protocol for its connections. To work with this protocol, import an SSL
certificate into the Microsoft Windows operating system on your client computer.

For more information on SSL certificates, see Using SSL/TLS to encrypt a connection to a DB
instance (p. 2591). For more information about using SSL with SQL Server, see Using SSL with a
Microsoft SQL Server DB instance (p. 1456).


Granting access to domain users


In a new SSRS activation, there are no role assignments in SSRS. To give a domain user or user group
access to the web portal, RDS provides a stored procedure.

To grant access to a domain user on the web portal

• Use the following stored procedure.

exec msdb.dbo.rds_msbi_task
@task_type='SSRS_GRANT_PORTAL_PERMISSION',
@ssrs_group_or_username=N'AD_domain\user';

The domain user or user group is granted the RDS_SSRS_ROLE system role. This role has the following
system-level tasks granted to it:

• Run reports
• Manage jobs
• Manage shared schedules
• View shared schedules

The item-level role of Content Manager on the root folder is also granted.

Accessing the web portal


After the SSRS_GRANT_PORTAL_PERMISSION task finishes successfully, you have access to the portal
using a web browser. The web portal URL has the following format.

https://rds_endpoint:port/Reports

In this format, the following applies:

• rds_endpoint – The endpoint for the RDS DB instance that you're using with SSRS.

You can find the endpoint on the Connectivity & security tab for your DB instance. For more
information, see Connecting to a DB instance running the Microsoft SQL Server database
engine (p. 1380).
• port – The listener port for SSRS that you set in the SSRS option.

To access the web portal

1. Enter the web portal URL in your browser.

https://myssrsinstance.cg034itsfake.us-east-1.rds.amazonaws.com:8443/Reports

2. Log in with the credentials for a domain user that you granted access with the
SSRS_GRANT_PORTAL_PERMISSION task.

Deploying reports to SSRS


After you have access to the web portal, you can deploy reports to it. You can use the Upload tool in the
web portal to upload reports, or deploy directly from SQL Server data tools (SSDT). When deploying
from SSDT, ensure the following:


• The user who launched SSDT has access to the SSRS web portal.
• The TargetServerURL value in the SSRS project properties is set to the HTTPS endpoint of the RDS
DB instance suffixed with ReportServer, for example:

https://myssrsinstance.cg034itsfake.us-east-1.rds.amazonaws.com:8443/ReportServer

Configuring the report data source


After you deploy a report to SSRS, you should configure the report data source. When configuring the
report data source, ensure the following:

• For RDS for SQL Server DB instances joined to AWS Directory Service for Microsoft Active Directory,
use the fully qualified domain name (FQDN) as the data source name of the connection string. An
example is myssrsinstance.corp-ad.example.com, where myssrsinstance is the DB instance
name and corp-ad.example.com is the fully qualified domain name.
• For RDS for SQL Server DB instances joined to self-managed Active Directory, use ., or LocalHost as
the data source name of the connection string.

Using SSRS Email to send reports


SSRS includes the SSRS Email extension, which you can use to send reports to users.

To configure SSRS Email, use the SSRS option settings. For more information, see Adding the SSRS
option to your option group (p. 1579).

After configuring SSRS Email, you can subscribe to reports on the report server. For more information,
see Email delivery in Reporting Services in the Microsoft documentation.

Integration with AWS Secrets Manager is required for SSRS Email to function on RDS. To integrate with
Secrets Manager, you create a secret.
Note
If you change the secret later, you also have to update the SSRS option in the option group.

To create a secret for SSRS Email

1. Follow the steps in Create a secret in the AWS Secrets Manager User Guide.

a. For Select secret type, choose Other type of secrets.


b. For Key/value pairs, enter the following:

• SMTP_USERNAME – Enter a user with permission to send mail from the SMTP server.
• SMTP_PASSWORD – Enter a password for the SMTP user.
c. For Encryption key, don't use the default AWS KMS key. Use your own existing key, or create a
new one.

The KMS key policy must allow the kms:Decrypt action, for example:

{
"Sid": "Allow use of the key",
"Effect": "Allow",
"Principal": {
"Service": [
"rds.amazonaws.com"


]
},
"Action": [
"kms:Decrypt"
],
"Resource": "*"
}

2. Follow the steps in Attach a permissions policy to a secret in the AWS Secrets Manager User
Guide. The permissions policy gives the secretsmanager:GetSecretValue action to the
rds.amazonaws.com service principal.

We recommend that you use the aws:sourceAccount and aws:sourceArn conditions in the
policy to avoid the confused deputy problem. Use your AWS account for aws:sourceAccount and
the option group ARN for aws:sourceArn. For more information, see Preventing cross-service
confused deputy problems (p. 2640).

The following example shows a permissions policy.

{
  "Version": "2012-10-17",
  "Statement": [ {
    "Effect": "Allow",
    "Principal": {
      "Service": "rds.amazonaws.com"
    },
    "Action": "secretsmanager:GetSecretValue",
    "Resource": "*",
    "Condition": {
      "StringEquals": {
        "aws:sourceAccount": "123456789012"
      },
      "ArnLike": {
        "aws:sourceArn": "arn:aws:rds:us-west-2:123456789012:og:ssrs-se-2017"
      }
    }
  } ]
}

For more examples, see Permissions policy examples for AWS Secrets Manager in the AWS Secrets
Manager User Guide.

Revoking system-level permissions


The RDS_SSRS_ROLE system role doesn't have sufficient permissions to delete system-level role
assignments. To remove a user or user group from RDS_SSRS_ROLE, use the same stored procedure that
you used to grant the role but use the SSRS_REVOKE_PORTAL_PERMISSION task type.

To revoke access from a domain user for the web portal

• Use the following stored procedure.

exec msdb.dbo.rds_msbi_task
@task_type='SSRS_REVOKE_PORTAL_PERMISSION',
@ssrs_group_or_username=N'AD_domain\user';

Doing this deletes the user from the RDS_SSRS_ROLE system role. It also deletes the user from the
Content Manager item-level role if the user has it.


Monitoring the status of a task


To track the status of your granting or revoking task, call the rds_fn_task_status function. It takes
two parameters. The first parameter should always be NULL because it doesn't apply to SSRS. The
second parameter accepts a task ID.

To see a list of all tasks, set the first parameter to NULL and the second parameter to 0, as shown in the
following example.

SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,0);

To get a specific task, set the first parameter to NULL and the second parameter to the task ID, as shown
in the following example.

SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,42);

The rds_fn_task_status function returns the following information.

• task_id – The ID of the task.
• task_type – For SSRS, tasks can have the following task types:
  • SSRS_GRANT_PORTAL_PERMISSION
  • SSRS_REVOKE_PORTAL_PERMISSION
• database_name – Not applicable to SSRS tasks.
• % complete – The progress of the task as a percentage.
• duration (mins) – The amount of time spent on the task, in minutes.
• lifecycle – The status of the task. Possible statuses are the following:
  • CREATED – After you call one of the SSRS stored procedures, a task is created and the status is set to CREATED.
  • IN_PROGRESS – After a task starts, the status is set to IN_PROGRESS. It can take up to five minutes for the status to change from CREATED to IN_PROGRESS.
  • SUCCESS – After a task completes, the status is set to SUCCESS.
  • ERROR – If a task fails, the status is set to ERROR. For more information about the error, see the task_info column.
  • CANCEL_REQUESTED – After you call the rds_cancel_task stored procedure, the status of the task is set to CANCEL_REQUESTED.
  • CANCELLED – After a task is successfully canceled, the status of the task is set to CANCELLED.
• task_info – Additional information about the task. If an error occurs during processing, this column contains information about the error.
• last_updated – The date and time that the task status was last updated.
• created_at – The date and time that the task was created.
• S3_object_arn – Not applicable to SSRS tasks.
• overwrite_S3_backup_file – Not applicable to SSRS tasks.
• KMS_master_key_arn – Not applicable to SSRS tasks.
• filepath – Not applicable to SSRS tasks.
• overwrite_file – Not applicable to SSRS tasks.
• task_metadata – Metadata associated with the SSRS task.
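
To cancel a task that hasn't finished, you can call the rds_cancel_task stored procedure with the task ID reported by rds_fn_task_status, as in the following example. The task ID 42 is a placeholder.

exec msdb.dbo.rds_cancel_task @task_id = 42;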

Turning off SSRS
To turn off SSRS, remove the SSRS option from its option group. Removing the option doesn't delete the
SSRS databases. For more information, see Deleting the SSRS databases (p. 1589).

You can turn SSRS on again by adding back the SSRS option. If you have also deleted the SSRS
databases, re-adding the option on the same DB instance creates new report server databases.

Console

To remove the SSRS option from its option group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group with the SSRS option (ssrs-se-2017 in the previous examples).
4. Choose Delete option.
5. Under Deletion options, choose SSRS for Options to delete.
6. Under Apply immediately, choose Yes to delete the option immediately, or No to delete it at the
next maintenance window.
7. Choose Delete.

CLI

To remove the SSRS option from its option group

• Run one of the following commands.

Example
For Linux, macOS, or Unix:

aws rds remove-option-from-option-group \


--option-group-name ssrs-se-2017 \
--options SSRS \
--apply-immediately

For Windows:

aws rds remove-option-from-option-group ^


--option-group-name ssrs-se-2017 ^
--options SSRS ^
--apply-immediately

Deleting the SSRS databases


Removing the SSRS option doesn't delete the report server databases. To delete them, use the following
stored procedure.

To delete the report server databases, be sure to remove the SSRS option first.

To delete the SSRS databases

• Use the following stored procedure.

exec msdb.dbo.rds_drop_ssrs_databases


Support for Microsoft Distributed Transaction Coordinator in RDS for SQL Server
A distributed transaction is a database transaction in which two or more network hosts are involved. RDS
for SQL Server supports distributed transactions among hosts, where a single host can be one of the
following:

• RDS for SQL Server DB instance


• On-premises SQL Server host
• Amazon EC2 host with SQL Server installed
• Any other EC2 host or RDS DB instance with a database engine that supports distributed transactions

In RDS, starting with SQL Server 2012 (version 11.00.5058.0.v1 and later), all editions of RDS for SQL
Server support distributed transactions. The support is provided using Microsoft Distributed Transaction
Coordinator (MSDTC). For in-depth information about MSDTC, see Distributed Transaction Coordinator in
the Microsoft documentation.

Contents
• Limitations (p. 1590)
• Enabling MSDTC (p. 1591)
• Creating the option group for MSDTC (p. 1591)
• Adding the MSDTC option to the option group (p. 1592)
• Creating the parameter group for MSDTC (p. 1594)
• Modifying the parameter for MSDTC (p. 1594)
• Associating the option group and parameter group with the DB instance (p. 1595)
• Using distributed transactions (p. 1597)
• Using XA transactions (p. 1597)
• Using transaction tracing (p. 1598)
• Modifying the MSDTC option (p. 1599)
• Disabling MSDTC (p. 1599)
• Troubleshooting MSDTC for RDS for SQL Server (p. 1600)

Limitations
The following limitations apply to using MSDTC on RDS for SQL Server:

• MSDTC isn't supported on instances using SQL Server Database Mirroring. For more information, see
Transactions - availability groups and database mirroring.
• The in-doubt xact resolution parameter must be set to 1 or 2. For more information, see
Modifying the parameter for MSDTC (p. 1594).
• MSDTC requires all hosts participating in distributed transactions to be resolvable using their host
names. RDS automatically maintains this functionality for domain-joined instances. However, for
standalone instances, make sure to configure the DNS server manually.
• Java Database Connectivity (JDBC) XA transactions are supported for SQL Server 2017 version
14.00.3223.3 and higher, and SQL Server 2019.
• Distributed transactions that depend on client dynamic link libraries (DLLs) on RDS instances aren't
supported.
• Using custom XA dynamic link libraries isn't supported.


Enabling MSDTC
Use the following process to enable MSDTC for your DB instance:

1. Create a new option group, or choose an existing option group.


2. Add the MSDTC option to the option group.
3. Create a new parameter group, or choose an existing parameter group.
4. Modify the parameter group to set the in-doubt xact resolution parameter to 1 or 2.
5. Associate the option group and parameter group with the DB instance.

Creating the option group for MSDTC


Use the AWS Management Console or the AWS CLI to create an option group that corresponds to the
SQL Server engine and version of your DB instance.
Note
You can also use an existing option group if it's for the correct SQL Server engine and version.

Console

The following procedure creates an option group for SQL Server Standard Edition 2016.

To create the option group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose Create group.
4. In the Create option group pane, do the following:

a. For Name, enter a name for the option group that is unique within your AWS account, such as
msdtc-se-2016. The name can contain only letters, digits, and hyphens.
b. For Description, enter a brief description of the option group, such as MSDTC option group
for SQL Server SE 2016. The description is used for display purposes.
c. For Engine, choose sqlserver-se.
d. For Major engine version, choose 13.00.
5. Choose Create.

CLI

The following example creates an option group for SQL Server Standard Edition 2016.

To create the option group

• Use one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds create-option-group \


--option-group-name msdtc-se-2016 \
--engine-name sqlserver-se \
--major-engine-version 13.00 \


--option-group-description "MSDTC option group for SQL Server SE 2016"

For Windows:

aws rds create-option-group ^


--option-group-name msdtc-se-2016 ^
--engine-name sqlserver-se ^
--major-engine-version 13.00 ^
--option-group-description "MSDTC option group for SQL Server SE 2016"

Adding the MSDTC option to the option group


Next, use the AWS Management Console or the AWS CLI to add the MSDTC option to the option group.

The following option settings are required:

• Port – The port that you use to access MSDTC. Allowed values are 1150–49151 except for 1234, 1434,
3260, 3343, 3389, and 47001. The default value is 5000.

Make sure that the port you want to use is enabled in your firewall rules. Also, make sure as needed
that this port is enabled in the inbound and outbound rules for the security group associated with your
DB instance. For more information, see Can't connect to Amazon RDS DB instance (p. 2727).
• Security groups – The VPC security group memberships for your RDS DB instance.
• Authentication type – The authentication mode between hosts. The following authentication types
are supported:
• Mutual – The RDS instances are mutually authenticated to each other using integrated
authentication. If this option is selected, all instances associated with this option group must be
domain-joined.
• None – No authentication is performed between hosts. We don't recommend using this mode in
production environments.
• Transaction log size – The size of the MSDTC transaction log. Allowed values are 4–1024 MB. The
default size is 4 MB.

The following option settings are optional:

• Enable inbound connections – Whether to allow inbound MSDTC connections to instances associated
with this option group.
• Enable outbound connections – Whether to allow outbound MSDTC connections from instances
associated with this option group.
• Enable XA – Whether to allow XA transactions. For more information on the XA protocol, see XA
specification.
• Enable SNA LU – Whether to allow the SNA LU protocol to be used for distributed transactions. For
more information on SNA LU protocol support, see Managing IBM CICS LU 6.2 transactions in the
Microsoft documentation.

Console

To add the MSDTC option

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group that you just created.


4. Choose Add option.


5. Under Option details, choose MSDTC for Option name.
6. Under Option settings:

a. For Port, enter the port number for accessing MSDTC. The default is 5000.
b. For Security groups, choose the VPC security group to associate with the option.
c. For Authentication type, choose Mutual or None.
d. For Transaction log size, enter a value from 4–1024. The default is 4.
7. Under Additional configuration, do the following:

a. For Connections, as needed choose Enable inbound connections and Enable outbound
connections.
b. For Allowed protocols, as needed choose Enable XA and Enable SNA LU.
8. Under Scheduling, choose whether to add the option immediately or at the next maintenance
window.
9. Choose Add option.

To add this option, no reboot is required.

CLI

To add the MSDTC option

1. Create a JSON file, for example msdtc-option.json, with the following required parameters.

{
  "OptionGroupName": "msdtc-se-2016",
  "OptionsToInclude": [
    {
      "OptionName": "MSDTC",
      "Port": 5000,
      "VpcSecurityGroupMemberships": ["sg-0abcdef123"],
      "OptionSettings": [
        {"Name": "AUTHENTICATION", "Value": "MUTUAL"},
        {"Name": "TRANSACTION_LOG_SIZE", "Value": "4"}
      ]
    }
  ],
  "ApplyImmediately": true
}

2. Add the MSDTC option to the option group.

Example

For Linux, macOS, or Unix:

aws rds add-option-to-option-group \


--cli-input-json file://msdtc-option.json \
--apply-immediately

For Windows:

aws rds add-option-to-option-group ^


--cli-input-json file://msdtc-option.json ^
--apply-immediately

No reboot is required.


Creating the parameter group for MSDTC


Create or modify a parameter group for the in-doubt xact resolution parameter that corresponds
to the SQL Server edition and version of your DB instance.

Console

The following example creates a parameter group for SQL Server Standard Edition 2016.

To create the parameter group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. Choose Create parameter group.
4. In the Create parameter group pane, do the following:

a. For Parameter group family, choose sqlserver-se-13.0.


b. For Group name, enter an identifier for the parameter group, such as msdtc-sqlserver-
se-13.
c. For Description, enter in-doubt xact resolution.
5. Choose Create.

CLI

The following example creates a parameter group for SQL Server Standard Edition 2016.

To create the parameter group

• Use one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds create-db-parameter-group \


--db-parameter-group-name msdtc-sqlserver-se-13 \
--db-parameter-group-family "sqlserver-se-13.0" \
--description "in-doubt xact resolution"

For Windows:

aws rds create-db-parameter-group ^


--db-parameter-group-name msdtc-sqlserver-se-13 ^
--db-parameter-group-family "sqlserver-se-13.0" ^
--description "in-doubt xact resolution"

Modifying the parameter for MSDTC


Modify the in-doubt xact resolution parameter in the parameter group that corresponds to the
SQL Server edition and version of your DB instance.

For MSDTC, set the in-doubt xact resolution parameter to one of the following:

• 1 – Presume commit. Any MSDTC in-doubt transactions are presumed to have committed.


• 2 – Presume abort. Any MSDTC in-doubt transactions are presumed to have stopped.

For more information, see in-doubt xact resolution server configuration option in the Microsoft
documentation.

Console
The following example modifies the parameter group that you created for SQL Server Standard Edition
2016.

To modify the parameter group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Parameter groups.
3. Choose the parameter group, such as msdtc-sqlserver-se-13.
4. Under Parameters, filter the parameter list for xact.
5. Choose in-doubt xact resolution.
6. Choose Edit parameters.
7. Enter 1 or 2.
8. Choose Save changes.

CLI
The following example modifies the parameter group that you created for SQL Server Standard Edition
2016.

To modify the parameter group

• Use one of the following commands.

Example
For Linux, macOS, or Unix:

aws rds modify-db-parameter-group \


--db-parameter-group-name msdtc-sqlserver-se-13 \
--parameters "ParameterName='in-doubt xact
resolution',ParameterValue=1,ApplyMethod=immediate"

For Windows:

aws rds modify-db-parameter-group ^


--db-parameter-group-name msdtc-sqlserver-se-13 ^
--parameters "ParameterName='in-doubt xact
resolution',ParameterValue=1,ApplyMethod=immediate"

Associating the option group and parameter group with the DB instance
You can use the AWS Management Console or the AWS CLI to associate the MSDTC option group and
parameter group with the DB instance.

Console
You can associate the MSDTC option group and parameter group with a new or existing DB instance.


• For a new DB instance, associate them when you launch the instance. For more information, see
Creating an Amazon RDS DB instance (p. 300).
• For an existing DB instance, associate them by modifying the instance. For more information, see
Modifying an Amazon RDS DB instance (p. 401).
Note
If you use an existing domain-joined DB instance, it must already have an Active Directory
domain and AWS Identity and Access Management (IAM) role associated with it. If you create
a new domain-joined instance, specify an existing Active Directory domain and IAM role.
For more information, see Working with AWS Managed Active Directory with RDS for SQL
Server (p. 1401).

CLI

You can associate the MSDTC option group and parameter group with a new or existing DB instance.
Note
If you use an existing domain-joined DB instance, it must already have an Active Directory
domain and IAM role associated with it. If you create a new domain-joined instance, specify an
existing Active Directory domain and IAM role. For more information, see Working with AWS
Managed Active Directory with RDS for SQL Server (p. 1401).

To create a DB instance with the MSDTC option group and parameter group

• Specify the same DB engine type and major version as you used when creating the option group.

Example

For Linux, macOS, or Unix:

aws rds create-db-instance \


--db-instance-identifier mydbinstance \
--db-instance-class db.m5.2xlarge \
--engine sqlserver-se \
--engine-version 13.00.5426.0.v1 \
--allocated-storage 100 \
--manage-master-user-password \
--master-username admin \
--storage-type gp2 \
--license-model li \
--domain-iam-role-name my-directory-iam-role \
--domain my-domain-id \
--option-group-name msdtc-se-2016 \
--db-parameter-group-name msdtc-sqlserver-se-13

For Windows:

aws rds create-db-instance ^


--db-instance-identifier mydbinstance ^
--db-instance-class db.m5.2xlarge ^
--engine sqlserver-se ^
--engine-version 13.00.5426.0.v1 ^
--allocated-storage 100 ^
--manage-master-user-password ^
--master-username admin ^
--storage-type gp2 ^
--license-model li ^
--domain-iam-role-name my-directory-iam-role ^
--domain my-domain-id ^
--option-group-name msdtc-se-2016 ^


--db-parameter-group-name msdtc-sqlserver-se-13

To modify a DB instance and associate the MSDTC option group and parameter group

• Use one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds modify-db-instance \


--db-instance-identifier mydbinstance \
--option-group-name msdtc-se-2016 \
--db-parameter-group-name msdtc-sqlserver-se-13 \
--apply-immediately

For Windows:

aws rds modify-db-instance ^


--db-instance-identifier mydbinstance ^
--option-group-name msdtc-se-2016 ^
--db-parameter-group-name msdtc-sqlserver-se-13 ^
--apply-immediately

Using distributed transactions


In Amazon RDS for SQL Server, you run distributed transactions in the same way as distributed
transactions running on-premises:

• Using .NET Framework System.Transactions promotable transactions, which optimizes distributed transactions by deferring their creation until they're needed.

In this case, promotion is automatic and doesn't require you to make any intervention. If there's only
one resource manager within the transaction, no promotion is performed. For more information about
implicit transaction scopes, see Implementing an implicit transaction using transaction scope in the
Microsoft documentation.

Promotable transactions are supported with these .NET implementations:


• Starting with ADO.NET 2.0, System.Data.SqlClient supports promotable transactions with SQL
Server. For more information, see System.Transactions integration with SQL Server in the Microsoft
documentation.
• ODP.NET supports System.Transactions. A local transaction is created for the first connection opened in the TransactionScope scope to Oracle Database 11g release 1 (version 11.1) and later. When a second connection is opened, this transaction is automatically promoted to a distributed transaction. For more information about distributed transaction support in ODP.NET, see Microsoft Distributed Transaction Coordinator integration in the Oracle documentation.
• Using the BEGIN DISTRIBUTED TRANSACTION statement. For more information, see BEGIN
DISTRIBUTED TRANSACTION (Transact-SQL) in the Microsoft documentation.
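
The following Transact-SQL fragment is a minimal sketch of the BEGIN DISTRIBUTED TRANSACTION approach. The linked server RemoteServer, the database RemoteDB, and the Orders tables are hypothetical names used only for illustration.

BEGIN DISTRIBUTED TRANSACTION;
    -- Change a row on the local RDS for SQL Server DB instance
    UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderID = 1;
    -- Change a row on a remote host through a linked server; MSDTC coordinates the commit
    UPDATE [RemoteServer].[RemoteDB].dbo.Orders SET Status = 'Shipped' WHERE OrderID = 1;
COMMIT TRANSACTION;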

Using XA transactions
Starting from RDS for SQL Server 2017 version 14.00.3223.3, you can control distributed transactions
using JDBC. When you set the Enable XA option setting to true in the MSDTC option, RDS


automatically enables JDBC transactions and grants the SqlJDBCXAUser role to the guest user. This
allows executing distributed transactions through JDBC. For more information, including a code example,
see Understanding XA transactions in the Microsoft documentation.

Using transaction tracing


RDS supports controlling MSDTC transaction traces and downloading them from the RDS DB instance
for troubleshooting. You can control transaction tracing sessions by running the following RDS stored
procedure.

exec msdb.dbo.rds_msdtc_transaction_tracing 'trace_action',


[@traceall='0|1'],
[@traceaborted='0|1'],
[@tracelong='0|1'];

The following parameter is required:

• trace_action – The tracing action. It can be START, STOP, or STATUS.

The following parameters are optional:

• @traceall – Set to 1 to trace all distributed transactions. The default is 0.


• @traceaborted – Set to 1 to trace canceled distributed transactions. The default is 0.
• @tracelong – Set to 1 to trace long-running distributed transactions. The default is 0.

Example of START tracing action

To start a new transaction tracing session, run the following example statement.

exec msdb.dbo.rds_msdtc_transaction_tracing 'START',


@traceall='0',
@traceaborted='1',
@tracelong='1';

Note
Only one transaction tracing session can be active at one time. If a new tracing session START
command is issued while a tracing session is active, an error is returned and the active tracing
session remains unchanged.

Example of STOP tracing action

To stop a transaction tracing session, run the following statement.

exec msdb.dbo.rds_msdtc_transaction_tracing 'STOP'

This statement stops the active transaction tracing session and saves the transaction trace data into the
log directory on the RDS DB instance. The first row of the output contains the overall result, and the
following lines indicate details of the operation.

The following is an example of a successful tracing session stop.

OK: Trace session has been successfully stopped.


Setting log file to: D:\rdsdbdata\MSDTC\Trace\dtctrace.log
Examining D:\rdsdbdata\MSDTC\Trace\msdtctr.mof for message formats, 8 found.


Searching for TMF files on path: (null)


Logfile D:\rdsdbdata\MSDTC\Trace\dtctrace.log:
OS version 10.0.14393 (Currently running on 6.2.9200)
Start Time <timestamp>
End Time <timestamp>
Timezone is @tzres.dll,-932 (Bias is 0mins)
BufferSize 16384 B
Maximum File Size 10 MB
Buffers Written Not set (Logger may not have been stopped).
Logger Mode Settings (11000002) ( circular paged
ProcessorCount 1
Processing completed Buffers: 1, Events: 3, EventsLost: 0 :: Format Errors: 0, Unknowns:
3
Event traces dumped to d:\rdsdbdata\Log\msdtc_<timestamp>.log

You can use the detailed information to query the name of the generated log file. For more information
about downloading log files from the RDS DB instance, see Monitoring Amazon RDS log files (p. 895).

The trace session logs remain on the instance for 35 days. Any older trace session logs are automatically
deleted.

Example of STATUS tracing action

To trace the status of a transaction tracing session, run the following statement.

exec msdb.dbo.rds_msdtc_transaction_tracing 'STATUS'

This statement outputs the following as separate rows of the result set.

OK
SessionStatus: <Started|Stopped>
TraceAll: <True|False>
TraceAborted: <True|False>
TraceLongLived: <True|False>

The first line indicates the overall result of the operation: OK or ERROR with details, if applicable. The
subsequent lines indicate details about the tracing session status:

• SessionStatus can be one of the following:


• Started if a tracing session is running.
• Stopped if no tracing session is running.
• The tracing session flags can be True or False depending on how they were set in the START
command.

Modifying the MSDTC option


After you enable the MSDTC option, you can modify its settings. For information about how to modify
option settings, see Modifying an option setting (p. 340).
Note
Some changes to MSDTC option settings require the MSDTC service to be restarted. This
requirement can affect running distributed transactions.

Disabling MSDTC
To disable MSDTC, remove the MSDTC option from its option group.


Console

To remove the MSDTC option from its option group

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Option groups.
3. Choose the option group with the MSDTC option (msdtc-se-2016 in the previous examples).
4. Choose Delete option.
5. Under Deletion options, choose MSDTC for Options to delete.
6. Under Apply immediately, choose Yes to delete the option immediately, or No to delete it at the
next maintenance window.
7. Choose Delete.

CLI

To remove the MSDTC option from its option group

• Use one of the following commands.

Example

For Linux, macOS, or Unix:

aws rds remove-option-from-option-group \


--option-group-name msdtc-se-2016 \
--options MSDTC \
--apply-immediately

For Windows:

aws rds remove-option-from-option-group ^


--option-group-name msdtc-se-2016 ^
--options MSDTC ^
--apply-immediately

Troubleshooting MSDTC for RDS for SQL Server


In some cases, you might have trouble establishing a connection between MSDTC running on a client
computer and the MSDTC service running on an RDS for SQL Server DB instance. If so, make sure of the
following:

• The inbound rules for the security group associated with the DB instance are configured correctly. For
more information, see Can't connect to Amazon RDS DB instance (p. 2727).
• Your client computer is configured correctly.
• The MSDTC firewall rules on your client computer are enabled.

To configure the client computer

1. Open Component Services.

Or, in Server Manager, choose Tools, and then choose Component Services.


2. Expand Component Services, expand Computers, expand My Computer, and then expand
Distributed Transaction Coordinator.
3. Open the context (right-click) menu for Local DTC and choose Properties.
4. Choose the Security tab.
5. Choose all of the following:

• Network DTC Access


• Allow Inbound
• Allow Outbound
6. Make sure that the correct authentication mode is chosen:

• Mutual Authentication Required – The client machine is joined to the same domain as other
nodes participating in the distributed transaction, or there is a trust relationship configured between
domains.
• No Authentication Required – All other cases.
7. Choose OK to save your changes.
8. If prompted to restart the service, choose Yes.

To enable MSDTC firewall rules

1. Open Windows Firewall, then choose Advanced settings.

Or, in Server Manager, choose Tools, and then choose Windows Firewall with Advanced Security.
Note
Depending on your operating system, Windows Firewall might be called Windows Defender
Firewall.
2. Choose Inbound Rules in the left pane.
3. Enable the following firewall rules, if they are not already enabled:

• Distributed Transaction Coordinator (RPC)


• Distributed Transaction Coordinator (RPC)-EPMAP
• Distributed Transaction Coordinator (TCP-In)
4. Close Windows Firewall.


Common DBA tasks for Microsoft SQL Server


This section describes the Amazon RDS-specific implementations of some common DBA tasks for DB
instances that are running the Microsoft SQL Server database engine. In order to deliver a managed
service experience, Amazon RDS does not provide shell access to DB instances, and it restricts access to
certain system procedures and tables that require advanced privileges.
Note
When working with a SQL Server DB instance, you can run scripts to modify a newly created
database, but you cannot modify the [model] database, the database used as the model for new
databases.

Topics
• Accessing the tempdb database on Microsoft SQL Server DB instances on Amazon RDS (p. 1603)
• Analyzing your database workload on an Amazon RDS for SQL Server DB instance with Database
Engine Tuning Advisor (p. 1605)
• Collations and character sets for Microsoft SQL Server (p. 1607)
• Creating a database user (p. 1611)
• Determining a recovery model for your Microsoft SQL Server database (p. 1611)
• Determining the last failover time (p. 1612)
• Disabling fast inserts during bulk loading (p. 1612)
• Dropping a Microsoft SQL Server database (p. 1613)
• Renaming a Microsoft SQL Server database in a Multi-AZ deployment (p. 1613)
• Resetting the db_owner role password (p. 1613)
• Restoring license-terminated DB instances (p. 1614)
• Transitioning a Microsoft SQL Server database from OFFLINE to ONLINE (p. 1614)
• Using change data capture (p. 1614)
• Using SQL Server Agent (p. 1617)
• Working with Microsoft SQL Server logs (p. 1619)
• Working with trace and dump files (p. 1620)


Accessing the tempdb database on Microsoft SQL Server DB instances on Amazon RDS
You can access the tempdb database on your Microsoft SQL Server DB instances on Amazon RDS. You
can run code on tempdb by using Transact-SQL through Microsoft SQL Server Management Studio
(SSMS), or any other standard SQL client application. For more information about connecting to your DB
instance, see Connecting to a DB instance running the Microsoft SQL Server database engine (p. 1380).

The master user for your DB instance is granted CONTROL access to tempdb so that this user can modify
the tempdb database options. The master user isn't the database owner of the tempdb database. If
necessary, the master user can grant CONTROL access to other users so that they can also modify the
tempdb database options.
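
For example, the following minimal sketch grants CONTROL on tempdb to another principal. The user name appdba is hypothetical; the grantee must already exist as a database user in tempdb.

USE [tempdb];
GO
GRANT CONTROL TO [appdba];
GO
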
Note
You can't run Database Console Commands (DBCC) on the tempdb database.

Modifying tempdb database options


You can modify the database options on the tempdb database on your Amazon RDS DB instances.
For more information about which options can be modified, see tempdb database in the Microsoft
documentation.

Database options such as the maximum file size options are persistent after you restart your DB instance.
You can modify the database options to optimize performance when importing data, and to prevent
running out of storage.

Optimizing performance when importing data


To optimize performance when importing large amounts of data into your DB instance, set the SIZE and
FILEGROWTH properties of the tempdb database to large numbers. For more information about how to
optimize tempdb, see Optimizing tempdb performance in the Microsoft documentation.

The following example demonstrates setting the size to 100 GB and file growth to 10 percent.

alter database [tempdb] modify file (NAME = N'templog', SIZE=100GB, FILEGROWTH = 10%)

Preventing storage problems


To prevent the tempdb database from using all available disk space, set the MAXSIZE property. The
following example demonstrates setting the property to 2048 MB.

alter database [tempdb] modify file (NAME = N'templog', MAXSIZE = 2048MB)

Shrinking the tempdb database


There are two ways to shrink the tempdb database on your Amazon RDS DB instance. You can use the
rds_shrink_tempdbfile procedure, or you can set the SIZE property.

Using the rds_shrink_tempdbfile procedure


You can use the Amazon RDS procedure msdb.dbo.rds_shrink_tempdbfile to shrink the tempdb
database. You can only call rds_shrink_tempdbfile if you have CONTROL access to tempdb. When
you call rds_shrink_tempdbfile, there is no downtime for your DB instance.


The rds_shrink_tempdbfile procedure has the following parameters.

• @temp_filename (SYSNAME, required, no default) – The logical name of the file to shrink.
• @target_size (int, optional, default null) – The new size for the file, in megabytes.

The following example gets the names of the files for the tempdb database.

use tempdb;
GO

select name, * from sys.sysfiles;


GO

The following example shrinks a tempdb database file named test_file, and requests a new size of 10
megabytes:

exec msdb.dbo.rds_shrink_tempdbfile @temp_filename = N'test_file', @target_size = 10;

Setting the SIZE property


You can also shrink the tempdb database by setting the SIZE property and then restarting your DB
instance. For more information about restarting your DB instance, see Rebooting a DB instance (p. 436).

The following example demonstrates setting the SIZE property to 1024 MB.

alter database [tempdb] modify file (NAME = N'templog', SIZE = 1024MB)

Considerations for Multi-AZ deployments


If your Amazon RDS DB instance is in a Multi-AZ Deployment for Microsoft SQL Server with Database
Mirroring (DBM) or Always On Availability Groups (AGs), there are some things to consider.

The tempdb database can't be replicated. No data that you store on your primary instance is replicated
to your secondary instance.

If you modify any database options on the tempdb database, you can capture those changes on the
secondary by using one of the following methods:

• First modify your DB instance and turn Multi-AZ off, then modify tempdb, and finally turn Multi-AZ
back on. This method doesn't involve any downtime.

For more information, see Modifying an Amazon RDS DB instance (p. 401).
• First modify tempdb in the original primary instance, then fail over manually, and finally modify
tempdb in the new primary instance. This method involves downtime.

For more information, see Rebooting a DB instance (p. 436).


Analyzing your database workload on an Amazon RDS for SQL Server DB instance with Database Engine Tuning Advisor
Database Engine Tuning Advisor is a client application provided by Microsoft that analyzes database
workload and recommends an optimal set of indexes for your Microsoft SQL Server databases based
on the kinds of queries you run. Like SQL Server Management Studio, you run Tuning Advisor from a
client computer that connects to your Amazon RDS DB instance that is running SQL Server. The client
computer can be a local computer that you run on premises within your own network or it can be an
Amazon EC2 Windows instance that is running in the same region as your Amazon RDS DB instance.

This section shows how to capture a workload for Tuning Advisor to analyze. This is the preferred process
for capturing a workload because Amazon RDS restricts host access to the SQL Server instance. For more
information, see Database Engine Tuning Advisor in the Microsoft documentation.

To use Tuning Advisor, you must provide what is called a workload to the advisor. A workload is a set
of Transact-SQL statements that run against a database or databases that you want to tune. Database
Engine Tuning Advisor uses trace files, trace tables, Transact-SQL scripts, or XML files as workload input
when tuning databases. When working with Amazon RDS, a workload can be a file on a client computer
or a database table on an Amazon RDS for SQL Server DB accessible to your client computer. The file or
the table must contain queries against the databases you want to tune in a format suitable for replay.

For Tuning Advisor to be most effective, a workload should be as realistic as possible. You can generate
a workload file or table by performing a trace against your DB instance. While a trace is running, you can
either simulate a load on your DB instance or run your applications with a normal load.

There are two types of traces: client-side and server-side. A client-side trace is easier to set up and you
can watch trace events being captured in real-time in SQL Server Profiler. A server-side trace is more
complex to set up and requires some Transact-SQL scripting. In addition, because the trace is written to
a file on the Amazon RDS DB instance, storage space is consumed by the trace. It is important to keep track of
how much storage space a running server-side trace uses because the DB instance could enter a storage-
full state and would no longer be available if it runs out of storage space.

For a client-side trace, when a sufficient amount of trace data has been captured in the SQL Server
Profiler, you can then generate the workload file by saving the trace to either a file on your local
computer or in a database table on a DB instance that is available to your client computer. The main
disadvantage of using a client-side trace is that the trace may not capture all queries when under heavy
loads. This could weaken the effectiveness of the analysis performed by the Database Engine Tuning
Advisor. If you need to run a trace under heavy loads and you want to ensure that it captures every query
during a trace session, you should use a server-side trace.

For a server-side trace, you must get the trace files on the DB instance into a suitable workload file or
you can save the trace to a table on the DB instance after the trace completes. You can use the SQL
Server Profiler to save the trace to a file on your local computer or have the Tuning Advisor read from the
trace table on the DB instance.

Running a client-side trace on a SQL Server DB instance


To run a client-side trace on a SQL Server DB instance

1. Start SQL Server Profiler. It is installed in the Performance Tools folder of your SQL Server instance
folder. You must load or define a trace definition template to start a client-side trace.
2. In the SQL Server Profiler File menu, choose New Trace. In the Connect to Server dialog box, enter
the DB instance endpoint, port, master user name, and password of the database you would like to
run a trace on.


3. In the Trace Properties dialog box, enter a trace name and choose a trace definition template. A
default template, TSQL_Replay, ships with the application. You can edit this template to define your
trace. Edit events and event information under the Events Selection tab of the Trace Properties
dialog box.

For more information about trace definition templates and using the SQL Server Profiler to specify a
client-side trace, see Database Engine Tuning Advisor in the Microsoft documentation.
4. Start the client-side trace and watch SQL queries in real-time as they run against your DB instance.
5. Select Stop Trace from the File menu when you have completed the trace. Save the results as a file
or as a trace table on your DB instance.

Running a server-side trace on a SQL Server DB instance


Writing scripts to create a server-side trace can be complex and is beyond the scope of this document.
This section contains sample scripts that you can use as examples. As with a client-side trace, the goal is
to create a workload file or trace table that you can open using the Database Engine Tuning Advisor.

The following is an abridged example script that starts a server-side trace and captures details to a
workload file. The trace initially saves to the file RDSTrace.trc in the D:\RDSDBDATA\Log directory and
rolls over every 100 MB, so subsequent trace files are named RDSTrace_1.trc, RDSTrace_2.trc, and so on.

DECLARE @file_name NVARCHAR(245) = 'D:\RDSDBDATA\Log\RDSTrace';


DECLARE @max_file_size BIGINT = 100;
DECLARE @on BIT = 1
DECLARE @rc INT
DECLARE @traceid INT

EXEC @rc = sp_trace_create @traceid OUTPUT, 2, @file_name, @max_file_size


IF (@rc = 0) BEGIN
EXEC sp_trace_setevent @traceid, 10, 1, @on
EXEC sp_trace_setevent @traceid, 10, 2, @on
EXEC sp_trace_setevent @traceid, 10, 3, @on
. . .
EXEC sp_trace_setfilter @traceid, 10, 0, 7, N'SQL Profiler'
EXEC sp_trace_setstatus @traceid, 1
END

The following example is a script that stops a trace. Note that a trace created by the previous script
continues to run until you explicitly stop the trace or the process runs out of disk space.

DECLARE @traceid INT


SELECT @traceid = traceid FROM ::fn_trace_getinfo(default)
WHERE property = 5 AND value = 1 AND traceid <> 1

IF @traceid IS NOT NULL BEGIN


EXEC sp_trace_setstatus @traceid, 0
EXEC sp_trace_setstatus @traceid, 2
END

You can save server-side trace results to a database table and use the database table as the workload
for the Tuning Advisor by using the fn_trace_gettable function. The following commands load the
results of all files named RDSTrace.trc in the D:\rdsdbdata\Log directory, including all rollover files like
RDSTrace_1.trc, into a table named RDSTrace in the current database.

SELECT * INTO RDSTrace


FROM fn_trace_gettable('D:\rdsdbdata\Log\RDSTrace.trc', default);


To save a specific rollover file to a table, for example the RDSTrace_1.trc file, specify the name of the
rollover file and substitute 1 instead of default as the last parameter to fn_trace_gettable.

SELECT * INTO RDSTrace_1


FROM fn_trace_gettable('D:\rdsdbdata\Log\RDSTrace_1.trc', 1);

Running Tuning Advisor with a trace


Once you create a trace, either as a local file or as a database table, you can then run Tuning Advisor
against your DB instance. Using Tuning Advisor with Amazon RDS is the same process as when working
with a standalone, remote SQL Server instance. You can either use the Tuning Advisor UI on your client
machine or use the dta.exe utility from the command line. In both cases, you must connect to the
Amazon RDS DB instance using the endpoint for the DB instance and provide your master user name and
master user password when using Tuning Advisor.

The following code example demonstrates using the dta.exe command line utility against an Amazon
RDS DB instance with an endpoint of dta.cnazcmklsdei.us-east-1.rds.amazonaws.com. The
example includes the master user name admin and the master user password test. The example database to tune is named RDSDTA, and the input workload is a trace file on the local machine named C:\RDSTrace.trc. The example command line code
also specifies a trace session named RDSTrace1 and specifies output files to the local machine named
RDSTrace.sql for the SQL output script, RDSTrace.txt for a result file, and RDSTrace.xml for
an XML file of the analysis. There is also an error table specified on the RDSDTA database named
RDSTraceErrors.

dta -S dta.cnazcmklsdei.us-east-1.rds.amazonaws.com -U admin -P test -D RDSDTA -if C:\RDSTrace.trc -s RDSTrace1 -of C:\RDSTrace.sql -or C:\RDSTrace.txt -ox C:\RDSTrace.xml -e RDSDTA.dbo.RDSTraceErrors

Here is the same example command line code except the input workload is a table on the remote
Amazon RDS instance named RDSTrace which is on the RDSDTA database.

dta -S dta.cnazcmklsdei.us-east-1.rds.amazonaws.com -U admin -P test -D RDSDTA -it RDSDTA.dbo.RDSTrace -s RDSTrace1 -of C:\RDSTrace.sql -or C:\RDSTrace.txt -ox C:\RDSTrace.xml -e RDSDTA.dbo.RDSTraceErrors

For a full list of dta utility command-line parameters, see dta Utility in the Microsoft documentation.

Collations and character sets for Microsoft SQL Server
SQL Server supports collations at multiple levels. You set the default server collation when you create
the DB instance. You can override the collation at the database, table, or column level.

Topics
• Server-level collation for Microsoft SQL Server (p. 1607)
• Database-level collation for Microsoft SQL Server (p. 1610)

Server-level collation for Microsoft SQL Server


When you create a Microsoft SQL Server DB instance, you can set the server collation that
you want to use. If you don't choose a different collation, the server-level collation defaults to


SQL_Latin1_General_CP1_CI_AS. The server collation is applied by default to all databases and database
objects.
Note
You can't change the collation when you restore from a DB snapshot.

Currently, Amazon RDS supports the following server collations:

• Chinese_PRC_BIN2 – Chinese-PRC, binary code point sort order
• Chinese_PRC_CI_AS – Chinese-PRC, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive
• Chinese_Taiwan_Stroke_CI_AS – Chinese-Taiwan-Stroke, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive
• Danish_Norwegian_CI_AS – Danish-Norwegian, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive
• Finnish_Swedish_CI_AS – Finnish, Swedish, and Swedish (Finland), case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive
• French_CI_AS – French, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive
• Hebrew_BIN – Hebrew, binary sort
• Hebrew_CI_AS – Hebrew, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive
• Japanese_BIN – Japanese, binary sort
• Japanese_CI_AS – Japanese, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive
• Japanese_CS_AS – Japanese, case-sensitive, accent-sensitive, kanatype-insensitive, width-insensitive
• Japanese_XJIS_140_CI_AS – Japanese-XJIS-140, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive, supplementary characters, variation selector insensitive
• Japanese_XJIS_140_CI_AS_KS_VSS – Japanese-XJIS-140, case-insensitive, accent-sensitive, kanatype-sensitive, width-insensitive, supplementary characters, variation selector sensitive
• Japanese_XJIS_140_CI_AS_VSS – Japanese-XJIS-140, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive, supplementary characters, variation selector sensitive
• Korean_Wansung_CI_AS – Korean-Wansung, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive
• Latin1_General_100_BIN – Latin1-General-100, binary sort
• Latin1_General_100_BIN2 – Latin1-General-100, binary code point sort order
• Latin1_General_100_BIN2_UTF8 – Latin1-General-100, binary code point sort order, UTF-8 encoded
• Latin1_General_100_CI_AS – Latin1-General-100, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive
• Latin1_General_100_CI_AS_SC_UTF8 – Latin1-General-100, case-insensitive, accent-sensitive, supplementary characters, UTF-8 encoded
• Latin1_General_BIN – Latin1-General, binary sort
• Latin1_General_BIN2 – Latin1-General, binary code point sort order
• Latin1_General_CI_AI – Latin1-General, case-insensitive, accent-insensitive, kanatype-insensitive, width-insensitive
• Latin1_General_CI_AS – Latin1-General, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive
• Latin1_General_CI_AS_KS – Latin1-General, case-insensitive, accent-sensitive, kanatype-sensitive, width-insensitive
• Latin1_General_CS_AS – Latin1-General, case-sensitive, accent-sensitive, kanatype-insensitive, width-insensitive
• Modern_Spanish_CI_AS – Modern-Spanish, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive
• SQL_1xCompat_CP850_CI_AS – Latin1-General, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive for Unicode Data, SQL Server Sort Order 49 on Code Page 850 for non-Unicode Data
• SQL_Latin1_General_CP1_CI_AI – Latin1-General, case-insensitive, accent-insensitive, kanatype-insensitive, width-insensitive for Unicode Data, SQL Server Sort Order 54 on Code Page 1252 for non-Unicode Data
• SQL_Latin1_General_CP1_CI_AS (default) – Latin1-General, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive for Unicode Data, SQL Server Sort Order 52 on Code Page 1252 for non-Unicode Data
• SQL_Latin1_General_CP1_CS_AS – Latin1-General, case-sensitive, accent-sensitive, kanatype-insensitive, width-insensitive for Unicode Data, SQL Server Sort Order 51 on Code Page 1252 for non-Unicode Data
• SQL_Latin1_General_CP437_CI_AI – Latin1-General, case-insensitive, accent-insensitive, kanatype-insensitive, width-insensitive for Unicode Data, SQL Server Sort Order 34 on Code Page 437 for non-Unicode Data
• SQL_Latin1_General_CP850_BIN2 – Latin1-General, binary code point sort order for Unicode Data, SQL Server Sort Order 40 on Code Page 850 for non-Unicode Data
• SQL_Latin1_General_CP850_CI_AS – Latin1-General, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive for Unicode Data, SQL Server Sort Order 42 on Code Page 850 for non-Unicode Data
• SQL_Latin1_General_CP1256_CI_AS – Latin1-General, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive for Unicode Data, SQL Server Sort Order 146 on Code Page 1256 for non-Unicode Data
• Thai_CI_AS – Thai, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive

To choose the collation:

• If you're using the Amazon RDS console, when creating a new DB instance choose Additional
configuration, then enter the collation in the Collation field. For more information, see Creating an
Amazon RDS DB instance (p. 300).
• If you're using the AWS CLI, use the --character-set-name option with the create-db-instance
command. For more information, see create-db-instance.
• If you're using the Amazon RDS API, use the CharacterSetName parameter with the
CreateDBInstance operation. For more information, see CreateDBInstance.
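
To confirm the server-level collation that your DB instance is using, you can query the SERVERPROPERTY function, for example:

SELECT SERVERPROPERTY('Collation');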

Database-level collation for Microsoft SQL Server


You can change the default collation at the database, table, or column level by overriding the collation
when creating a new database or database object. For example, if your default server collation is
SQL_Latin1_General_CP1_CI_AS, you can change it to Mohawk_100_CI_AS for Mohawk collation
support. Even arguments in a query can be type-cast to use a different collation if necessary.

For example, the following statement sets the collation of the AccountName column to Mohawk_100_CI_AS:

CREATE TABLE [dbo].[Account]


(
[AccountID] [nvarchar](10) NOT NULL,
[AccountName] [nvarchar](100) COLLATE Mohawk_100_CI_AS NOT NULL
) ON [PRIMARY];
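
Similarly, the following query shows an argument being cast to a different collation at query time. The literal value is only an illustration.

SELECT [AccountID], [AccountName]
FROM [dbo].[Account]
WHERE [AccountName] = N'ExampleName' COLLATE Mohawk_100_CI_AS;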

The Microsoft SQL Server DB engine supports Unicode by the built-in NCHAR, NVARCHAR, and NTEXT
data types. For example, if you need CJK support, use these Unicode data types for character storage and
override the default server collation when creating your databases and tables. Here are several links from
Microsoft covering collation and Unicode support for SQL Server:

• Working with collations


• Collation and international terminology
• Using SQL Server collations
• International considerations for databases and database engine applications


Creating a database user


You can create a database user for your Amazon RDS for Microsoft SQL Server DB instance by running
a T-SQL script like the following example. Use an application such as SQL Server Management Studio (SSMS). You log in to the DB instance as the master user that was created when you created the DB
instance.

--Initially set context to master database


USE [master];
GO
--Create a server-level login named theirname with password theirpassword
CREATE LOGIN [theirname] WITH PASSWORD = 'theirpassword';
GO
--Set context to msdb database
USE [msdb];
GO
--Create a database user named theirname and link it to server-level login theirname
CREATE USER [theirname] FOR LOGIN [theirname];
GO

For an example of adding a database user to a role, see Adding a user to the SQLAgentUser
role (p. 1618).
Note
If you get permission errors when adding a user, you can restore privileges by modifying the
DB instance master user password. For more information, see Resetting the db_owner role
password (p. 1613).

Determining a recovery model for your Microsoft SQL Server database
In Amazon RDS, the recovery model, retention period, and database status are linked.

It's important to understand the consequences before making a change to one of these settings. Each
setting can affect the others. For example:

• If you change a database's recovery model to SIMPLE or BULK_LOGGED while backup retention is
enabled, Amazon RDS resets the recovery model to FULL within five minutes. This also results in RDS
taking a snapshot of the DB instance.
• If you set backup retention to 0 days, RDS sets the recovery mode to SIMPLE.
• If you change a database's recovery model from SIMPLE to any other option while backup retention is
set to 0 days, RDS resets the recovery model to SIMPLE.

Important
Never change the recovery model on Multi-AZ instances, even if it seems you can do so—for
example, by using ALTER DATABASE. Backup retention, and therefore FULL recovery mode, is
required for Multi-AZ. If you alter the recovery model, RDS immediately changes it back to FULL.
This automatic reset forces RDS to completely rebuild the mirror. During this rebuild, the
availability of the database is degraded for about 30-90 minutes until the mirror is ready for
failover. The DB instance also experiences performance degradation in the same way it does
during a conversion from Single-AZ to Multi-AZ. How long performance is degraded depends on
the database storage size—the bigger the stored database, the longer the degradation.

For more information on SQL Server recovery models, see Recovery models (SQL Server) in the Microsoft
documentation.
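
To see which recovery model each database on your DB instance currently uses, you can query sys.databases, for example:

SELECT name, recovery_model_desc
FROM sys.databases;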


Determining the last failover time


To determine the last failover time, use the following stored procedure:

execute msdb.dbo.rds_failover_time;

This procedure returns the following information.

• errorlog_available_from – Shows the time from when error logs are available in the log directory.
• recent_failover_time – Shows the last failover time if it's available from the error logs. Otherwise it shows null.

Note
The stored procedure searches all of the available SQL Server error logs in the log directory to
retrieve the most recent failover time. If the failover messages have been overwritten by SQL
Server, then the procedure doesn't retrieve the failover time.

Example of no recent failover

This example shows the output when there is no recent failover in the error logs. No failover has
happened since 2020-04-29 23:59:00.01.

errorlog_available_from recent_failover_time

2020-04-29 23:59:00.0100000 null

Example of recent failover

This example shows the output when there is a failover in the error logs. The most recent failover was at
2020-05-05 18:57:51.89.

errorlog_available_from recent_failover_time

2020-04-29 23:59:00.0100000 2020-05-05 18:57:51.8900000

Disabling fast inserts during bulk loading


Starting with SQL Server 2016, fast inserts are enabled by default. Fast inserts leverage the minimal
logging that occurs while the database is in the simple or bulk logged recovery model to optimize insert
performance. With fast inserts, each bulk load batch acquires new extents, bypassing the allocation
lookup for existing extents with available free space.

However, with fast inserts bulk loads with small batch sizes can lead to increased unused space
consumed by objects. If increasing batch size isn't feasible, enabling trace flag 692 can help reduce
unused reserved space, but at the expense of performance. Enabling this trace flag disables fast inserts
while bulk loading data into heap or clustered indexes.


You enable trace flag 692 as a startup parameter using DB parameter groups. For more information, see
Working with parameter groups (p. 347).

Trace flag 692 is supported for Amazon RDS on SQL Server 2016 and later. For more information on
trace flags, see DBCC TRACEON - trace flags in the Microsoft documentation.

Dropping a Microsoft SQL Server database


You can drop a database on an Amazon RDS DB instance running Microsoft SQL Server in a Single-AZ or
Multi-AZ deployment. To drop the database, use the following command:

--replace your-database-name with the name of the database you want to drop
EXECUTE msdb.dbo.rds_drop_database N'your-database-name'

Note
Use straight single quotes in the command. Smart quotes will cause an error.

After you use this procedure to drop the database, Amazon RDS drops all existing connections to the
database and removes the database's backup history.

Renaming a Microsoft SQL Server database in a Multi-AZ deployment
To rename a Microsoft SQL Server database instance that uses Multi-AZ, use the following procedure:

1. First, turn off Multi-AZ for the DB instance.


2. Rename the database by running rdsadmin.dbo.rds_modify_db_name.
3. Then, turn on Multi-AZ Mirroring or Always On Availability Groups for the DB instance, to return it to
its original state.

For more information, see Adding Multi-AZ to a Microsoft SQL Server DB instance (p. 1451).
Note
If your instance doesn't use Multi-AZ, you don't need to change any settings before or after
running rdsadmin.dbo.rds_modify_db_name.

Example: In the following example, the rdsadmin.dbo.rds_modify_db_name stored procedure renames a
database from MOO to ZAR. This is similar to running the DDL statement ALTER DATABASE
[MOO] MODIFY NAME = [ZAR].

EXEC rdsadmin.dbo.rds_modify_db_name N'MOO', N'ZAR'


GO

Resetting the db_owner role password


If you lock yourself out of the db_owner role on your Microsoft SQL Server database, you can reset
the db_owner role password by modifying the DB instance master password. By changing the DB
instance master password, you can regain access to the DB instance, access databases using the
modified password for the db_owner, and restore privileges for the db_owner role that may have been
accidentally revoked. You can change the DB instance password by using the Amazon RDS console,
the AWS CLI command modify-db-instance, or by using the ModifyDBInstance operation. For more
information about modifying a DB instance, see Modifying an Amazon RDS DB instance (p. 401).


Restoring license-terminated DB instances


Microsoft has requested that some Amazon RDS customers who did not report their Microsoft License
Mobility information terminate their DB instance. Amazon RDS takes snapshots of these DB instances,
and you can restore from the snapshot to a new DB instance that has the License Included model.

You can restore from a snapshot of Standard Edition to either Standard Edition or Enterprise Edition.

You can restore from a snapshot of Enterprise Edition to either Standard Edition or Enterprise Edition.

To restore from a SQL Server snapshot after Amazon RDS has created a final snapshot of
your instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Snapshots.
3. Choose the snapshot of your SQL Server DB instance. Amazon RDS creates a final snapshot of your
DB instance. The name of the terminated instance snapshot is in the format instance_name-
final-snapshot. For example, if your DB instance name is mytest.cdxgahslksma.us-
east-1.rds.com, the final snapshot is called mytest-final-snapshot and is located in the
same AWS Region as the original DB instance.
4. For Actions, choose Restore Snapshot.

The Restore DB Instance window appears.


5. For License Model, choose license-included.
6. Choose the SQL Server DB engine that you want to use.
7. For DB Instance Identifier, enter the name for the restored DB instance.
8. Choose Restore DB Instance.

For more information about restoring from a snapshot, see Restoring from a DB snapshot (p. 615).

Transitioning a Microsoft SQL Server database from OFFLINE to ONLINE
You can transition your Microsoft SQL Server database on an Amazon RDS DB instance from OFFLINE to
ONLINE.

SQL Server method                      Amazon RDS method

ALTER DATABASE db_name SET ONLINE;     EXEC rdsadmin.dbo.rds_set_database_online db_name

Using change data capture


Amazon RDS supports change data capture (CDC) for your DB instances running Microsoft SQL Server.
CDC captures changes that are made to the data in your tables. It stores metadata about each change,
which you can access later. For more information about how CDC works, see Change data capture in the
Microsoft documentation.

Before you use CDC with your Amazon RDS DB instances, enable it in the database by running
msdb.dbo.rds_cdc_enable_db. You must have master user privileges to enable CDC in the Amazon
RDS DB instance. After CDC is enabled, any user who is db_owner of that database can enable or disable
CDC on tables in that database.
Important
During restores, CDC will be disabled. All of the related metadata is automatically removed from
the database. This applies to snapshot restores, point-in-time restores, and SQL Server Native
restores from S3. After performing one of these types of restores, you can re-enable CDC and
re-specify tables to track.

To enable CDC for a DB instance, run the msdb.dbo.rds_cdc_enable_db stored procedure.

exec msdb.dbo.rds_cdc_enable_db 'database_name'

To disable CDC for a DB instance, run the msdb.dbo.rds_cdc_disable_db stored procedure.

exec msdb.dbo.rds_cdc_disable_db 'database_name'
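
To confirm whether CDC is currently enabled for each database on the instance, you can query the
sys.databases catalog view. The following is a minimal sketch that uses standard SQL Server metadata
and only reads information.

--List each database and whether CDC is enabled (1) or disabled (0)
SELECT name, is_cdc_enabled
FROM sys.databases;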

Topics
• Tracking tables with change data capture (p. 1615)
• Change data capture jobs (p. 1616)
• Change data capture for Multi-AZ instances (p. 1616)

Tracking tables with change data capture


After CDC is enabled on the database, you can start tracking specific tables. You can choose the tables to
track by running sys.sp_cdc_enable_table.

--Begin tracking a table


exec sys.sp_cdc_enable_table
@source_schema = N'source_schema'
, @source_name = N'source_name'
, @role_name = N'role_name'

--The following parameters are optional:

--, @capture_instance = 'capture_instance'


--, @supports_net_changes = supports_net_changes
--, @index_name = 'index_name'
--, @captured_column_list = 'captured_column_list'
--, @filegroup_name = 'filegroup_name'
--, @allow_partition_switch = 'allow_partition_switch'
;

To view the CDC configuration for your tables, run sys.sp_cdc_help_change_data_capture.

--View CDC configuration


exec sys.sp_cdc_help_change_data_capture

--The following parameters are optional and must be used together.


-- 'schema_name', 'table_name'
;

For more information on CDC tables, functions, and stored procedures, see the following in the SQL
Server documentation:

• Change data capture stored procedures (Transact-SQL)


• Change data capture functions (Transact-SQL)


• Change data capture tables (Transact-SQL)

Change data capture jobs


When you enable CDC, SQL Server creates the CDC jobs. Database owners (db_owner) can view, create,
modify, and delete the CDC jobs. However, the RDS system account owns them. Therefore, the jobs aren't
visible from native views, procedures, or in SQL Server Management Studio.

To control behavior of CDC in a database, use native SQL Server procedures such as sp_cdc_enable_table
and sp_cdc_start_job. To change CDC job parameters, like maxtrans and maxscans, you can use
sp_cdc_change_job.
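
For example, the following sketch shows one way to change the capture job parameters and then restart
the job so that the new values take effect. It uses the standard SQL Server procedures named above; the
values shown are placeholders that you should adjust for your workload.

--Change the maxtrans and maxscans parameters of the CDC capture job
EXEC sys.sp_cdc_change_job @job_type = N'capture', @maxtrans = 1000, @maxscans = 10;

--Restart the capture job so that the new parameter values take effect
EXEC sys.sp_cdc_stop_job @job_type = N'capture';
EXEC sys.sp_cdc_start_job @job_type = N'capture';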

To get more information regarding the CDC jobs, you can query the following dynamic management
views:

• sys.dm_cdc_errors
• sys.dm_cdc_log_scan_sessions
• sysjobs
• sysjobhistory

Change data capture for Multi-AZ instances


If you use CDC on a Multi-AZ instance, make sure the mirror's CDC job configuration matches the one
on the principal. CDC jobs are mapped to the database_id. If the database IDs on the secondary
are different from the principal, then the jobs won't be associated with the correct database. To try to
prevent errors after failover, RDS drops and recreates the jobs on the new principal. The recreated jobs
use the parameters that the principal recorded before failover.

Although this process runs quickly, it's still possible that the CDC jobs might run before RDS can correct
them. Here are three ways to force parameters to be consistent between primary and secondary replicas:

• Use the same job parameters for all the databases that have CDC enabled.
• Before you change the CDC job configuration, convert the Multi-AZ instance to Single-AZ.
• Manually transfer the parameters whenever you change them on the principal.

To view and define the CDC parameters that are used to recreate the CDC jobs after a failover, use
rds_show_configuration and rds_set_configuration.

The following example returns the value set for cdc_capture_maxtrans. For any parameter that is set
to RDS_DEFAULT, RDS automatically configures the value.

-- Show configuration for each parameter on either primary and secondary replicas.
exec rdsadmin.dbo.rds_show_configuration 'cdc_capture_maxtrans';

To set the configuration on the secondary, run rdsadmin.dbo.rds_set_configuration. This
procedure sets the parameter values for all of the databases on the secondary server. These settings are
used only after a failover. The following example sets the maxtrans for all CDC capture jobs to 1000:

--To set values on secondary. These are used after failover.


exec rdsadmin.dbo.rds_set_configuration 'cdc_capture_maxtrans', 1000;


To set the CDC job parameters on the principal, use sys.sp_cdc_change_job instead.

Using SQL Server Agent


With Amazon RDS, you can use SQL Server Agent on a DB instance running Microsoft SQL Server
Enterprise Edition, Standard Edition, or Web Edition. SQL Server Agent is a Microsoft Windows service
that runs scheduled administrative tasks that are called jobs. You can use SQL Server Agent to run T-SQL
jobs to rebuild indexes, run corruption checks, and aggregate data in a SQL Server DB instance.

When you create a SQL Server DB instance, the master user is enrolled in the SQLAgentUserRole role.

SQL Server Agent can run a job on a schedule, in response to a specific event, or on demand. For more
information, see SQL Server Agent in the Microsoft documentation.
Note
Avoid scheduling jobs to run during the maintenance and backup windows for your DB instance.
The maintenance and backup processes that are launched by AWS could interrupt a job or cause
it to be canceled.
In Multi-AZ deployments, SQL Server Agent jobs are replicated from the primary host to the
secondary host when the job replication feature is turned on. For more information, see Turning
on SQL Server Agent job replication (p. 1617).
Multi-AZ deployments have a limit of 100 SQL Server Agent jobs. If you need a higher limit,
request an increase by contacting AWS Support. Open the AWS Support Center page, sign in
if necessary, and choose Create case. Choose Service limit increase. Complete and submit the
form.

To view the history of an individual SQL Server Agent job in SQL Server Management Studio (SSMS),
open Object Explorer, right-click the job, and then choose View History.
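
If you prefer T-SQL to SSMS, you can view job history with the standard msdb stored procedure
sp_help_jobhistory, as in the following sketch. The job name your-job-name is a placeholder;
permissions depend on your role membership and job ownership.

--View the execution history for a specific SQL Server Agent job
EXEC msdb.dbo.sp_help_jobhistory @job_name = N'your-job-name';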

Because SQL Server Agent is running on a managed host in a DB instance, some actions aren't supported:

• Running replication jobs and running command-line scripts by using ActiveX, Windows command shell,
or Windows PowerShell aren't supported.
• You can't manually start, stop, or restart SQL Server Agent.
• Email notifications through SQL Server Agent aren't available from a DB instance.
• SQL Server Agent alerts and operators aren't supported.
• Using SQL Server Agent to create backups isn't supported. Use Amazon RDS to back up your DB
instance.

Turning on SQL Server Agent job replication


You can turn on SQL Server Agent job replication by using the following stored procedure:

EXECUTE msdb.dbo.rds_set_system_database_sync_objects @object_types = 'SQLAgentJob';

You can run the stored procedure on all SQL Server versions supported by Amazon RDS for SQL Server.
Jobs in the following categories are replicated:

• [Uncategorized (Local)]
• [Uncategorized (Multi-Server)]
• [Uncategorized]
• Data Collector
• Database Engine Tuning Advisor


• Database Maintenance
• Full-Text

Only jobs that use T-SQL job steps are replicated. Jobs with step types such as SQL Server Integration
Services (SSIS), SQL Server Reporting Services (SSRS), Replication, and PowerShell aren't replicated. Jobs
that use Database Mail and server-level objects aren't replicated.
Important
The primary host is the source of truth for replication. Before turning on job replication, make
sure that your SQL Server Agent jobs are on the primary. If you don't do this, it could lead to the
deletion of your SQL Server Agent jobs if you turn on the feature when newer jobs are on the
secondary host.

You can use the following function to confirm whether replication is turned on.

SELECT * from msdb.dbo.rds_fn_get_system_database_sync_objects();

The T-SQL query returns a row for SQL Server Agent jobs if they are replicating. If they're not replicating,
it returns nothing for object_class.

You can use the following function to find the last time objects were synchronized in UTC time.

SELECT * from msdb.dbo.rds_fn_server_object_last_sync_time();

For example, suppose that you modify a SQL Server Agent job at 01:00. You expect the most recent
synchronization time to be after 01:00, indicating that synchronization has taken place.

After synchronization, the values returned for date_created and date_modified on the secondary
node are expected to match.

Adding a user to the SQLAgentUser role


To allow an additional login or user to use SQL Server Agent, log in as the master user and do the
following:

1. Create another server-level login by using the CREATE LOGIN command.
2. Create a user in msdb by using the CREATE USER command, and then link this user to the login that
you created in the previous step.
3. Add the user to the SQLAgentUserRole using the sp_addrolemember system stored procedure.

For example, suppose that your master user name is admin and you want to give access to SQL Server
Agent to a user named theirname with a password theirpassword. In that case, you can use the
following procedure.


To add a user to the SQLAgentUser role

1. Log in as the master user.


2. Run the following commands:

--Initially set context to master database


USE [master];
GO
--Create a server-level login named theirname with password theirpassword
CREATE LOGIN [theirname] WITH PASSWORD = 'theirpassword';
GO
--Set context to msdb database
USE [msdb];
GO
--Create a database user named theirname and link it to server-level login theirname
CREATE USER [theirname] FOR LOGIN [theirname];
GO
--Add the database user theirname in msdb to the SQLAgentUserRole in msdb
EXEC sp_addrolemember [SQLAgentUserRole], [theirname];

Deleting a SQL Server Agent job


You use the sp_delete_job stored procedure to delete SQL Server Agent jobs on Amazon RDS for
Microsoft SQL Server.

You can't use SSMS to delete SQL Server Agent jobs. If you try to do so, you get an error message similar
to the following:

The EXECUTE permission was denied on the object 'xp_regread', database
'mssqlsystemresource', schema 'sys'.

As a managed service, RDS is restricted from running procedures that access the Windows registry. When
you use SSMS, it tries to run a process (xp_regread) for which RDS isn't authorized.
Note
On RDS for SQL Server, only members of the sysadmin role are allowed to update or delete jobs
owned by a different login.

To delete a SQL Server Agent job

• Run the following T-SQL statement:

EXEC msdb..sp_delete_job @job_name = 'job_name';

Working with Microsoft SQL Server logs


You can use the Amazon RDS console to view, watch, and download SQL Server Agent logs, Microsoft
SQL Server error logs, and SQL Server Reporting Services (SSRS) logs.

Watching log files


If you view a log in the Amazon RDS console, you can see its contents as they exist at that moment.
Watching a log in the console opens it in a dynamic state so that you can see updates to it in near-real
time.


Only the latest log is active for watching. For example, suppose that several error logs are listed in the
console. Only log/ERROR, as the most recent log, is being actively updated. You can choose to watch
others, but they are static and will not update.

Archiving log files


The Amazon RDS console shows logs for the past week through the current day. You can download and
archive logs to keep them for reference past that time. One way to archive logs is to load them into
an Amazon S3 bucket. For instructions on how to set up an Amazon S3 bucket and upload a file, see
Amazon S3 basics in the Amazon Simple Storage Service Getting Started Guide.

Viewing error and agent logs


To view Microsoft SQL Server error and agent logs, use the Amazon RDS stored procedure
rds_read_error_log with the following parameters:

• @index – the version of the log to retrieve. The default value is 0, which retrieves the current error log.
Specify 1 to retrieve the previous log, specify 2 to retrieve the one before that, and so on.
• @type – the type of log to retrieve. Specify 1 to retrieve an error log. Specify 2 to retrieve an agent
log.

Example

The following example requests the current error log.

EXEC rdsadmin.dbo.rds_read_error_log @index = 0, @type = 1;

For more information on SQL Server errors, see Database engine errors in the Microsoft documentation.

Working with trace and dump files


This section describes working with trace files and dump files for your Amazon RDS DB instances running
Microsoft SQL Server.


Generating a trace SQL query


declare @rc int
declare @TraceID int
declare @maxfilesize bigint

set @maxfilesize = 5

exec @rc = sp_trace_create @TraceID output, 0, N'D:\rdsdbdata\log\rdstest', @maxfilesize, NULL

Viewing an open trace


select * from ::fn_trace_getinfo(default)

Viewing trace contents


select * from ::fn_trace_gettable('D:\rdsdbdata\log\rdstest.trc', default)

Setting the retention period for trace and dump files


Trace and dump files can accumulate and consume disk space. By default, Amazon RDS purges trace and
dump files that are older than seven days.

To view the current trace and dump file retention period, use the rds_show_configuration
procedure, as shown in the following example.

exec rdsadmin..rds_show_configuration;

To modify the retention period for trace files, use the rds_set_configuration procedure and set the
tracefile retention in minutes. The following example sets the trace file retention period to 24
hours.

exec rdsadmin..rds_set_configuration 'tracefile retention', 1440;

To modify the retention period for dump files, use the rds_set_configuration procedure and set the
dumpfile retention in minutes. The following example sets the dump file retention period to 3 days.

exec rdsadmin..rds_set_configuration 'dumpfile retention', 4320;

For security reasons, you cannot delete a specific trace or dump file on a SQL Server DB instance. To
delete all unused trace or dump files, set the retention period for the files to 0.


Amazon RDS for MySQL


Amazon RDS supports DB instances that run the following versions of MySQL:

• MySQL 8.0
• MySQL 5.7

For more information about minor version support, see MySQL on Amazon RDS versions (p. 1627).

To create an Amazon RDS for MySQL DB instance, use the Amazon RDS management tools or interfaces.
You can then do the following:

• Resize your DB instance


• Authorize connections to your DB instance
• Create and restore from backups or snapshots
• Create Multi-AZ secondaries
• Create read replicas
• Monitor the performance of your DB instance

To store and access the data in your DB instance, you use standard MySQL utilities and applications.

Amazon RDS for MySQL is compliant with many industry standards. For example, you can use RDS for
MySQL databases to build HIPAA-compliant applications. You can use RDS for MySQL databases to
store healthcare related information, including protected health information (PHI) under a Business
Associate Agreement (BAA) with AWS. Amazon RDS for MySQL also meets Federal Risk and Authorization
Management Program (FedRAMP) security requirements. In addition, Amazon RDS for MySQL has
received a FedRAMP Joint Authorization Board (JAB) Provisional Authority to Operate (P-ATO) at the
FedRAMP HIGH Baseline within the AWS GovCloud (US) Regions. For more information on supported
compliance standards, see AWS cloud compliance.

For information about the features in each version of MySQL, see The main features of MySQL in the
MySQL documentation.

Before creating a DB instance, complete the steps in Setting up for Amazon RDS (p. 174). When you
create a DB instance, the RDS master user gets DBA privileges, with some limitations. Use this account
for administrative tasks such as creating additional database accounts.

You can create the following:

• DB instances
• DB snapshots
• Point-in-time restores
• Automated backups
• Manual backups

You can use DB instances running MySQL inside a virtual private cloud (VPC) based on Amazon VPC. You
can also add features to your MySQL DB instance by turning on various options. Amazon RDS supports
Multi-AZ deployments for MySQL as a high-availability, failover solution.
Important
To deliver a managed service experience, Amazon RDS doesn't provide shell access to DB
instances. It also restricts access to certain system procedures and tables that need advanced
privileges. You can access your database using standard SQL clients such as the mysql client.
However, you can't access the host directly by using Telnet or Secure Shell (SSH).

Topics
• MySQL feature support on Amazon RDS (p. 1624)
• MySQL on Amazon RDS versions (p. 1627)
• Connecting to a DB instance running the MySQL database engine (p. 1630)
• Securing MySQL DB instance connections (p. 1637)
• Improving query performance for RDS for MySQL with Amazon RDS Optimized Reads (p. 1656)
• Improving write performance with Amazon RDS Optimized Writes for MySQL (p. 1659)
• Upgrading the MySQL DB engine (p. 1664)
• Importing data into a MySQL DB instance (p. 1674)
• Working with MySQL replication in Amazon RDS (p. 1708)
• Exporting data from a MySQL DB instance by using replication (p. 1728)
• Options for MySQL DB instances (p. 1732)
• Parameters for MySQL (p. 1742)
• Common DBA tasks for MySQL DB instances (p. 1744)
• Local time zone for MySQL DB instances (p. 1749)
• Known issues and limitations for Amazon RDS for MySQL (p. 1752)
• RDS for MySQL stored procedure reference (p. 1757)


MySQL feature support on Amazon RDS


RDS for MySQL supports most of the features and capabilities of MySQL. Some features might have
limited support or restricted privileges.

You can filter new Amazon RDS features on the What's New with Database? page. For Products, choose
Amazon RDS. Then search using keywords such as MySQL 2022.
Note
The following lists are not exhaustive.

Topics
• Supported storage engines for RDS for MySQL (p. 1624)
• Using memcached and other options with MySQL on Amazon RDS (p. 1624)
• InnoDB cache warming for MySQL on Amazon RDS (p. 1625)
• MySQL features not supported by Amazon RDS (p. 1625)

Supported storage engines for RDS for MySQL


While MySQL supports multiple storage engines with varying capabilities, not all of them are optimized
for recovery and data durability. Amazon RDS fully supports the InnoDB storage engine for MySQL DB
instances. Amazon RDS features such as Point-In-Time restore and snapshot restore require a recoverable
storage engine and are supported for the InnoDB storage engine only. For more information, see MySQL
memcached support (p. 1738).

The Federated Storage Engine is currently not supported by Amazon RDS for MySQL.

For user-created schemas, the MyISAM storage engine does not support reliable recovery and can result
in lost or corrupt data when MySQL is restarted after a recovery, preventing Point-In-Time restore or
snapshot restore from working as intended. However, if you still choose to use MyISAM with Amazon
RDS, snapshots can be helpful under some conditions.
Note
System tables in the mysql schema can be in MyISAM storage.

If you want to convert existing MyISAM tables to InnoDB tables, you can use the ALTER TABLE
command (for example, alter table TABLE_NAME engine=innodb;). Bear in mind that MyISAM
and InnoDB have different strengths and weaknesses, so you should fully evaluate the impact of making
this switch on your applications before doing so.

MySQL 5.1, 5.5, and 5.6 are no longer supported in Amazon RDS. However, you can restore existing
MySQL 5.1, 5.5, and 5.6 snapshots. When you restore a MySQL 5.1, 5.5, or 5.6 snapshot, the DB instance
is automatically upgraded to MySQL 5.7.

Using memcached and other options with MySQL on Amazon RDS
Most Amazon RDS DB engines support option groups that allow you to select additional features
for your DB instance. RDS for MySQL DB instances support the memcached option, a simple, key-
based cache. For more information about memcached and other options, see Options for MySQL DB
instances (p. 1732). For more information about working with option groups, see Working with option
groups (p. 331).


InnoDB cache warming for MySQL on Amazon RDS


InnoDB cache warming can provide performance gains for your MySQL DB instance by saving the current
state of the buffer pool when the DB instance is shut down, and then reloading the buffer pool from the
saved information when the DB instance starts up. This bypasses the need for the buffer pool to "warm
up" from normal database use and instead preloads the buffer pool with the pages for known common
queries. The file that stores the saved buffer pool information only stores metadata for the pages that
are in the buffer pool, and not the pages themselves. As a result, the file does not require much storage
space. The file size is about 0.2 percent of the cache size. For example, for a 64 GiB cache, the cache
warming file size is 128 MiB. For more information on InnoDB cache warming, see Saving and restoring
the buffer pool state in the MySQL documentation.

RDS for MySQL DB instances support InnoDB cache warming. To enable InnoDB cache warming, set
the innodb_buffer_pool_dump_at_shutdown and innodb_buffer_pool_load_at_startup
parameters to 1 in the parameter group for your DB instance. Changing these parameter values in a
parameter group will affect all MySQL DB instances that use that parameter group. To enable InnoDB
cache warming for specific MySQL DB instances, you might need to create a new parameter group for
those instances. For information on parameter groups, see Working with parameter groups (p. 347).
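
After the parameter group changes take effect, you can verify the settings from a SQL client. The
following is a minimal sketch that uses standard MySQL commands and only reads the current values.

-- Confirm that InnoDB cache warming is enabled; both values should be ON
SHOW VARIABLES LIKE 'innodb_buffer_pool_dump_at_shutdown';
SHOW VARIABLES LIKE 'innodb_buffer_pool_load_at_startup';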

InnoDB cache warming primarily provides a performance benefit for DB instances that use standard
storage. If you use PIOPS storage, you do not commonly see a significant performance benefit.
Important
If your MySQL DB instance does not shut down normally, such as during a failover, then the
buffer pool state will not be saved to disk. In this case, MySQL loads whatever buffer pool file is
available when the DB instance is restarted. No harm is done, but the restored buffer pool might
not reflect the most recent state of the buffer pool prior to the restart. To ensure that you have
a recent state of the buffer pool available to warm the InnoDB cache on startup, we recommend
that you periodically dump the buffer pool "on demand."
You can create an event to dump the buffer pool automatically and on a regular interval. For
example, the following statement creates an event named periodic_buffer_pool_dump
that dumps the buffer pool every hour.

CREATE EVENT periodic_buffer_pool_dump
ON SCHEDULE EVERY 1 HOUR
DO CALL mysql.rds_innodb_buffer_pool_dump_now();

For more information on MySQL events, see Event syntax in the MySQL documentation.

Dumping and loading the buffer pool on demand


You can save and load the InnoDB cache "on demand."

• To dump the current state of the buffer pool to disk, call the
mysql.rds_innodb_buffer_pool_dump_now (p. 1784) stored procedure.
• To load the saved state of the buffer pool from disk, call the
mysql.rds_innodb_buffer_pool_load_now (p. 1784) stored procedure.
• To cancel a load operation in progress, call the mysql.rds_innodb_buffer_pool_load_abort (p. 1784)
stored procedure.

MySQL features not supported by Amazon RDS


Amazon RDS doesn't currently support the following MySQL features:

• Authentication Plugin


• Error Logging to the System Log


• Group Replication Plugin
• InnoDB Tablespace Encryption
• Password Strength Plugin
• Persisted system variables
• Rewriter Query Rewrite Plugin
• Semisynchronous replication
• Transportable tablespace
• X Plugin

Note
Global transaction IDs are supported for all RDS for MySQL 5.7 versions, and for RDS for MySQL
8.0.26 and higher 8.0 versions.

To deliver a managed service experience, Amazon RDS doesn't provide shell access to DB instances. It
also restricts access to certain system procedures and tables that require advanced privileges. Amazon
RDS supports access to databases on a DB instance using any standard SQL client application. Amazon
RDS doesn't allow direct host access to a DB instance by using Telnet, Secure Shell (SSH), or Windows
Remote Desktop Connection. When you create a DB instance, you are assigned to the db_owner role for
all databases on that instance, and you have all database-level permissions except for those used for
backups. Amazon RDS manages backups for you.


MySQL on Amazon RDS versions


For MySQL, version numbers are organized as version = X.Y.Z. In Amazon RDS terminology, X.Y denotes
the major version, and Z is the minor version number. For Amazon RDS implementations, a version
change is considered major if the major version number changes—for example, going from version 5.7 to
8.0. A version change is considered minor if only the minor version number changes—for example, going
from version 8.0.28 to 8.0.32.

Topics
• Supported MySQL minor versions on Amazon RDS (p. 1627)
• Supported MySQL major versions on Amazon RDS (p. 1629)
• Deprecated versions for Amazon RDS for MySQL (p. 1629)

Supported MySQL minor versions on Amazon RDS


Amazon RDS currently supports the following minor versions of MySQL.
Note
Dates with only a month and a year are approximate and are updated with an exact date when
it’s known.

MySQL engine version    Community release date    RDS release date    RDS end of standard support date

8.0

8.0.34 18 July 2023 9 August 2023 September 2024

8.0.33 18 April 2023 15 June 2023 September 2024

8.0.32 17 January 2023 7 February 2023 March 2024

8.0.31 11 October 2022 10 November 2022 March 2024

8.0.30 26 July 2022 9 September 2022 September 2023

8.0.28 18 January 2022 11 March 2022 March 2024

5.7

5.7.44* Not yet released Not yet released February 2024

5.7.43 18 July 2023 9 August 2023 February 2024

5.7.42 18 April 2023 15 June 2023 December 2023

5.7.41 17 January 2023 7 February 2023 December 2023

5.7.40 11 October 2022 11 November 2022 December 2023

5.7.39 26 July 2022 29 September 2022 December 2023

5.7.38 26 April 2022 6 June 2022 December 2023

5.7.37 18 January 2022 11 March 2022 December 2023


* Amazon RDS Extended Support eligible minor engine version. For more information, see Using Amazon
RDS Extended Support (p. 565).

You can specify any currently supported MySQL version when creating a new DB instance. You can
specify the major version (such as MySQL 5.7), and any supported minor version for the specified major
version. If no version is specified, Amazon RDS defaults to a supported version, typically the most recent
version. If a major version is specified but a minor version is not, Amazon RDS defaults to a recent release
of the major version you have specified. To see a list of supported versions, as well as defaults for newly
created DB instances, use the describe-db-engine-versions AWS CLI command.

For example, to list the supported engine versions for RDS for MySQL, run the following CLI command:

aws rds describe-db-engine-versions --engine mysql --query "*[].{Engine:Engine,EngineVersion:EngineVersion}" --output text

The default MySQL version might vary by AWS Region. To create a DB instance with a specific minor
version, specify the minor version during DB instance creation. You can determine the default minor
version for an AWS Region using the following AWS CLI command:

aws rds describe-db-engine-versions --default-only --engine mysql --engine-version major-engine-version --region region --query "*[].{Engine:Engine,EngineVersion:EngineVersion}" --output text

Replace major-engine-version with the major engine version, and replace region with the AWS
Region. For example, the following AWS CLI command returns the default MySQL minor engine version
for the 5.7 major version and the US West (Oregon) AWS Region (us-west-2):

aws rds describe-db-engine-versions --default-only --engine mysql --engine-version 5.7 --region us-west-2 --query "*[].{Engine:Engine,EngineVersion:EngineVersion}" --output text

With Amazon RDS, you control when to upgrade your MySQL instance to a new major version supported
by Amazon RDS. You can maintain compatibility with specific MySQL versions, test new versions with
your application before deploying in production, and perform major version upgrades at times that best
fit your schedule.

When automatic minor version upgrade is enabled, your DB instance will be upgraded automatically
to new MySQL minor versions as they are supported by Amazon RDS. This patching occurs during your
scheduled maintenance window. You can modify a DB instance to enable or disable automatic minor
version upgrades.

If you opt out of automatically scheduled upgrades, you can manually upgrade to a supported
minor version release by following the same procedure as you would for a major version update. For
information, see Upgrading a DB instance engine version (p. 429).

Amazon RDS currently supports the major version upgrades from MySQL version 5.6 to version 5.7,
and from MySQL version 5.7 to version 8.0. Because major version upgrades involve some compatibility
risk, they do not occur automatically; you must make a request to modify the DB instance. You should
thoroughly test any upgrade before upgrading your production instances. For information about
upgrading a MySQL DB instance, see Upgrading the MySQL DB engine (p. 1664).

You can test a DB instance against a new version before upgrading by creating a DB snapshot of your
existing DB instance, restoring from the DB snapshot to create a new DB instance, and then initiating a
version upgrade for the new DB instance. You can then experiment safely on the upgraded clone of your
DB instance before deciding whether or not to upgrade your original DB instance.


Supported MySQL major versions on Amazon RDS


RDS for MySQL major versions are available under standard support at least until community end of life
for the corresponding community version. You can continue running a major version past its RDS end of
standard support date for a fee. For more information, see Using Amazon RDS Extended Support (p. 565)
and Amazon RDS for MySQL pricing.

You can use the following dates to plan your testing and upgrade cycles.
Note
Dates with only a month and a year are approximate and are updated with an exact date when
it’s known.

MySQL 8.0
• Community release date: 19 April 2018
• RDS release date: 23 October 2018
• Community end of life date: April 2026
• RDS end of standard support date: 31 July 2026
• RDS start of Extended Support year 1 pricing date: 1 August 2026
• RDS start of Extended Support year 3 pricing date: 1 August 2028
• RDS end of Extended Support date: 31 July 2029

MySQL 5.7
• Community release date: 21 October 2015
• RDS release date: 22 February 2016
• Community end of life date: October 2023
• RDS end of standard support date: 29 February 2024
• RDS start of Extended Support year 1 pricing date: 1 March 2024
• RDS start of Extended Support year 3 pricing date: 1 March 2026
• RDS end of Extended Support date: 28 February 2027

MySQL 5.6
• Community release date: 5 February 2013
• RDS release date: 1 July 2013
• Community end of life date: 5 February 2021
• RDS end of standard support date: 1 March 2022
• RDS start of Extended Support year 1 pricing date: N/A
• RDS start of Extended Support year 3 pricing date: N/A
• RDS end of Extended Support date: N/A

Deprecated versions for Amazon RDS for MySQL


Amazon RDS for MySQL version 5.1, 5.5, and 5.6 are deprecated.

For information about the Amazon RDS deprecation policy for MySQL, see Amazon RDS FAQs.


Connecting to a DB instance running the MySQL database engine
Before you can connect to a DB instance running the MySQL database engine, you must create a
DB instance. For information, see Creating an Amazon RDS DB instance (p. 300). After Amazon RDS
provisions your DB instance, you can use any standard MySQL client application or utility to connect to
the instance. In the connection string, you specify the DNS address from the DB instance endpoint as the
host parameter, and specify the port number from the DB instance endpoint as the port parameter.

To authenticate to your RDS DB instance, you can use one of the authentication methods for MySQL or
AWS Identity and Access Management (IAM) database authentication:

• To learn how to authenticate to MySQL using one of the authentication methods for MySQL, see
Authentication method in the MySQL documentation.
• To learn how to authenticate to MySQL using IAM database authentication, see IAM database
authentication for MariaDB, MySQL, and PostgreSQL (p. 2642).

You can connect to a MySQL DB instance by using tools like the MySQL command-line client. For more
information on using the MySQL command-line client, see mysql - the MySQL command-line client in
the MySQL documentation. One GUI-based application you can use to connect is MySQL Workbench. For
more information, see the Download MySQL Workbench page. For information about installing MySQL
(including the MySQL command-line client), see Installing and upgrading MySQL.

Most Linux distributions include the MariaDB client instead of the Oracle MySQL client. To install the
MySQL command-line client on Amazon Linux 2023, run the following command:

sudo dnf install mariadb105

To install the MySQL command-line client on Amazon Linux 2, run the following command:

sudo yum install mariadb

To install the MySQL command-line client on most DEB-based Linux distributions, run the following
command:

apt-get install mariadb-client

To check the version of your MySQL command-line client, run the following command:

mysql --version

To read the MySQL documentation for your current client version, run the following command:

man mysql

To connect to a DB instance from outside of its Amazon VPC, the DB instance must be publicly
accessible, access must be granted using the inbound rules of the DB instance's security group,
and other requirements must be met. For more information, see Can't connect to Amazon RDS DB
instance (p. 2727).

You can use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) encryption on connections
to a MySQL DB instance. For information, see Using SSL/TLS with a MySQL DB instance (p. 1639). If
you are using AWS Identity and Access Management (IAM) database authentication, make sure to use
an SSL/TLS connection. For information, see IAM database authentication for MariaDB, MySQL, and
PostgreSQL (p. 2642).

You can also connect to a DB instance from a web server. For more information, see Tutorial: Create a
web server and an Amazon RDS DB instance (p. 249).
Note
For information on connecting to a MariaDB DB instance, see Connecting to a DB instance
running the MariaDB database engine (p. 1269).

Topics
• Finding the connection information for a MySQL DB instance (p. 1631)
• Connecting from the MySQL command-line client (unencrypted) (p. 1633)
• Connecting from MySQL Workbench (p. 1634)
• Connecting with the Amazon Web Services JDBC Driver for MySQL (p. 1635)
• Troubleshooting connections to your MySQL DB instance (p. 1636)

Finding the connection information for a MySQL DB instance
The connection information for a DB instance includes its endpoint, port, and a valid database user,
such as the master user. For example, suppose that an endpoint value is mydb.123456789012.us-
east-1.rds.amazonaws.com. In this case, the port value is 3306, and the database user is admin.
Given this information, you specify the following values in a connection string:

• For host or host name or DNS name, specify mydb.123456789012.us-east-1.rds.amazonaws.com.
• For port, specify 3306.
• For user, specify admin.

To connect to a DB instance, use any client for the MySQL DB engine. For example, you might use the
MySQL command-line client or MySQL Workbench.

To find the connection information for a DB instance, you can use the AWS Management Console, the
AWS CLI describe-db-instances command, or the Amazon RDS API DescribeDBInstances operation to list
its details.

Console

To find the connection information for a DB instance in the AWS Management Console

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases to display a list of your DB instances.
3. Choose the name of the MySQL DB instance to display its details.
4. On the Connectivity & security tab, copy the endpoint. Also, note the port number. You need both
the endpoint and the port number to connect to the DB instance.


5. If you need to find the master user name, choose the Configuration tab and view the Master
username value.

AWS CLI
To find the connection information for a MySQL DB instance by using the AWS CLI, call the describe-db-
instances command. In the call, query for the DB instance ID, endpoint, port, and master user name.


For Linux, macOS, or Unix:

aws rds describe-db-instances \
    --filters "Name=engine,Values=mysql" \
    --query "*[].[DBInstanceIdentifier,Endpoint.Address,Endpoint.Port,MasterUsername]"

For Windows:

aws rds describe-db-instances ^
    --filters "Name=engine,Values=mysql" ^
    --query "*[].[DBInstanceIdentifier,Endpoint.Address,Endpoint.Port,MasterUsername]"

Your output should be similar to the following.

[
[
"mydb1",
"mydb1.123456789012.us-east-1.rds.amazonaws.com",
3306,
"admin"
],
[
"mydb2",
"mydb2.123456789012.us-east-1.rds.amazonaws.com",
3306,
"admin"
]
]

RDS API
To find the connection information for a DB instance by using the Amazon RDS API, call the
DescribeDBInstances operation. In the output, find the values for the endpoint address, endpoint port,
and master user name.

Connecting from the MySQL command-line client (unencrypted)
Important
Only use an unencrypted MySQL connection when the client and server are in the same VPC and
the network is trusted. For information about using encrypted connections, see Connecting from
the MySQL command-line client with SSL/TLS (encrypted) (p. 1640).

To connect to a DB instance using the MySQL command-line client, enter the following command at
the command prompt. For the -h parameter, substitute the DNS name (endpoint) for your DB instance.
For the -P parameter, substitute the port for your DB instance. For the -u parameter, substitute the user
name of a valid database user, such as the master user. Enter the master user password when prompted.

mysql -h mysql-instance1.123456789012.us-east-1.rds.amazonaws.com -P 3306 -u mymasteruser -p

After you enter the password for the user, you should see output similar to the following.

Welcome to the MySQL monitor. Commands end with ; or \g.


Your MySQL connection id is 9738
Server version: 8.0.28 Source distribution

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql>

Connecting from MySQL Workbench


To connect from MySQL Workbench

1. Download and install MySQL Workbench at Download MySQL Workbench.


2. Open MySQL Workbench.

3. From Database, choose Manage Connections.


4. In the Manage Server Connections window, choose New.
5. In the Connect to Database window, enter the following information:

• Stored Connection – Enter a name for the connection, such as MyDB.


• Hostname – Enter the DB instance endpoint.
• Port – Enter the port used by the DB instance.
• Username – Enter the user name of a valid database user, such as the master user.
• Password – Optionally, choose Store in Vault and then enter and save the password for the user.



You can use the features of MySQL Workbench to customize connections. For example, you can use
the SSL tab to configure SSL/TLS connections. For information about using MySQL Workbench, see
the MySQL Workbench documentation. For information about encrypting client connections to MySQL
DB instances with SSL/TLS, see Encrypting client connections to MySQL DB instances with SSL/TLS (p. 1639).
6. Optionally, choose Test Connection to confirm that the connection to the DB instance is successful.
7. Choose Close.
8. From Database, choose Connect to Database.
9. From Stored Connection, choose your connection.
10. Choose OK.

Connecting with the Amazon Web Services JDBC Driver for MySQL
The AWS JDBC Driver for MySQL is a client driver designed for RDS for MySQL. By default, the driver
has settings that are optimized for use with RDS for MySQL. For more information about the driver and
complete instructions for using it, see the AWS JDBC Driver for MySQL GitHub repository.

The driver is drop-in compatible with the MySQL Connector/J driver. To install or upgrade your
connector, replace the MySQL connector .jar file (located in the application CLASSPATH) with the
AWS JDBC Driver for MySQL .jar file, and update the connection URL prefix from jdbc:mysql:// to
jdbc:mysql:aws://.

The AWS JDBC Driver for MySQL supports IAM database authentication. For more information, see
AWS IAM Database Authentication in the AWS JDBC Driver for MySQL GitHub repository. For more
information about IAM database authentication, see IAM database authentication for MariaDB, MySQL,
and PostgreSQL (p. 2642).


Troubleshooting connections to your MySQL DB instance
Two common causes of connection failures to a new DB instance are:

• The DB instance was created using a security group that doesn't authorize connections from the device
or Amazon EC2 instance where the MySQL application or utility is running. The DB instance must have
a VPC security group that authorizes the connections. For more information, see Amazon VPC VPCs
and Amazon RDS (p. 2688).

You can add or edit an inbound rule in the security group. For Source, choose My IP. This allows access
to the DB instance from the IP address detected in your browser.
• The DB instance was created using the default port of 3306, and your company has firewall rules
blocking connections to that port from devices in your company network. To fix this failure, recreate
the instance with a different port.

For more information on connection issues, see Can't connect to Amazon RDS DB instance (p. 2727).


Securing MySQL DB instance connections


You can manage the security of your MySQL DB instances.

Topics
• MySQL security on Amazon RDS (p. 1637)
• Using the Password Validation Plugin for RDS for MySQL (p. 1638)
• Encrypting client connections to MySQL DB instances with SSL/TLS (p. 1639)
• Updating applications to connect to MySQL DB instances using new SSL/TLS certificates (p. 1642)
• Using Kerberos authentication for MySQL (p. 1645)

MySQL security on Amazon RDS


Security for MySQL DB instances is managed at three levels:

• AWS Identity and Access Management controls who can perform Amazon RDS management actions
on DB instances. When you connect to AWS using IAM credentials, your IAM account must have IAM
policies that grant the permissions required to perform Amazon RDS management operations. For
more information, see Identity and access management for Amazon RDS (p. 2606).
• When you create a DB instance, you use a VPC security group to control which devices and Amazon
EC2 instances can open connections to the endpoint and port of the DB instance. These connections
can be made using Secure Sockets Layer (SSL) and Transport Layer Security (TLS). In addition, firewall
rules at your company can control whether devices running at your company can open connections to
the DB instance.
• To authenticate login and permissions for a MySQL DB instance, you can take either of the following
approaches, or a combination of them.

You can take the same approach as with a stand-alone instance of MySQL. Commands such as CREATE
USER, RENAME USER, GRANT, REVOKE, and SET PASSWORD work just as they do in on-premises
databases, as does directly modifying database schema tables. For information, see Access control and
account management in the MySQL documentation.

You can also use IAM database authentication. With IAM database authentication, you authenticate
to your DB instance by using an IAM user or IAM role and an authentication token. An authentication
token is a unique value that is generated using the Signature Version 4 signing process. By using IAM
database authentication, you can use the same credentials to control access to your AWS resources
and your databases. For more information, see IAM database authentication for MariaDB, MySQL, and
PostgreSQL (p. 2642).

Another option is Kerberos authentication for RDS for MySQL. The DB instance works with AWS
Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) to enable Kerberos
authentication. When users authenticate with a MySQL DB instance joined to the trusting domain,
authentication requests are forwarded. Forwarded requests go to the domain directory that you
create with AWS Directory Service. For more information, see Using Kerberos authentication for
MySQL (p. 1645).

When you create an Amazon RDS DB instance, the master user has the following default privileges:

• alter
• alter routine
• create
• create routine


• create temporary tables


• create user
• create view
• delete
• drop
• event
• execute
• grant option
• index
• insert
• lock tables
• process
• references
• replication client
• replication slave
• select
• show databases
• show view
• trigger
• update

Note
Although it is possible to delete the master user on the DB instance, it is not recommended.
To recreate the master user, use the ModifyDBInstance RDS API operation or the modify-db-
instance AWS CLI command and specify a new master user password with the appropriate
parameter. If the master user does not exist in the instance, the master user is created with the
specified password.

To provide management services for each DB instance, the rdsadmin user is created when the DB
instance is created. Attempting to drop, rename, change the password, or change privileges for the
rdsadmin account will result in an error.

To allow management of the DB instance, the standard kill and kill_query commands have been
restricted. The Amazon RDS commands rds_kill and rds_kill_query are provided to allow you to
end user sessions or queries on DB instances.
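
For example, the following sketch shows how you might end a session by its thread ID. The thread ID
5432 is a placeholder; substitute the ID returned by SHOW PROCESSLIST for the session that you want
to end.

-- Find the thread ID of the session or query that you want to end
SHOW PROCESSLIST;

-- End the session; use mysql.rds_kill_query to end only the running query instead
CALL mysql.rds_kill(5432);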

Using the Password Validation Plugin for RDS for MySQL
MySQL provides the validate_password plugin for improved security. The plugin enforces password
policies using parameters in the DB parameter group for your MySQL DB instance. The plugin is
supported for DB instances running MySQL version 5.7 and 8.0. For more information about the
validate_password plugin, see The Password Validation Plugin in the MySQL documentation.

To enable the validate_password plugin for a MySQL DB instance

1. Connect to your MySQL DB instance and run the following command.


INSTALL PLUGIN validate_password SONAME 'validate_password.so';

2. Configure the parameters for the plugin in the DB parameter group used by the DB instance.

For more information about the parameters, see Password Validation Plugin options and variables in
the MySQL documentation.

For more information about modifying DB instance parameters, see Modifying parameters in a DB
parameter group (p. 352).
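
After you complete these steps, you can confirm that the plugin is active and inspect its current policy
settings from a SQL client. The following is a minimal sketch that uses standard MySQL commands and
only reads information.

-- Confirm that the validate_password plugin is installed and active
SHOW PLUGINS;

-- View the current password validation policy settings
SHOW VARIABLES LIKE 'validate_password%';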

After installing and enabling the validate_password plugin, reset existing passwords to comply with
your new validation policies.

Amazon RDS doesn't validate passwords. The MySQL DB instance performs password validation. If you
set a user password with the AWS Management Console, the modify-db-instance AWS CLI command,
or the ModifyDBInstance RDS API operation, the change can succeed even if the new password
doesn't satisfy your password policies. However, a new password is set in the MySQL DB instance only if
it satisfies the password policies. In this case, Amazon RDS records the following event.

"RDS-EVENT-0067" - An attempt to reset the master password for the DB instance has failed.

For more information about Amazon RDS events, see Working with Amazon RDS event
notification (p. 855).

Encrypting client connections to MySQL DB instances with SSL/TLS
Secure Sockets Layer (SSL) is an industry-standard protocol for securing network connections between
client and server. After SSL version 3.0, the name was changed to Transport Layer Security (TLS).
Amazon RDS supports SSL/TLS encryption for MySQL DB instances. Using SSL/TLS, you can encrypt a
connection between your application client and your MySQL DB instance. SSL/TLS support is available in
all AWS Regions for MySQL.

Topics
• Using SSL/TLS with a MySQL DB instance (p. 1639)
• Requiring SSL/TLS for all connections to a MySQL DB instance (p. 1640)
• Connecting from the MySQL command-line client with SSL/TLS (encrypted) (p. 1640)

Using SSL/TLS with a MySQL DB instance


Amazon RDS creates an SSL/TLS certificate and installs the certificate on the DB instance when Amazon
RDS provisions the instance. These certificates are signed by a certificate authority. The SSL/TLS
certificate includes the DB instance endpoint as the Common Name (CN) for the SSL/TLS certificate to
guard against spoofing attacks.

An SSL/TLS certificate created by Amazon RDS is the trusted root entity and should work in most cases
but might fail if your application does not accept certificate chains. If your application does not accept
certificate chains, you might need to use an intermediate certificate to connect to your AWS Region. For
example, you must use an intermediate certificate to connect to the AWS GovCloud (US) Regions using
SSL/TLS.


For information about downloading certificates, see Using SSL/TLS to encrypt a connection to a DB
instance (p. 2591). For more information about using SSL/TLS with MySQL, see Updating applications to
connect to MySQL DB instances using new SSL/TLS certificates (p. 1642).

MySQL uses OpenSSL for secure connections. Amazon RDS for MySQL supports Transport Layer Security
(TLS) versions 1.0, 1.1, 1.2, and 1.3. TLS support depends on the MySQL version. The following table
shows the TLS support for MySQL versions.

MySQL version    TLS 1.0          TLS 1.1          TLS 1.2      TLS 1.3
MySQL 8.0        Not supported    Not supported    Supported    Supported
MySQL 5.7        Supported        Supported        Supported    Not supported

You can require SSL/TLS connections for specific user accounts. For example, you can use the following
statement to require SSL/TLS connections on the user account encrypted_user.

ALTER USER 'encrypted_user'@'%' REQUIRE SSL;

For more information on SSL/TLS connections with MySQL, see Using encrypted connections in the
MySQL documentation.
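To confirm which accounts already require encrypted connections, you can query the grant tables, as in the following example.

SELECT user, host, ssl_type
FROM mysql.user
WHERE ssl_type <> '';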

Requiring SSL/TLS for all connections to a MySQL DB instance


Use the require_secure_transport parameter to require that all user connections to your MySQL
DB instance use SSL/TLS. By default, the require_secure_transport parameter is set to OFF. You
can set the require_secure_transport parameter to ON to require SSL/TLS for connections to your
DB instance.

You can set the require_secure_transport parameter value by updating the DB parameter group
for your DB instance. You don't need to reboot your DB instance for the change to take effect.
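For example, a command similar to the following turns the parameter on in a custom DB parameter group. The parameter group name is a placeholder, and this sketch assumes that the boolean value is expressed as 1 (equivalent to ON) in the parameter group.

# The parameter group name is a placeholder; 1 is equivalent to ON.
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mysql-parameter-group \
    --parameters "ParameterName=require_secure_transport,ParameterValue=1,ApplyMethod=immediate"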

When the require_secure_transport parameter is set to ON for a DB instance, a database client can
connect to it only if it can establish an encrypted connection. Otherwise, an error message similar to the
following is returned to the client:

MySQL Error 3159 (HY000): Connections using insecure transport are prohibited while --require_secure_transport=ON.

For information about setting parameters, see Modifying parameters in a DB parameter group (p. 352).

For more information about the require_secure_transport parameter, see the MySQL
documentation.

Connecting from the MySQL command-line client with SSL/TLS (encrypted)
The mysql client program parameters differ slightly depending on whether you are using the MySQL 5.7
client, the MySQL 8.0 client, or the MariaDB client.


To find out which version you have, run the mysql command with the --version option. In the
following example, the output shows that the client program is from MariaDB.

$ mysql --version
mysql Ver 15.1 Distrib 10.5.15-MariaDB, for osx10.15 (x86_64) using readline 5.1

Most Linux distributions, such as Amazon Linux, CentOS, SUSE, and Debian, have replaced MySQL with
MariaDB, so the mysql client version in them is from MariaDB.

To connect to your DB instance using SSL/TLS, follow these steps:

To connect to a DB instance with SSL/TLS using the MySQL command-line client

1. Download a root certificate that works for all AWS Regions.

For information about downloading certificates, see Using SSL/TLS to encrypt a connection to a DB
instance (p. 2591).
2. Use a MySQL command-line client to connect to a DB instance with SSL/TLS encryption. For the -h
parameter, substitute the DNS name (endpoint) for your DB instance. For the --ssl-ca parameter,
substitute the SSL/TLS certificate file name. For the -P parameter, substitute the port for your DB
instance. For the -u parameter, substitute the user name of a valid database user, such as the master
user. Enter the master user password when prompted.

The following example shows how to launch the client using the --ssl-ca parameter with the
MySQL 5.7 client or later:

mysql -h mysql-instance1.123456789012.us-east-1.rds.amazonaws.com --ssl-ca=global-bundle.pem --ssl-mode=REQUIRED -P 3306 -u myadmin -p

To require that the SSL/TLS connection verifies the DB instance endpoint against the endpoint in the
SSL/TLS certificate, enter the following command:

mysql -h mysql-instance1.123456789012.us-east-1.rds.amazonaws.com --ssl-ca=global-bundle.pem --ssl-mode=VERIFY_IDENTITY -P 3306 -u myadmin -p

The following example shows how to launch the client using the --ssl-ca parameter with the
MariaDB client:

mysql -h mysql-instance1.123456789012.us-east-1.rds.amazonaws.com --ssl-ca=global-bundle.pem --ssl -P 3306 -u myadmin -p

3. Enter the master user password when prompted.

You will see output similar to the following.

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 9738
Server version: 8.0.28 Source distribution

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql>


Updating applications to connect to MySQL DB instances using new SSL/TLS certificates
As of January 13, 2023, Amazon RDS has published new Certificate Authority (CA) certificates for
connecting to your RDS DB instances using Secure Sockets Layer or Transport Layer Security (SSL/TLS).
Following, you can find information about updating your applications to use the new certificates.

This topic can help you to determine whether any client applications use SSL/TLS to connect to your DB
instances. If they do, you can further check whether those applications require certificate verification to
connect.
Note
Some applications are configured to connect to MySQL DB instances only if they can
successfully verify the certificate on the server. For such applications, you must update your
client application trust stores to include the new CA certificates.
You can specify the following SSL modes: disabled, preferred, and required. We
recommend avoiding preferred mode: when the CA certificate doesn't exist, isn't up to date,
or is otherwise invalid, a preferred-mode connection stops using encryption and proceeds
unencrypted. Because these later client versions use OpenSSL, an expired server certificate
doesn't prevent successful connections unless the required SSL mode is specified.

After you update your CA certificates in the client application trust stores, you can rotate the certificates
on your DB instances. We strongly recommend testing these procedures in a development or staging
environment before implementing them in your production environments.

For more information about certificate rotation, see Rotating your SSL/TLS certificate (p. 2596). For
more information about downloading certificates, see Using SSL/TLS to encrypt a connection to a DB
instance (p. 2591). For information about using SSL/TLS with MySQL DB instances, see Using SSL/TLS
with a MySQL DB instance (p. 1639).

Topics
• Determining whether any applications are connecting to your MySQL DB instance using
SSL (p. 1642)
• Determining whether a client requires certificate verification to connect (p. 1643)
• Updating your application trust store (p. 1644)
• Example Java code for establishing SSL connections (p. 1644)

Determining whether any applications are connecting to your MySQL DB instance using SSL
If you are using Amazon RDS for MySQL version 5.7 or 8.0 and the Performance Schema is enabled,
run the following query to check if connections are using SSL/TLS. For information about enabling the
Performance Schema, see Performance Schema quick start in the MySQL documentation.

mysql> SELECT id, user, host, connection_type
       FROM performance_schema.threads pst
       INNER JOIN information_schema.processlist isp
       ON pst.processlist_id = isp.id;

In this sample output, you can see both your own session (admin) and an application logged in as
webapp1 are using SSL.


+----+-----------------+------------------+-----------------+
| id | user            | host             | connection_type |
+----+-----------------+------------------+-----------------+
|  8 | admin           | 10.0.4.249:42590 | SSL/TLS         |
|  4 | event_scheduler | localhost        | NULL            |
| 10 | webapp1         | 159.28.1.1:42189 | SSL/TLS         |
+----+-----------------+------------------+-----------------+
3 rows in set (0.00 sec)

Determining whether a client requires certificate verification to connect
You can check whether JDBC clients and MySQL clients require certificate verification to connect.

JDBC
The following example with MySQL Connector/J 8.0 shows one way to check an application's JDBC
connection properties to determine whether successful connections require a valid certificate. For more
information on all of the JDBC connection options for MySQL, see Configuration properties in the
MySQL documentation.

When using the MySQL Connector/J 8.0, an SSL connection requires verification against the server CA
certificate if your connection properties have sslMode set to VERIFY_CA or VERIFY_IDENTITY, as in
the following example.

Properties properties = new Properties();
properties.setProperty("sslMode", "VERIFY_IDENTITY");
properties.put("user", DB_USER);
properties.put("password", DB_PASSWORD);

Note
If you use either the MySQL Java Connector v5.1.38 or later, or the MySQL Java Connector
v8.0.9 or later to connect to your databases, even if you haven't explicitly configured your
applications to use SSL/TLS when connecting to your databases, these client drivers default to
using SSL/TLS. In addition, when using SSL/TLS, they perform partial certificate verification and
fail to connect if the database server certificate is expired.

MySQL
The following examples with the MySQL Client show two ways to check a script's MySQL connection to
determine whether successful connections require a valid certificate. For more information on all of the
connection options with the MySQL Client, see Client-side configuration for encrypted connections in
the MySQL documentation.

When using the MySQL 5.7 or MySQL 8.0 Client, an SSL connection requires verification against the
server CA certificate if for the --ssl-mode option you specify VERIFY_CA or VERIFY_IDENTITY, as in
the following example.

mysql -h mysql-database.rds.amazonaws.com -uadmin -ppassword --ssl-ca=/tmp/ssl-cert.pem --ssl-mode=VERIFY_CA


When using the MySQL 5.6 Client, an SSL connection requires verification against the server CA
certificate if you specify the --ssl-verify-server-cert option, as in the following example.

mysql -h mysql-database.rds.amazonaws.com -uadmin -ppassword --ssl-ca=/tmp/ssl-cert.pem --ssl-verify-server-cert

Updating your application trust store


For information about updating the trust store for MySQL applications, see Installing SSL certificates in
the MySQL documentation.

For information about downloading the root certificate, see Using SSL/TLS to encrypt a connection to a
DB instance (p. 2591).

For sample scripts that import certificates, see Sample script for importing certificates into your trust
store (p. 2603).
Note
When you update the trust store, you can retain older certificates in addition to adding the new
certificates.
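For Java applications that use a JKS trust store, one way to add a CA certificate is with the JDK keytool utility, as in the following sketch. The file, alias, and trust store names are examples only, and keytool imports only the first certificate in a file, so you might need to split a multiple-certificate bundle into individual PEM files first.

# File names, alias, and password are placeholders.
keytool -importcert \
    -alias rds-ca-root \
    -file rds-ca-cert.pem \
    -keystore MyTruststore.jks \
    -storepass my_truststore_password \
    -noprompt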

If you are using the mysql JDBC driver in an application, set the following properties in the application.

System.setProperty("javax.net.ssl.trustStore", certs);
System.setProperty("javax.net.ssl.trustStorePassword", "password");

When you start the application, set the following properties.

java -Djavax.net.ssl.trustStore=/path_to_truststore/MyTruststore.jks -Djavax.net.ssl.trustStorePassword=my_truststore_password com.companyName.MyApplication

Note
Specify a password other than the one shown here as a security best practice.

Example Java code for establishing SSL connections


The following code example shows how to set up the SSL connection that validates the server certificate
using JDBC.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;

public class MySQLSSLTest {

    private static final String DB_USER = "username";
    private static final String DB_PASSWORD = "password";
    // This key store has only the prod root ca.
    private static final String KEY_STORE_FILE_PATH = "file-path-to-keystore";
    private static final String KEY_STORE_PASS = "keystore-password";

    public static void test(String[] args) throws Exception {
        // Load the MySQL JDBC driver. With Connector/J 8.0, the driver class
        // com.mysql.cj.jdbc.Driver is also available.
        Class.forName("com.mysql.jdbc.Driver");

        // Point the JVM at the trust store that contains the RDS CA certificate.
        System.setProperty("javax.net.ssl.trustStore", KEY_STORE_FILE_PATH);
        System.setProperty("javax.net.ssl.trustStorePassword", KEY_STORE_PASS);

        // Require the driver to verify the server certificate and host name.
        Properties properties = new Properties();
        properties.setProperty("sslMode", "VERIFY_IDENTITY");
        properties.put("user", DB_USER);
        properties.put("password", DB_PASSWORD);

        Connection connection = null;
        Statement stmt = null;
        ResultSet rs = null;
        try {
            connection = DriverManager.getConnection(
                "jdbc:mysql://mydatabase.123456789012.us-east-1.rds.amazonaws.com:3306",
                properties);
            stmt = connection.createStatement();
            rs = stmt.executeQuery("SELECT 1 from dual");
        } finally {
            if (rs != null) {
                try {
                    rs.close();
                } catch (SQLException e) {
                }
            }
            if (stmt != null) {
                try {
                    stmt.close();
                } catch (SQLException e) {
                }
            }
            if (connection != null) {
                try {
                    connection.close();
                } catch (SQLException e) {
                    e.printStackTrace();
                }
            }
        }
        return;
    }
}

Important
After you have determined that your database connections use SSL/TLS and have updated
your application trust store, you can update your database to use the rds-ca-rsa2048-g1
certificates. For instructions, see step 3 in Updating your CA certificate by modifying your DB
instance (p. 2597).
Specify a password other than the one shown here as a security best practice.

Using Kerberos authentication for MySQL


You can use Kerberos authentication to authenticate users when they connect to your MySQL DB
instance. The DB instance works with AWS Directory Service for Microsoft Active Directory (AWS
Managed Microsoft AD) to enable Kerberos authentication. When users authenticate with a MySQL DB
instance joined to the trusting domain, authentication requests are forwarded. Forwarded requests go to
the domain directory that you create with AWS Directory Service.

Keeping all of your credentials in the same directory can save you time and effort. With this approach,
you have a centralized place for storing and managing credentials for multiple DB instances. Using a
directory can also improve your overall security profile.


Region and version availability


Feature availability and support varies across specific versions of each database engine, and across
AWS Regions. For more information on version and Region availability of Amazon RDS with Kerberos
authentication, see Kerberos authentication (p. 141).

Overview of setting up Kerberos authentication for MySQL DB instances
To set up Kerberos authentication for a MySQL DB instance, complete the following general steps,
described in more detail later:

1. Use AWS Managed Microsoft AD to create an AWS Managed Microsoft AD directory. You can use the
AWS Management Console, the AWS CLI, or the AWS Directory Service to create the directory. For
details about doing so, see Create your AWS Managed Microsoft AD directory in the AWS Directory
Service Administration Guide.
2. Create an AWS Identity and Access Management (IAM) role that uses the managed IAM policy
AmazonRDSDirectoryServiceAccess. The role allows Amazon RDS to make calls to your directory.

For the role to allow access, the AWS Security Token Service (AWS STS) endpoint must be activated
in the AWS Region for your AWS account. AWS STS endpoints are active by default in all AWS
Regions, and you can use them without any further actions. For more information, see Activating and
deactivating AWS STS in an AWS Region in the IAM User Guide.
3. Create and configure users in the AWS Managed Microsoft AD directory using the Microsoft Active
Directory tools. For more information about creating users in your Active Directory, see Manage users
and groups in AWS managed Microsoft AD in the AWS Directory Service Administration Guide.
4. Create or modify a MySQL DB instance. If you use either the CLI or RDS API in the create request,
specify a domain identifier with the Domain parameter. Use the d-* identifier that was generated
when you created your directory and the name of the role that you created.

If you modify an existing MySQL DB instance to use Kerberos authentication, set the domain and IAM
role parameters for the DB instance. Locate the DB instance in the same VPC as the domain directory.
5. Use the Amazon RDS master user credentials to connect to the MySQL DB instance. Create the user in
MySQL using the CREATE USER clause IDENTIFIED WITH 'auth_pam'. Users that you create this
way can log in to the MySQL DB instance using Kerberos authentication.

Setting up Kerberos authentication for MySQL DB instances


You use AWS Managed Microsoft AD to set up Kerberos authentication for a MySQL DB instance. To set
up Kerberos authentication, you take the following steps.

Step 1: Create a directory using AWS Managed Microsoft AD


AWS Directory Service creates a fully managed Active Directory in the AWS Cloud. When you create an
AWS Managed Microsoft AD directory, AWS Directory Service creates two domain controllers and Domain
Name System (DNS) servers on your behalf. The directory servers are created in different subnets in a
VPC. This redundancy helps make sure that your directory remains accessible even if a failure occurs.

When you create an AWS Managed Microsoft AD directory, AWS Directory Service performs the following
tasks on your behalf:

• Sets up an Active Directory within the VPC.


• Creates a directory administrator account with the user name Admin and the specified password. You
use this account to manage your directory.


Note
Be sure to save this password. AWS Directory Service doesn't store it. You can reset it, but you
can't retrieve it.
• Creates a security group for the directory controllers.

When you launch an AWS Managed Microsoft AD, AWS creates an Organizational Unit (OU) that contains
all of your directory's objects. This OU has the NetBIOS name that you typed when you created your
directory and is located in the domain root. The domain root is owned and managed by AWS.

The Admin account that was created with your AWS Managed Microsoft AD directory has permissions for
the most common administrative activities for your OU:

• Create, update, or delete users


• Add resources to your domain such as file or print servers, and then assign permissions for those
resources to users in your OU
• Create additional OUs and containers
• Delegate authority
• Restore deleted objects from the Active Directory Recycle Bin
• Run AD and DNS Windows PowerShell modules on the Active Directory Web Service

The Admin account also has rights to perform the following domain-wide activities:

• Manage DNS configurations (add, remove, or update records, zones, and forwarders)
• View DNS event logs
• View security event logs

To create a directory with AWS Managed Microsoft AD

1. Sign in to the AWS Management Console and open the AWS Directory Service console at https://console.aws.amazon.com/directoryservicev2/.
2. In the navigation pane, choose Directories and choose Set up Directory.
3. Choose AWS Managed Microsoft AD. AWS Managed Microsoft AD is the only option that you can
currently use with Amazon RDS.
4. Enter the following information:

Directory DNS name

The fully qualified name for the directory, such as corp.example.com.


Directory NetBIOS name

The short name for the directory, such as CORP.


Directory description

(Optional) A description for the directory.


Admin password

The password for the directory administrator. The directory creation process creates an
administrator account with the user name Admin and this password.

The directory administrator password can't include the word "admin." The password is case-
sensitive and must be 8–64 characters in length. It must also contain at least one character from
three of the following four categories:


• Lowercase letters (a–z)


• Uppercase letters (A–Z)
• Numbers (0–9)
• Non-alphanumeric characters (~!@#$%^&*_-+=`|\(){}[]:;"'<>,.?/)
Confirm password

The administrator password retyped.


5. Choose Next.
6. Enter the following information in the Networking section and then choose Next:

VPC

The VPC for the directory. Create the MySQL DB instance in this same VPC.
Subnets

Subnets for the directory servers. The two subnets must be in different Availability Zones.
7. Review the directory information and make any necessary changes. When the information is correct,
choose Create directory.


It takes several minutes for the directory to be created. When it has been successfully created, the Status
value changes to Active.

To see information about your directory, choose the directory name in the directory listing. Note the
Directory ID value because you need this value when you create or modify your MySQL DB instance.


Step 2: Create the IAM role for use by Amazon RDS


For Amazon RDS to call AWS Directory Service for you, an IAM role that uses the managed IAM policy
AmazonRDSDirectoryServiceAccess is required. This role allows Amazon RDS to make calls to the
AWS Directory Service.

When a DB instance is created using the AWS Management Console and the console user has the
iam:CreateRole permission, the console creates this role automatically. In this case, the role name
is rds-directoryservice-kerberos-access-role. Otherwise, you must create the IAM role
manually. When you create this IAM role, choose Directory Service, and attach the AWS managed
policy AmazonRDSDirectoryServiceAccess to it.

For more information about creating IAM roles for a service, see Creating a role to delegate permissions
to an AWS service in the IAM User Guide.
Note
The IAM role used for Windows Authentication for RDS for SQL Server can't be used for RDS for
MySQL.

Optionally, you can create policies with the required permissions instead of using the managed IAM
policy AmazonRDSDirectoryServiceAccess. In this case, the IAM role must have the following IAM
trust policy.

{
"Version": "2012-10-17",


"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": [
"directoryservice.rds.amazonaws.com",
"rds.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
}

The role must also have the following IAM role policy.

{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ds:DescribeDirectories",
"ds:AuthorizeApplication",
"ds:UnauthorizeApplication",
"ds:GetAuthorizedApplicationDetails"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
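If you create the role manually with the AWS CLI and want to use the managed policy, commands similar to the following might be used. The role name is an example, and trust-policy.json is assumed to contain the trust policy shown preceding.

# The role name and file name are placeholders.
aws iam create-role \
    --role-name rds-directoryservice-kerberos-access-role \
    --assume-role-policy-document file://trust-policy.json

# Attach the AWS managed policy that allows Amazon RDS to call AWS Directory Service.
aws iam attach-role-policy \
    --role-name rds-directoryservice-kerberos-access-role \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonRDSDirectoryServiceAccess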

Step 3: Create and configure users


You can create users with the Active Directory Users and Computers tool. This tool is part of the Active
Directory Domain Services and Active Directory Lightweight Directory Services tools. Users represent
individual people or entities that have access to your directory.

To create users in an AWS Directory Service directory, you must be connected to an Amazon EC2 instance
based on Microsoft Windows. This instance must be a member of the AWS Directory Service directory
and be logged in as a user that has privileges to create users. For more information, see Manage users
and groups in AWS Managed Microsoft AD in the AWS Directory Service Administration Guide.

Step 4: Create or modify a MySQL DB instance


Create or modify a MySQL DB instance for use with your directory. You can use the console, CLI, or RDS
API to associate a DB instance with a directory. You can do this in one of the following ways:

• Create a new MySQL DB instance using the console, the create-db-instance CLI command, or the
CreateDBInstance RDS API operation.

For instructions, see Creating an Amazon RDS DB instance (p. 300).


• Modify an existing MySQL DB instance using the console, the modify-db-instance CLI command, or the
ModifyDBInstance RDS API operation.

For instructions, see Modifying an Amazon RDS DB instance (p. 401).


• Restore a MySQL DB instance from a DB snapshot using the console, the restore-db-instance-from-db-
snapshot CLI command, or the RestoreDBInstanceFromDBSnapshot RDS API operation.

For instructions, see Restoring from a DB snapshot (p. 615).


• Restore a MySQL DB instance to a point-in-time using the console, the restore-db-instance-to-point-


in-time CLI command, or the RestoreDBInstanceToPointInTime RDS API operation.

For instructions, see Restoring a DB instance to a specified time (p. 660).

Kerberos authentication is only supported for MySQL DB instances in a VPC. The DB instance can be
in the same VPC as the directory, or in a different VPC. The DB instance must use a security group that
allows egress within the directory's VPC so the DB instance can communicate with the directory.

When you use the console to create, modify, or restore a DB instance, choose Password and Kerberos
authentication in the Database authentication section. Choose Browse Directory and then select the
directory, or choose Create a new directory.

When you use the AWS CLI or RDS API, associate a DB instance with a directory. The following
parameters are required for the DB instance to use the domain directory you created:

• For the --domain parameter, use the domain identifier ("d-*" identifier) generated when you created
the directory.
• For the --domain-iam-role-name parameter, use the role you created that uses the managed IAM
policy AmazonRDSDirectoryServiceAccess.

For example, the following CLI command modifies a DB instance to use a directory.

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --domain d-ID \
    --domain-iam-role-name role-name

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --domain d-ID ^
    --domain-iam-role-name role-name

Important
If you modify a DB instance to enable Kerberos authentication, reboot the DB instance after
making the change.

Step 5: Create Kerberos authentication MySQL logins


Use the Amazon RDS master user credentials to connect to the MySQL DB instance as you do any
other DB instance. The DB instance is joined to the AWS Managed Microsoft AD domain. Thus, you can
provision MySQL logins and users from the Active Directory users in your domain. Database permissions
are managed through standard MySQL permissions that are granted to and revoked from these logins.

You can allow an Active Directory user to authenticate with MySQL. To do this, first use the Amazon
RDS master user credentials to connect to the MySQL DB instance as with any other DB instance. After
you're logged in, create an externally authenticated user with PAM (Pluggable Authentication Modules)
in MySQL as shown following.

CREATE USER 'testuser'@'%' IDENTIFIED WITH 'auth_pam';

Replace testuser with the user name. Users (both humans and applications) from your domain can
now connect to the DB instance from a domain joined client machine using Kerberos authentication.
Important
We strongly recommend that clients use SSL/TLS connections when using PAM
authentication. If they don't use SSL/TLS connections, the password might be sent as clear text
in some cases. To require an SSL/TLS encrypted connection for your AD user, run the following
command:

UPDATE mysql.user SET ssl_type = 'any' WHERE ssl_type = '' AND PLUGIN = 'auth_pam'
and USER = 'testuser';
FLUSH PRIVILEGES;

For more information, see Using SSL/TLS with a MySQL DB instance (p. 1639).

Managing a DB instance in a domain


You can use the CLI or the RDS API to manage your DB instance and its relationship with your managed
Active Directory. For example, you can associate an Active Directory for Kerberos authentication and
disassociate an Active Directory to disable Kerberos authentication. You can also move a DB instance
from being externally authenticated by one Active Directory to another.

For example, using the Amazon RDS API, you can do the following:

• To reattempt enabling Kerberos authentication for a failed membership, use the ModifyDBInstance
API operation and specify the current membership's directory ID.
• To update the IAM role name for membership, use the ModifyDBInstance API operation and specify
the current membership's directory ID and the new IAM role.
• To disable Kerberos authentication on a DB instance, use the ModifyDBInstance API operation and
specify none as the domain parameter.
• To move a DB instance from one domain to another, use the ModifyDBInstance API operation and
specify the domain identifier of the new domain as the domain parameter.
• To list membership for each DB instance, use the DescribeDBInstances API operation.


Understanding domain membership


After you create or modify your DB instance, it becomes a member of the domain. You can view
the status of the domain membership for the DB instance by running the describe-db-instances CLI
command. The status of the DB instance can be one of the following:

• kerberos-enabled – The DB instance has Kerberos authentication enabled.


• enabling-kerberos – AWS is in the process of enabling Kerberos authentication on this DB instance.
• pending-enable-kerberos – The enabling of Kerberos authentication is pending on this DB
instance.
• pending-maintenance-enable-kerberos – AWS will attempt to enable Kerberos authentication
on the DB instance during the next scheduled maintenance window.
• pending-disable-kerberos – The disabling of Kerberos authentication is pending on this DB
instance.
• pending-maintenance-disable-kerberos – AWS will attempt to disable Kerberos authentication
on the DB instance during the next scheduled maintenance window.
• enable-kerberos-failed – A configuration problem has prevented AWS from enabling Kerberos
authentication on the DB instance. Check and fix your configuration before reissuing the DB instance
modify command.
• disabling-kerberos – AWS is in the process of disabling Kerberos authentication on this DB
instance.

A request to enable Kerberos authentication can fail because of a network connectivity issue or an
incorrect IAM role. For example, suppose that you create a DB instance or modify an existing DB instance
and the attempt to enable Kerberos authentication fails. If this happens, re-issue the modify command
or modify the newly created DB instance to join the domain.
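For example, the following AWS CLI command returns the domain membership details, including the status, for a DB instance. The instance identifier is a placeholder.

aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query "DBInstances[*].DomainMemberships"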

Connecting to MySQL with Kerberos authentication


To connect to MySQL with Kerberos authentication, you must log in using the Kerberos authentication
type.

To create a database user that you can connect to using Kerberos authentication, use an IDENTIFIED
WITH clause on the CREATE USER statement. For instructions, see Step 5: Create Kerberos
authentication MySQL logins (p. 1653).

To avoid errors, use the MariaDB mysql client. You can download MariaDB software at https://downloads.mariadb.org/.

At a command prompt, connect to one of the endpoints associated with your MySQL DB instance. Follow
the general procedures in Connecting to a DB instance running the MySQL database engine (p. 1630).
When you're prompted for the password, enter the Kerberos password associated with that user name.

Restoring a MySQL DB instance and adding it to a domain


You can restore a DB snapshot or complete a point-in-time restore for a MySQL DB instance and then
add it to a domain. After the DB instance is restored, modify the DB instance using the process explained
in Step 4: Create or modify a MySQL DB instance (p. 1651) to add the DB instance to a domain.

Kerberos authentication MySQL limitations


The following limitations apply to Kerberos authentication for MySQL:

• Only an AWS Managed Microsoft AD is supported. However, you can join RDS for MySQL DB instances
to shared Managed Microsoft AD domains owned by different accounts in the same AWS Region.


• You must reboot the DB instance after enabling the feature.


• The domain name length can't be longer than 61 characters.
• You can't enable Kerberos authentication and IAM authentication at the same time. Choose one
authentication method or the other for your MySQL DB instance.
• Don't modify the DB instance port after enabling the feature.
• Don't use Kerberos authentication with read replicas.
• If you have auto minor version upgrade turned on for a MySQL DB instance that is using Kerberos
authentication, you must turn off Kerberos authentication and then turn it back on after an automatic
upgrade. For more information about auto minor version upgrades, see Automatic minor version
upgrades for MySQL (p. 1669).
• To delete a DB instance with this feature enabled, first disable the feature. To do so, use the modify-
db-instance CLI command for the DB instance and specify none for the --domain parameter.

If you use the CLI or RDS API to delete a DB instance with this feature enabled, expect a delay.
• You can't set up a forest trust relationship between your on-premises or self-hosted Microsoft Active
Directory and the AWS Managed Microsoft AD.


Improving query performance for RDS for MySQL with Amazon RDS Optimized Reads
You can achieve faster query processing for RDS for MySQL with Amazon RDS Optimized Reads. An RDS
for MySQL DB instance or Multi-AZ DB cluster that uses RDS Optimized Reads can achieve up to 2x faster
query processing compared to a DB instance or cluster that doesn't use it.

Topics
• Overview of RDS Optimized Reads (p. 1656)
• Use cases for RDS Optimized Reads (p. 1656)
• Best practices for RDS Optimized Reads (p. 1657)
• Using RDS Optimized Reads (p. 1657)
• Monitoring DB instances that use RDS Optimized Reads (p. 1658)
• Limitations for RDS Optimized Reads (p. 1658)

Overview of RDS Optimized Reads


When you use an RDS for MySQL DB instance or Multi-AZ DB cluster that has RDS Optimized Reads
turned on, it achieves faster query performance through the use of an instance store. An instance store
provides temporary block-level storage for your DB instance or Multi-AZ DB cluster. The storage is
located on Non-Volatile Memory Express (NVMe) solid state drives (SSDs) that are physically attached
to the host server. This storage is optimized for low latency, high random I/O performance, and high
sequential read throughput.

RDS Optimized Reads is turned on by default when a DB instance or Multi-AZ DB cluster uses a DB
instance class with an instance store, such as db.m5d or db.m6gd. With RDS Optimized Reads, some
temporary objects are stored on the instance store. These temporary objects include internal temporary
files, internal on-disk temp tables, memory map files, and binary log (binlog) cache files. For more
information about the instance store, see Amazon EC2 instance store in the Amazon Elastic Compute
Cloud User Guide for Linux Instances.

The workloads that generate temporary objects in MySQL for query processing can take advantage
of the instance store for faster query processing. This type of workload includes queries involving
sorts, hash aggregations, high-load joins, Common Table Expressions (CTEs), and queries on unindexed
columns. These instance store volumes provide higher IOPS and performance, regardless of the storage
configurations used for persistent Amazon EBS storage. Because RDS Optimized Reads offloads
operations on temporary objects to the instance store, the input/output operations per second (IOPS)
or throughput of the persistent storage (Amazon EBS) can now be used for operations on persistent
objects. These operations include regular data file reads and writes, and background engine operations,
such as flushing and insert buffer merges.
Note
Both manual and automated RDS snapshots only contain engine files for persistent objects. The
temporary objects created in the instance store aren't included in RDS snapshots.

Use cases for RDS Optimized Reads


If you have workloads that rely heavily on temporary objects, such as internal tables or files, for their
query execution, then you can benefit from turning on RDS Optimized Reads. The following use cases are
candidates for RDS Optimized Reads:

• Applications that run analytical queries with complex common table expressions (CTEs), derived tables,
and grouping operations


• Read replicas that serve heavy read traffic with unoptimized queries
• Applications that run on-demand or dynamic reporting queries that involve complex operations, such
as queries with GROUP BY and ORDER BY clauses
• Workloads that use internal temporary tables for query processing

You can monitor the engine status variable created_tmp_disk_tables to determine the number of
disk-based temporary tables created on your DB instance, as shown in the example following this list.
• Applications that create large temporary tables, either directly or in procedures, to store intermediate
results
• Database queries that perform grouping or ordering on non-indexed columns
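For example, the following statement shows one way to check the created_tmp_disk_tables status variable mentioned in the preceding list.

SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';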

Best practices for RDS Optimized Reads


Use the following best practices for RDS Optimized Reads:

• Add retry logic for read-only queries in case they fail because the instance store is full during the
execution.
• Monitor the storage space available on the instance store with the CloudWatch metric
FreeLocalStorage. If the instance store is reaching its limit because of workload on the DB instance,
modify the DB instance to use a larger DB instance class.
• When your DB instance or Multi-AZ DB cluster has sufficient memory but is still reaching the storage
limit on the instance store, increase the binlog_cache_size value to maintain the session-specific
binlog entries in memory. This configuration prevents writing the binlog entries to temporary binlog
cache files on disk.

The binlog_cache_size parameter is session-specific. You can change the value for each new
session. The setting for this parameter can increase the memory utilization on the DB instance during
peak workload. Therefore, consider increasing the parameter value based on the workload pattern of
your application and available memory on the DB instance.
• Use the default value of MIXED for the binlog_format. Depending on the size of the transactions,
setting binlog_format to ROW can result in large binlog cache files on the instance store.
• Set the internal_tmp_mem_storage_engine parameter to TempTable, and set the
temptable_max_mmap parameter to match the size of the available storage on the instance store.
• Avoid performing bulk changes in a single transaction. These types of transactions can generate large
binlog cache files on the instance store and can cause issues when the instance store is full. Consider
splitting writes into multiple small transactions to minimize storage use for binlog cache files.
• Use the default value of ABORT_SERVER for the binlog_error_action parameter. Doing so avoids
issues with the binary logging on DB instances with backups enabled.
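As a quick way to review the current values of the settings discussed in these best practices, you can query the Performance Schema, as in the following example.

SELECT VARIABLE_NAME, VARIABLE_VALUE
FROM performance_schema.global_variables
WHERE VARIABLE_NAME IN ('binlog_cache_size', 'binlog_format', 'binlog_error_action',
                        'internal_tmp_mem_storage_engine', 'temptable_max_mmap');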

Using RDS Optimized Reads


When you provision an RDS for MySQL DB instance with one of the following DB instance classes
in a Single-AZ DB instance deployment, Multi-AZ DB instance deployment, or Multi-AZ DB cluster
deployment, the DB instance automatically uses RDS Optimized Reads.

To turn on RDS Optimized Reads, do one of the following:

• Create an RDS for MySQL DB instance or Multi-AZ DB cluster using one of these DB instance classes.
For more information, see Creating an Amazon RDS DB instance (p. 300).
• Modify an existing RDS for MySQL DB instance or Multi-AZ DB cluster to use one of these DB instance
classes. For more information, see Modifying an Amazon RDS DB instance (p. 401).


RDS Optimized Reads is available in all AWS Regions where one or more of the DB instance classes
with local NVMe SSD storage are supported. For information about DB instance classes, see the section
called “DB instance classes” (p. 11).

DB instance class availability differs for AWS Regions. To determine whether a DB instance class is
supported in a specific AWS Region, see the section called “Determining DB instance class support in
AWS Regions” (p. 68).

If you don't want to use RDS Optimized Reads, modify your DB instance or Multi-AZ DB cluster so that it
doesn't use a DB instance class that supports the feature.

Monitoring DB instances that use RDS Optimized Reads
You can monitor DB instances that use RDS Optimized Reads with the following CloudWatch metrics:

• FreeLocalStorage
• ReadIOPSLocalStorage
• ReadLatencyLocalStorage
• ReadThroughputLocalStorage
• WriteIOPSLocalStorage
• WriteLatencyLocalStorage
• WriteThroughputLocalStorage

These metrics provide data about available instance store storage, IOPS, and throughput. For
more information about these metrics, see Amazon CloudWatch instance-level metrics for Amazon
RDS (p. 806).
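For example, the following AWS CLI command retrieves recent FreeLocalStorage values for a DB instance. The instance identifier and time range are placeholders only.

aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name FreeLocalStorage \
    --dimensions Name=DBInstanceIdentifier,Value=mydbinstance \
    --start-time 2023-06-01T00:00:00Z \
    --end-time 2023-06-01T01:00:00Z \
    --period 300 \
    --statistics Minimum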

Limitations for RDS Optimized Reads


The following limitations apply to RDS Optimized Reads:

• RDS Optimized Reads is supported for RDS for MySQL version 8.0.28 and higher. For information
about RDS for MySQL versions, see MySQL on Amazon RDS versions (p. 1627).
• You can't change the location of temporary objects to persistent storage (Amazon EBS) on the DB
instance classes that support RDS Optimized Reads.
• When binary logging is enabled on a DB instance, the maximum transaction size is limited by the
size of the instance store. In MySQL, any session that requires more storage than the value of
binlog_cache_size writes transaction changes to temporary binlog cache files, which are created
on the instance store.
• Transactions can fail when the instance store is full.


Improving write performance with Amazon RDS Optimized Writes for MySQL
You can improve the performance of write transactions with Amazon RDS Optimized Writes for MySQL.
When your RDS for MySQL database uses RDS Optimized Writes, it can achieve up to two times higher
write transaction throughput.

Topics
• Overview of RDS Optimized Writes (p. 1659)
• Using RDS Optimized Writes (p. 1660)
• Limitations for RDS Optimized Writes (p. 1663)

Overview of RDS Optimized Writes


When you turn on Amazon RDS Optimized Writes, your RDS for MySQL databases write only once when
flushing data to durable storage without the need for the doublewrite buffer. The databases continue to
provide ACID property protections for reliable database transactions, along with improved performance.

Relational databases, like MySQL, provide the ACID properties of atomicity, consistency, isolation, and
durability for reliable database transactions. To help provide these properties, MySQL uses a data
storage area called the doublewrite buffer that prevents partial page write errors. These errors occur
when there is a hardware failure while the database is updating a page, such as in the case of a power
outage. A MySQL database can detect partial page writes and recover with a copy of the page in the
doublewrite buffer. While this technique provides protection, it also results in extra write operations.
For more information about the MySQL doublewrite buffer, see Doublewrite Buffer in the MySQL
documentation.

With RDS Optimized Writes turned on, RDS for MySQL databases write only once when flushing data to
durable storage without using the doublewrite buffer. RDS Optimized Writes is useful if you run write-
heavy workloads on your RDS for MySQL databases. Examples of databases with write-heavy workloads
include ones that support digital payments, financial trading, and gaming applications.

These databases run on DB instance classes that use the AWS Nitro System. Because of the hardware
configuration in these systems, the database can write 16-KiB pages directly to data files reliably and
durably in one step. The AWS Nitro System makes RDS Optimized Writes possible.

You can set the new database parameter rds.optimized_writes to control the RDS Optimized Writes
feature for RDS for MySQL databases. Access this parameter in the DB parameter groups of RDS for
MySQL version 8.0. Set the parameter using the following values:

• AUTO – Turn on RDS Optimized Writes if the database supports it. Turn off RDS Optimized Writes if the
database doesn't support it. This setting is the default.
• OFF – Turn off RDS Optimized Writes even if the database supports it.

If you migrate an RDS for MySQL database that is configured to use RDS Optimized Writes to a DB
instance class that doesn't support the feature, RDS automatically turns off RDS Optimized Writes for the
database.

When RDS Optimized Writes is turned off, the database uses the MySQL doublewrite buffer.

To determine whether an RDS for MySQL database is using RDS Optimized Writes, view the current
value of the innodb_doublewrite parameter for the database. If the database is using RDS Optimized
Writes, this parameter is set to FALSE (0).
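For example, you can run the following statement on the database. A value of FALSE (0, or OFF) indicates that the doublewrite buffer is turned off and RDS Optimized Writes is in use.

SHOW GLOBAL VARIABLES LIKE 'innodb_doublewrite';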


Using RDS Optimized Writes


You can turn on RDS Optimized Writes when you create an RDS for MySQL database with the RDS
console, the AWS CLI, or the RDS API. RDS Optimized Writes is turned on automatically when both of the
following conditions apply during database creation:

• You specify a DB engine version and DB instance class that support RDS Optimized Writes.
• RDS Optimized Writes is supported for RDS for MySQL version 8.0.30 and higher. For information
about RDS for MySQL versions, see MySQL on Amazon RDS versions (p. 1627).
• RDS Optimized Writes is supported for RDS for MySQL databases that use the following DB instance
classes:
• db.m7g
• db.m6g
• db.m6gd
• db.m6i
• db.m5d
• db.r7g
• db.r6g
• db.r6gd
• db.r6i
• db.r5
• db.r5b
• db.r5d
• db.x2iedn

For information about DB instance classes, see the section called “DB instance classes” (p. 11).

DB instance class availability differs for AWS Regions. To determine whether a DB instance class is
supported in a specific AWS Region, see the section called “Determining DB instance class support in
AWS Regions” (p. 68).
• In the parameter group associated with the database, the rds.optimized_writes parameter is set
to AUTO. In default parameter groups, this parameter is always set to AUTO.

If you want to use a DB engine version and DB instance class that support RDS Optimized Writes, but you
don't want to use this feature, then specify a custom parameter group when you create the database. In
this parameter group, set the rds.optimized_writes parameter to OFF. If you want the database to
use RDS Optimized Writes later, you can set the parameter to AUTO to turn it on. For information about
creating custom parameter groups and setting parameters, see Working with parameter groups (p. 347).
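For example, the following AWS CLI commands show one way to create such a custom parameter group and turn the feature off in it. The parameter group name is a placeholder, and the pending-reboot apply method is used on the assumption that the parameter is static.

# The parameter group name is a placeholder.
aws rds create-db-parameter-group \
    --db-parameter-group-name my-mysql80-parameter-group \
    --db-parameter-group-family mysql8.0 \
    --description "Parameter group with RDS Optimized Writes turned off"

aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mysql80-parameter-group \
    --parameters "ParameterName=rds.optimized_writes,ParameterValue=OFF,ApplyMethod=pending-reboot"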

For information about creating a DB instance, see Creating an Amazon RDS DB instance (p. 300).

Console
When you use the RDS console to create an RDS for MySQL database, you can filter for the DB engine
versions and DB instance classes that support RDS Optimized Writes. After you turn on the filters, you
can choose from the available DB engine versions and DB instance classes.

To choose a DB engine version that supports RDS Optimized Writes, filter for the RDS for MySQL DB
engine versions that support it in Engine version, and then choose a version.


In the Instance configuration section, filter for the DB instance classes that support RDS Optimized
Writes, and then choose a DB instance class.

After you make these selections, you can choose other settings that meet your requirements and finish
creating the RDS for MySQL database with the console.

AWS CLI
To create a DB instance by using the AWS CLI, use the create-db-instance command. Make sure the
--engine-version and --db-instance-class values support RDS Optimized Writes. In addition, make
sure the parameter group associated with the DB instance has the rds.optimized_writes parameter
set to AUTO. This example associates the default parameter group with the DB instance.

Example Creating a DB instance that uses RDS Optimized Writes

For Linux, macOS, or Unix:

aws rds create-db-instance \
    --db-instance-identifier mydbinstance \
    --engine mysql \
    --engine-version 8.0.30 \
    --db-instance-class db.r5b.large \
    --manage-master-user-password \
    --master-username admin \
    --allocated-storage 200

For Windows:

aws rds create-db-instance ^
    --db-instance-identifier mydbinstance ^
    --engine mysql ^
    --engine-version 8.0.30 ^
    --db-instance-class db.r5b.large ^
    --manage-master-user-password ^
    --master-username admin ^
    --allocated-storage 200

RDS API
You can create a DB instance using the CreateDBInstance operation. When you use this operation, make
sure the EngineVersion and DBInstanceClass values support RDS Optimized Writes. In addition,
make sure the parameter group associated with the DB instance has the rds.optimized_writes
parameter set to AUTO.


Limitations for RDS Optimized Writes


The following limitations apply to RDS Optimized Writes:

• You can only modify a database to turn on RDS Optimized Writes if the database was created with a
DB engine version and DB instance class that support the feature. In this case, if RDS Optimized Writes
is turned off for the database, you can turn it on by setting the rds.optimized_writes parameter
to AUTO. For more information, see Using RDS Optimized Writes (p. 1660).
• You can only modify a database to turn on RDS Optimized Writes if the database was created after
the feature was released. The underlying file system format and organization that RDS Optimized
Writes needs is incompatible with the file system format of databases created before the feature was
released. By extension, you can't use any snapshots of previously created instances with this feature
because the snapshots use the older, incompatible file system.
Important
To convert the old format to the new format, you need to perform a full database migration.
If you want to use this feature on DB instances that were created before the feature was
released, create a new empty DB instance and manually migrate your older DB instance to
the newer DB instance. You can migrate your older DB instance using the native mysqldump
tool, replication, or AWS Database Migration Service. For more information, see mysqldump
— A Database Backup Program in the MySQL 8.0 Reference Manual, Working with MySQL
replication in Amazon RDS (p. 1708), and the AWS Database Migration Service User Guide. For
help with migrating using AWS tools, contact support.
• When you are restoring an RDS for MySQL database from a snapshot, you can only turn on RDS
Optimized Writes for the database if all of the following conditions apply:
• The snapshot was created from a database that supports RDS Optimized Writes.
• The snapshot was created from a database that was created after RDS Optimized Writes was
released.
• The snapshot is restored to a database that supports RDS Optimized Writes.
• The restored database is associated with a parameter group that has the rds.optimized_writes
parameter set to AUTO.


Upgrading the MySQL DB engine


When Amazon RDS supports a new version of a database engine, you can upgrade your DB instances to
the new version. There are two kinds of upgrades for MySQL DB instances: major version upgrades and
minor version upgrades.

Major version upgrades can contain database changes that are not backward-compatible with existing
applications. As a result, you must manually perform major version upgrades of your DB instances. You
can initiate a major version upgrade by modifying your DB instance. However, before you perform a
major version upgrade, we recommend that you follow the instructions in Major version upgrades for
MySQL (p. 1665).

In contrast, minor version upgrades include only changes that are backward-compatible with existing
applications. You can initiate a minor version upgrade manually by modifying your DB instance. Or
you can enable the Auto minor version upgrade option when creating or modifying a DB instance.
Doing so means that your DB instance is automatically upgraded after Amazon RDS tests and approves
the new version. For information about performing an upgrade, see Upgrading a DB instance engine
version (p. 429).
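For example, a command similar to the following starts an upgrade by modifying the DB instance. The instance identifier and target version are placeholders only; --allow-major-version-upgrade is needed only for major version upgrades, and without --apply-immediately the upgrade occurs during the next maintenance window.

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --engine-version 8.0.32 \
    --allow-major-version-upgrade \
    --apply-immediately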

If your MySQL DB instance is using read replicas, you must upgrade all of the read replicas before
upgrading the source instance. If your DB instance is in a Multi-AZ deployment, both the primary and
standby replicas are upgraded. Your DB instance will not be available until the upgrade is complete.

Database engine upgrades require downtime. The duration of the downtime varies based on the size of
your DB instance.
Tip
You can minimize the downtime required for DB instance upgrade by using a blue/green
deployment. For more information, see Using Amazon RDS Blue/Green Deployments for
database updates (p. 566).

Topics
• Overview of upgrading (p. 1664)
• Major version upgrades for MySQL (p. 1665)
• Testing an upgrade (p. 1669)
• Upgrading a MySQL DB instance (p. 1669)
• Automatic minor version upgrades for MySQL (p. 1669)
• Using a read replica to reduce downtime when upgrading a MySQL database (p. 1671)

Overview of upgrading
When you use the AWS Management Console to upgrade a DB instance, it shows the valid upgrade
targets for the DB instance. You can also use the following AWS CLI command to identify the valid
upgrade targets for a DB instance:

For Linux, macOS, or Unix:

aws rds describe-db-engine-versions \
    --engine mysql \
    --engine-version version-number \
    --query "DBEngineVersions[*].ValidUpgradeTarget[*].{EngineVersion:EngineVersion}" --output text

For Windows:


aws rds describe-db-engine-versions ^
    --engine mysql ^
    --engine-version version-number ^
    --query "DBEngineVersions[*].ValidUpgradeTarget[*].{EngineVersion:EngineVersion}" --output text

For example, to identify the valid upgrade targets for a MySQL version 8.0.28 DB instance, run the
following AWS CLI command:

For Linux, macOS, or Unix:

aws rds describe-db-engine-versions \
    --engine mysql \
    --engine-version 8.0.28 \
    --query "DBEngineVersions[*].ValidUpgradeTarget[*].{EngineVersion:EngineVersion}" --output text

For Windows:

aws rds describe-db-engine-versions ^
    --engine mysql ^
    --engine-version 8.0.28 ^
    --query "DBEngineVersions[*].ValidUpgradeTarget[*].{EngineVersion:EngineVersion}" --output text

Amazon RDS takes two or more DB snapshots during the upgrade process. Amazon RDS takes up to
two snapshots of the DB instance before making any upgrade changes. If the upgrade doesn't work for
your databases, you can restore one of these snapshots to create a DB instance running the old version.
Amazon RDS takes another snapshot of the DB instance when the upgrade completes. Amazon RDS
takes these snapshots regardless of whether AWS Backup manages the backups for the DB instance.
Note
Amazon RDS only takes DB snapshots if you have set the backup retention period for your DB
instance to a number greater than 0. To change your backup retention period, see Modifying an
Amazon RDS DB instance (p. 401).

After the upgrade is complete, you can't revert to the previous version of the database engine. If you
want to return to the previous version, restore the first DB snapshot taken to create a new DB instance.

You control when to upgrade your DB instance to a new version supported by Amazon RDS. This level of
control helps you maintain compatibility with specific database versions and test new versions with your
application before deploying in production. When you are ready, you can perform version upgrades at
the times that best fit your schedule.

If your DB instance uses read replicas, you must upgrade all of the read replicas before upgrading
the source instance.

If your DB instance is in a Multi-AZ deployment, both the primary and standby DB instances are
upgraded. The primary and standby DB instances are upgraded at the same time and you will experience
an outage until the upgrade is complete. The time for the outage varies based on your database engine,
engine version, and the size of your DB instance.

Major version upgrades for MySQL


Amazon RDS supports the following in-place upgrades for major versions of the MySQL database engine:

• MySQL 5.6 to MySQL 5.7


• MySQL 5.7 to MySQL 8.0


Note
You can only create MySQL version 5.7 and 8.0 DB instances with latest-generation and current-
generation DB instance classes, in addition to the db.m3 previous-generation DB instance class.
In some cases, you want to upgrade a MySQL version 5.6 DB instance running on a previous-
generation DB instance class (other than db.m3) to a MySQL version 5.7 DB instance. In
these cases, first modify the DB instance to use a latest-generation or current-generation DB
instance class. After you do this, you can then modify the DB instance to use the MySQL version
5.7 database engine. For information on Amazon RDS DB instance classes, see DB instance
classes (p. 11).

Topics
• Overview of MySQL major version upgrades (p. 1666)
• Upgrades to MySQL version 5.7 might be slow (p. 1666)
• Prechecks for upgrades from MySQL 5.7 to 8.0 (p. 1667)
• Rollback after failure to upgrade from MySQL 5.7 to 8.0 (p. 1668)

Overview of MySQL major version upgrades


Major version upgrades can contain database changes that are not backward-compatible with existing
applications. As a result, Amazon RDS doesn't apply major version upgrades automatically; you must
manually modify your DB instance. We recommend that you thoroughly test any upgrade before
applying it to your production instances.

To perform a major version upgrade for a MySQL version 5.6 DB instance on Amazon RDS to MySQL
version 5.7 or later, first perform any available OS updates. After OS updates are complete, upgrade to
each major version: 5.6 to 5.7 and then 5.7 to 8.0. MySQL DB instances created before April 24, 2014,
show an available OS update until the update has been applied. For more information on OS updates,
see Applying updates for a DB instance (p. 421).

During a major version upgrade of MySQL, Amazon RDS runs the MySQL binary mysql_upgrade to
upgrade tables, if necessary. Also, Amazon RDS empties the slow_log and general_log tables during
a major version upgrade. To preserve log information, save the log contents before the major version
upgrade.

MySQL major version upgrades typically complete in about 10 minutes. Some upgrades might take
longer because of the DB instance class size or because the instance doesn't follow certain operational
guidelines in Best practices for Amazon RDS (p. 286). If you upgrade a DB instance from the Amazon RDS
console, the status of the DB instance indicates when the upgrade is complete. If you upgrade using the
AWS Command Line Interface (AWS CLI), use the describe-db-instances command and check the Status
value.
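
For example, the following AWS CLI sketch checks the status of a DB instance during an upgrade. The instance identifier mydb is a placeholder for your own identifier.

aws rds describe-db-instances \
    --db-instance-identifier mydb \
    --query "DBInstances[0].DBInstanceStatus" \
    --output text

The command returns a value such as upgrading or available.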

Upgrades to MySQL version 5.7 might be slow


MySQL version 5.6.4 introduced a new date and time format for the datetime, time, and timestamp
columns that allows fractional components in date and time values. When upgrading a DB instance to
MySQL version 5.7, MySQL forces the conversion of all date and time column types to the new format.

Because this conversion rebuilds your tables, it might take a considerable amount of time to complete
the DB instance upgrade. The forced conversion occurs for any DB instances that are running a version
before MySQL version 5.6.4. It also occurs for any DB instances that were upgraded from a version before
MySQL version 5.6.4 to a version other than 5.7.

If your DB instance runs a version before MySQL version 5.6.4, or was upgraded from a version before
5.6.4, we recommend an extra step. In these cases, we recommend that you convert the datetime,
time, and timestamp columns in your database before upgrading your DB instance to MySQL version
5.7. This conversion can significantly reduce the amount of time required to upgrade the DB instance to
MySQL version 5.7. To upgrade your date and time columns to the new format, issue the ALTER TABLE
<table_name> FORCE; command for each table that contains date or time columns. Because altering
a table locks the table as read-only, we recommend that you perform this update during a maintenance
window.

To find all tables in your database that have datetime, time, or timestamp columns and create an
ALTER TABLE <table_name> FORCE; command for each table, use the following query.

SET show_old_temporals = ON;
SELECT table_schema, table_name, column_name, column_type
FROM information_schema.columns
WHERE column_type LIKE '%/* 5.5 binary format */';
SET show_old_temporals = OFF;
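
If you prefer to generate the ALTER TABLE statements themselves, the following is a possible sketch that wraps a similar query in the mysql command line client. The endpoint, user name, and output file name are placeholders, and the sketch assumes that you can connect to the DB instance from the machine where you run it.

mysql -h <endpoint> -u <admin_user> -p -N -B -e "
  SET show_old_temporals = ON;
  SELECT DISTINCT CONCAT('ALTER TABLE ', table_schema, '.', table_name, ' FORCE;')
  FROM information_schema.columns
  WHERE column_type LIKE '%/* 5.5 binary format */';" > alter_temporals.sql

Review the generated alter_temporals.sql file and run it during a maintenance window.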

Prechecks for upgrades from MySQL 5.7 to 8.0


MySQL 8.0 includes a number of incompatibilities with MySQL 5.7. These incompatibilities can cause
problems during an upgrade from MySQL 5.7 to MySQL 8.0. So, some preparation might be required on
your database for the upgrade to be successful. The following is a general list of these incompatibilities:

• There must be no tables that use obsolete data types or functions.


• There must be no orphan *.frm files.
• Triggers must not have a missing or empty definer or an invalid creation context.
• There must be no partitioned table that uses a storage engine that does not have native partitioning
support.
• There must be no keyword or reserved word violations. Some keywords might be reserved in MySQL
8.0 that were not reserved previously.

For more information, see Keywords and reserved words in the MySQL documentation.
• There must be no tables in the MySQL 5.7 mysql system database that have the same name as a table
used by the MySQL 8.0 data dictionary.
• There must be no obsolete SQL modes defined in your sql_mode system variable setting.
• There must be no tables or stored procedures with individual ENUM or SET column elements that
exceed 255 characters or 1020 bytes in length.
• Before upgrading to MySQL 8.0.13 or higher, there must be no table partitions that reside in shared
InnoDB tablespaces.
• There must be no queries and stored program definitions from MySQL 8.0.12 or lower that use ASC or
DESC qualifiers for GROUP BY clauses.
• Your MySQL 5.7 installation must not use features that are not supported in MySQL 8.0.

For more information, see Features removed in MySQL 8.0 in the MySQL documentation.
• There must be no foreign key constraint names longer than 64 characters.
• For improved Unicode support, consider converting objects that use the utf8mb3 charset to use
the utf8mb4 charset. The utf8mb3 character set is deprecated. Also, consider using utf8mb4 for
character set references instead of utf8, because currently utf8 is an alias for the utf8mb3 charset.

For more information, see The utf8mb3 character set (3-byte UTF-8 unicode encoding) in the MySQL
documentation.

When you start an upgrade from MySQL 5.7 to 8.0, Amazon RDS runs prechecks automatically to detect
these incompatibilities. For information about upgrading to MySQL 8.0, see Upgrading MySQL in the
MySQL documentation.

These prechecks are mandatory. You can't choose to skip them. The prechecks provide the following
benefits:


• They enable you to avoid unplanned downtime during the upgrade.


• If there are incompatibilities, Amazon RDS prevents the upgrade and provides a log for you to learn
about them. You can then use the log to prepare your database for the upgrade to MySQL 8.0 by
reducing the incompatibilities. For detailed information about removing incompatibilities, see
Preparing your installation for upgrade in the MySQL documentation and Upgrading to MySQL 8.0?
Here is what you need to know... on the MySQL Server Blog.

The prechecks include some that are included with MySQL and some that were created specifically by
the Amazon RDS team. For information about the prechecks provided by MySQL, see Upgrade checker
utility.
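
If you want to run the MySQL-provided checks yourself before starting the upgrade, one option is the upgrade checker in MySQL Shell. The following is only a sketch; the endpoint, user name, and target version are placeholders, and the exact invocation can vary between MySQL Shell releases.

mysqlsh -- util check-for-server-upgrade <admin_user>@<endpoint>:3306 \
    --target-version=8.0.28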

The prechecks run before the DB instance is stopped for the upgrade, meaning that they don't cause
any downtime when they run. If the prechecks find an incompatibility, Amazon RDS automatically
cancels the upgrade before the DB instance is stopped. Amazon RDS also generates an event for the
incompatibility. For more information about Amazon RDS events, see Working with Amazon RDS event
notification (p. 855).

Amazon RDS records detailed information about each incompatibility in the log file
PrePatchCompatibility.log. In most cases, the log entry includes a link to the MySQL
documentation for correcting the incompatibility. For more information about viewing log files, see
Viewing and listing database log files (p. 895).
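
For example, you can list the log files for the instance and download the precheck log with the AWS CLI, as in the following sketch. The instance identifier is a placeholder.

aws rds describe-db-log-files --db-instance-identifier mydb

aws rds download-db-log-file-portion \
    --db-instance-identifier mydb \
    --log-file-name PrePatchCompatibility.log \
    --starting-token 0 \
    --output text > PrePatchCompatibility.log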

Due to the nature of the prechecks, they analyze the objects in your database. This analysis results in
resource consumption and increases the time for the upgrade to complete.
Note
Amazon RDS runs all of these prechecks only for an upgrade from MySQL 5.7 to MySQL 8.0. For
an upgrade from MySQL 5.6 to MySQL 5.7, prechecks are limited to confirming that there are no
orphan tables and that there is enough storage space to rebuild tables. Prechecks aren't run for
upgrades to releases lower than MySQL 5.7.

Rollback after failure to upgrade from MySQL 5.7 to 8.0


When you upgrade a DB instance from MySQL version 5.7 to MySQL version 8.0, the upgrade can fail.
In particular, it can fail if the data dictionary contains incompatibilities that weren't captured by the
prechecks. In this case, the database fails to start up successfully in the new MySQL 8.0 version. At this
point, Amazon RDS rolls back the changes performed for the upgrade. After the rollback, the MySQL DB
instance is running MySQL version 5.7. When an upgrade fails and is rolled back, Amazon RDS generates
an event with the event ID RDS-EVENT-0188.

Typically, an upgrade fails because there are incompatibilities in the metadata between the databases
in your DB instance and the target MySQL version. When an upgrade fails, you can view the details
about these incompatibilities in the upgradeFailure.log file. Resolve the incompatibilities before
attempting to upgrade again.

During an unsuccessful upgrade attempt and rollback, your DB instance is restarted. Any pending
parameter changes are applied during the restart and persist after the rollback.

For more information about upgrading to MySQL 8.0, see the following topics in the MySQL
documentation:

• Preparing Your Installation for Upgrade


• Upgrading to MySQL 8.0? Here is what you need to know…

Note
Currently, automatic rollback after upgrade failure is supported only for MySQL 5.7 to 8.0 major
version upgrades.


Testing an upgrade
Before you perform a major version upgrade on your DB instance, thoroughly test your database for
compatibility with the new version. In addition, thoroughly test all applications that access the database
for compatibility with the new version. We recommend that you use the following procedure.

To test a major version upgrade

1. Review the upgrade documentation for the new version of the database engine to see if there are
compatibility issues that might affect your database or applications:

• Changes in MySQL 5.6


• Changes in MySQL 5.7
• Changes in MySQL 8.0
2. If your DB instance is a member of a custom DB parameter group, create a new DB parameter group
with your existing settings that is compatible with the new major version. Specify the new DB
parameter group when you upgrade your test instance, so your upgrade testing ensures that it works
correctly. For more information about creating a DB parameter group, see Working with parameter
groups (p. 347).
3. Create a DB snapshot of the DB instance to be upgraded. For more information, see Creating a DB
snapshot (p. 613).
4. Restore the DB snapshot to create a new test DB instance. For more information, see Restoring from
a DB snapshot (p. 615).
5. Modify this new test DB instance to upgrade it to the new version, using one of the methods
detailed following. If you created a new parameter group in step 2, specify that parameter group.
6. Evaluate the storage used by the upgraded instance to determine if the upgrade requires additional
storage.
7. Run as many of your quality assurance tests against the upgraded DB instance as needed to ensure
that your database and application work correctly with the new version. Implement any new tests
needed to evaluate the impact of any compatibility issues that you identified in step 1. Test all
stored procedures and functions. Direct test versions of your applications to the upgraded DB
instance.
8. If all tests pass, then perform the upgrade on your production DB instance. We recommend that
you don't allow write operations to the DB instance until you confirm that everything is working
correctly.
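
The following AWS CLI sketch outlines steps 3 through 5 of this procedure. The instance identifiers, parameter group name, and engine version are placeholders; substitute values that match your environment.

# Step 3: Create a DB snapshot of the DB instance to be upgraded.
aws rds create-db-snapshot \
    --db-instance-identifier mydb \
    --db-snapshot-identifier mydb-pre-upgrade
aws rds wait db-snapshot-available --db-snapshot-identifier mydb-pre-upgrade

# Step 4: Restore the snapshot to create a test DB instance.
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier mydb-upgrade-test \
    --db-snapshot-identifier mydb-pre-upgrade
aws rds wait db-instance-available --db-instance-identifier mydb-upgrade-test

# Step 5: Upgrade the test DB instance to the new major version.
aws rds modify-db-instance \
    --db-instance-identifier mydb-upgrade-test \
    --engine-version 8.0.32 \
    --db-parameter-group-name my-mysql80-params \
    --allow-major-version-upgrade \
    --apply-immediately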

Upgrading a MySQL DB instance


For information about manually or automatically upgrading a MySQL DB instance, see Upgrading a DB
instance engine version (p. 429).

Automatic minor version upgrades for MySQL


If you specify the following settings when creating or modifying a DB instance, you can have your DB
instance automatically upgraded.

• The Auto minor version upgrade setting is enabled.


• The Backup retention period setting is greater than 0.

In the AWS Management Console, these settings are under Additional configuration. The following
image shows the Auto minor version upgrade setting.


For more information about these settings, see Settings for DB instances (p. 402).

For some RDS for MySQL major versions in some AWS Regions, one minor version is designated
by RDS as the automatic upgrade version. After a minor version has been tested and approved by
Amazon RDS, the minor version upgrade occurs automatically during your maintenance window. RDS
doesn't automatically set newer released minor versions as the automatic upgrade version. Before RDS
designates a newer automatic upgrade version, several criteria are considered, such as the following:

• Known security issues


• Bugs in the MySQL community version
• Overall fleet stability since the minor version was released

You can use the following AWS CLI command to determine the current automatic minor upgrade target
version for a specified MySQL minor version in a specific AWS Region.

For Linux, macOS, or Unix:

aws rds describe-db-engine-versions \
    --engine mysql \
    --engine-version minor-version \
    --region region \
    --query "DBEngineVersions[*].ValidUpgradeTarget[*].{AutoUpgrade:AutoUpgrade,EngineVersion:EngineVersion}" \
    --output text

For Windows:

aws rds describe-db-engine-versions ^
    --engine mysql ^
    --engine-version minor-version ^
    --region region ^
    --query "DBEngineVersions[*].ValidUpgradeTarget[*].{AutoUpgrade:AutoUpgrade,EngineVersion:EngineVersion}" ^
    --output text

For example, the following AWS CLI command determines the automatic minor upgrade target for
MySQL minor version 8.0.11 in the US East (Ohio) AWS Region (us-east-2).


For Linux, macOS, or Unix:

aws rds describe-db-engine-versions \
    --engine mysql \
    --engine-version 8.0.11 \
    --region us-east-2 \
    --query "DBEngineVersions[*].ValidUpgradeTarget[*].{AutoUpgrade:AutoUpgrade,EngineVersion:EngineVersion}" \
    --output table

For Windows:

aws rds describe-db-engine-versions ^
    --engine mysql ^
    --engine-version 8.0.11 ^
    --region us-east-2 ^
    --query "DBEngineVersions[*].ValidUpgradeTarget[*].{AutoUpgrade:AutoUpgrade,EngineVersion:EngineVersion}" ^
    --output table

Your output is similar to the following.

----------------------------------
| DescribeDBEngineVersions |
+--------------+-----------------+
| AutoUpgrade | EngineVersion |
+--------------+-----------------+
| False | 8.0.15 |
| False | 8.0.16 |
| False | 8.0.17 |
| False | 8.0.19 |
| False | 8.0.20 |
| False | 8.0.21 |
| True | 8.0.23 |
| False | 8.0.25 |
+--------------+-----------------+

In this example, the AutoUpgrade value is True for MySQL version 8.0.23. So, the automatic minor
upgrade target is MySQL version 8.0.23.

A MySQL DB instance is automatically upgraded during your maintenance window if the following
criteria are met:

• The Auto minor version upgrade setting is enabled.


• The Backup retention period setting is greater than 0.
• The DB instance is running a minor DB engine version that is less than the current automatic upgrade
minor version.

For more information, see Automatically upgrading the minor engine version (p. 431).
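
As a sketch, the following AWS CLI command turns on both settings for an existing DB instance. The instance identifier and retention period are placeholders.

aws rds modify-db-instance \
    --db-instance-identifier mydb \
    --auto-minor-version-upgrade \
    --backup-retention-period 7 \
    --apply-immediately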

Using a read replica to reduce downtime when upgrading a MySQL database
In most cases, a blue/green deployment is the best option to reduce downtime when upgrading a MySQL
DB instance. For more information, see Using Amazon RDS Blue/Green Deployments for database
updates (p. 566).


If you can't use a blue/green deployment and your MySQL DB instance is currently in use with a
production application, you can use the following procedure to upgrade the database version for your DB
instance. This procedure can reduce the amount of downtime for your application.

By using a read replica, you can perform most of the maintenance steps ahead of time and minimize the
necessary changes during the actual outage. With this technique, you can test and prepare the new DB
instance without making any changes to your existing DB instance.

The following procedure shows an example of upgrading from MySQL version 5.7 to MySQL version 8.0.
You can use the same general steps for upgrades to other major versions.
Note
When you are upgrading from MySQL version 5.7 to MySQL version 8.0, complete the prechecks
before performing the upgrade. For more information, see Prechecks for upgrades from MySQL
5.7 to 8.0 (p. 1667).

To upgrade a MySQL database while a DB instance is in use

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Create a read replica of your MySQL 5.7 DB instance. This process creates an upgradable copy of
your database. Other read replicas of the DB instance might also exist.

a. In the console, choose Databases, and then choose the DB instance that you want to upgrade.
b. For Actions, choose Create read replica.
c. Provide a value for DB instance identifier for your read replica and ensure that the DB instance
class and other settings match your MySQL 5.7 DB instance.
d. Choose Create read replica.
3. (Optional) When the read replica has been created and Status shows Available, convert the read
replica into a Multi-AZ deployment and enable backups.

By default, a read replica is created as a Single-AZ deployment with backups disabled. Because the
read replica ultimately becomes the production DB instance, it is a best practice to configure a Multi-
AZ deployment and enable backups now.

a. In the console, choose Databases, and then choose the read replica that you just created.
b. Choose Modify.
c. For Multi-AZ deployment, choose Create a standby instance.
d. For Backup Retention Period, choose a positive nonzero value, such as 3 days, and then choose
Continue.
e. For Scheduling of modifications, choose Apply immediately.
f. Choose Modify DB instance.
4. When the read replica Status shows Available, upgrade the read replica to MySQL 8.0:

a. In the console, choose Databases, and then choose the read replica that you just created.
b. Choose Modify.
c. For DB engine version, choose the MySQL 8.0 version to upgrade to, and then choose Continue.
d. For Scheduling of modifications, choose Apply immediately.
e. Choose Modify DB instance to start the upgrade.
5. When the upgrade is complete and Status shows Available, verify that the upgraded read replica is
up-to-date with the source MySQL 5.7 DB instance. To verify, connect to the read replica and run the
SHOW REPLICA STATUS command. If the Seconds_Behind_Source field (shown as Seconds_Behind_Master
in the output of SHOW SLAVE STATUS) is 0, then replication is up-to-date.


Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA
STATUS. If you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.
6. (Optional) Create a read replica of your read replica.

If you want the DB instance to have a read replica after it is promoted to a standalone DB instance,
you can create the read replica now.

a. In the console, choose Databases, and then choose the read replica that you just upgraded.
b. For Actions, choose Create read replica.
c. Provide a value for DB instance identifier for your read replica and ensure that the DB instance
class and other settings match your MySQL 5.7 DB instance.
d. Choose Create read replica.
7. (Optional) Configure a custom DB parameter group for the read replica.

If you want the DB instance to use a custom parameter group after it is promoted to a standalone
DB instance, you can create the DB parameter group now and associate it with the read replica.

a. Create a custom DB parameter group for MySQL 8.0. For instructions, see Creating a DB
parameter group (p. 350).
b. Modify the parameters that you want to change in the DB parameter group you just created. For
instructions, see Modifying parameters in a DB parameter group (p. 352).
c. In the console, choose Databases, and then choose the read replica.
d. Choose Modify.
e. For DB parameter group, choose the MySQL 8.0 DB parameter group you just created, and then
choose Continue.
f. For Scheduling of modifications, choose Apply immediately.
g. Choose Modify DB instance to start the upgrade.
8. Make your MySQL 8.0 read replica a standalone DB instance.
Important
When you promote your MySQL 8.0 read replica to a standalone DB instance, it is no longer
a replica of your MySQL 5.7 DB instance. We recommend that you promote your MySQL 8.0
read replica during a maintenance window when your source MySQL 5.7 DB instance is in
read-only mode and all write operations are suspended. When the promotion is completed,
you can direct your write operations to the upgraded MySQL 8.0 DB instance to ensure that
no write operations are lost.
In addition, we recommend that, before promoting your MySQL 8.0 read replica, you
perform all necessary data definition language (DDL) operations on your MySQL 8.0 read
replica. An example is creating indexes. This approach avoids negative effects on the
performance of the MySQL 8.0 read replica after it has been promoted. To promote a read
replica, use the following procedure.

a. In the console, choose Databases, and then choose the read replica that you just upgraded.
b. For Actions, choose Promote.
c. Choose Yes to enable automated backups for the read replica instance. For more information,
see Working with backups (p. 591).
d. Choose Continue.
e. Choose Promote Read Replica.
9. You now have an upgraded version of your MySQL database. At this point, you can direct your
applications to the new MySQL 8.0 DB instance.
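
The following AWS CLI sketch mirrors the main steps of this procedure: create the read replica, upgrade it, and promote it. The identifiers and engine version are placeholders, and the promotion should still be timed as described in step 8.

# Step 2: Create a read replica of the source DB instance.
aws rds create-db-instance-read-replica \
    --db-instance-identifier mydb-replica \
    --source-db-instance-identifier mydb
aws rds wait db-instance-available --db-instance-identifier mydb-replica

# Step 4: Upgrade the read replica to the target engine version.
aws rds modify-db-instance \
    --db-instance-identifier mydb-replica \
    --engine-version 8.0.32 \
    --allow-major-version-upgrade \
    --apply-immediately

# Step 8: After replication has caught up, promote the read replica.
aws rds promote-read-replica \
    --db-instance-identifier mydb-replica \
    --backup-retention-period 3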


Importing data into a MySQL DB instance


You can use several different techniques to import data into an RDS for MySQL DB instance. The best
approach depends on the source of the data, the amount of data, and whether the import is done one
time or is ongoing. If you are migrating an application along with the data, also consider the amount of
downtime that you are willing to experience.

Overview
Find techniques to import data into an RDS for MySQL DB instance in the following list. Each entry
gives the source of the data, the typical amount of data, whether the import is one time or ongoing,
the expected application downtime, the technique, and where to find more information.

• Existing MySQL database on premises or on Amazon EC2 – Any amount of data; one-time import; some
downtime. Create a backup of your on-premises database, store it on Amazon S3, and then restore the
backup file to a new Amazon RDS DB instance running MySQL. For more information, see Restoring a
backup into a MySQL DB instance (p. 1680).

• Any existing database – Any amount of data; one time or ongoing; minimal downtime. Use AWS Database
Migration Service to migrate the database with minimal downtime and, for many database engines,
continue ongoing replication. For more information, see What is AWS Database Migration Service and
Using a MySQL-compatible database as a target for AWS DMS in the AWS Database Migration Service
User Guide.

• Existing MySQL DB instance – Any amount of data; one time or ongoing; minimal downtime. Create a
read replica for ongoing replication. Promote the read replica for one-time creation of a new DB
instance. For more information, see Working with DB instance read replicas (p. 438).

• Existing MariaDB or MySQL database – Small amount of data; one-time import; some downtime. Copy
the data directly to your MySQL DB instance using a command-line utility. For more information, see
Importing data from a MariaDB or MySQL database to a MariaDB or MySQL DB instance (p. 1688).

• Data not stored in an existing database – Medium amount of data; one-time import; some downtime.
Create flat files and import them using the mysqlimport utility. For more information, see Importing
data from any source to a MariaDB or MySQL DB instance (p. 1703).

• Existing MariaDB or MySQL database on premises or on Amazon EC2 – Any amount of data; ongoing
replication; minimal downtime. Configure replication with an existing MariaDB or MySQL database as
the replication source. For more information, see Configuring binary log file position replication
with an external source instance (p. 1724) and Importing data to an Amazon RDS MariaDB or MySQL
database with reduced downtime (p. 1690).

Note
The 'mysql' system database contains authentication and authorization information required
to log in to your DB instance and access your data. Dropping, altering, renaming, or truncating
tables, data, or other contents of the 'mysql' database in your DB instance can result in errors
and might render the DB instance and your data inaccessible. If this occurs, you can restore the
DB instance from a snapshot using the AWS CLI restore-db-instance-from-db-snapshot
command. You can recover the DB instance using the AWS CLI restore-db-instance-to-
point-in-time command.
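
For example, the following AWS CLI sketch restores the DB instance to a point in time before the change was made. The identifiers and restore time are placeholders.

aws rds restore-db-instance-to-point-in-time \
    --source-db-instance-identifier mydb \
    --target-db-instance-identifier mydb-recovered \
    --restore-time 2023-06-01T02:30:00Z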


Importing data considerations


Following, you can find additional technical information related to loading data into MySQL. This
information is intended for advanced users who are familiar with the MySQL server architecture. All
comments related to LOAD DATA LOCAL INFILE also apply to mysqlimport.

Binary log
Data loads incur a performance penalty and require additional free disk space (up to four times more)
when binary logging is enabled versus loading the same data with binary logging turned off. The severity
of the performance penalty and the amount of free disk space required are directly proportional to the
size of the transactions used to load the data.

Transaction size
Transaction size plays an important role in MySQL data loads. It has a major influence on resource
consumption, disk space utilization, resume process, time to recover, and input format (flat files or SQL).
This section describes how transaction size affects binary logging and makes the case for disabling
binary logging during large data loads. As noted earlier, binary logging is enabled and disabled by
setting the Amazon RDS automated backup retention period. Non-zero values enable binary logging,
and zero disables it. We also describe the impact of large transactions on InnoDB and why it's important
to keep transaction sizes small.

Small transactions
For small transactions, binary logging doubles the number of disk writes required to load the data. This
effect can severely degrade performance for other database sessions and increase the time required
to load the data. The degradation experienced depends in part upon the upload rate, other database
activity taking place during the load, and the capacity of your Amazon RDS DB instance.

The binary logs also consume disk space roughly equal to the amount of data loaded until they are
backed up and removed. Fortunately, Amazon RDS minimizes this by backing up and removing binary
logs on a frequent basis.

Large transactions
Large transactions incur a 3X penalty for IOPS and disk consumption with binary logging enabled. This
is due to the binary log cache spilling to disk, consuming disk space and incurring additional IO for
each write. The cache cannot be written to the binlog until the transaction commits or rolls back, so it
consumes disk space in proportion to the amount of data loaded. When the transaction commits, the
cache must be copied to the binlog, creating a third copy of the data on disk.

Because of this, there must be at least three times as much free disk space available to load the data
compared to loading with binary logging disabled. For example, 10 GiB of data loaded as a single
transaction consumes at least 30 GiB disk space during the load. It consumes 10 GiB for the table +
10 GiB for the binary log cache + 10 GiB for the binary log itself. The cache file remains on disk until
the session that created it terminates or the session fills its binary log cache again during another
transaction. The binary log must remain on disk until backed up, so it might be some time before the
extra 20 GiB is freed.

If the data was loaded using LOAD DATA LOCAL INFILE, yet another copy of the data is created if the
database has to be recovered from a backup made before the load. During recovery, MySQL extracts
the data from the binary log into a flat file. MySQL then runs LOAD DATA LOCAL INFILE, just as in the
original transaction. However, this time the input file is local to the database server. Continuing with the
example preceding, recovery fails unless there is at least 40 GiB free disk space available.


Disable binary logging


Whenever possible, disable binary logging during large data loads to avoid the resource overhead and
additional disk space requirements. In Amazon RDS, disabling binary logging is as simple as setting the
backup retention period to zero. If you do this, we recommend that you take a DB snapshot of the
database instance immediately before the load. By doing this, you can quickly and easily undo changes
made during loading if you need to.

After the load, set the backup retention period back to an appropriate (nonzero) value.

You can't set the backup retention period to zero if the DB instance is a source DB instance for read
replicas.
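
The following AWS CLI sketch shows that sequence: take a safety snapshot, set the retention period to zero for the load, and then restore a nonzero value afterward. The identifiers and retention value are placeholders.

# Take a DB snapshot immediately before the load.
aws rds create-db-snapshot \
    --db-instance-identifier mydb \
    --db-snapshot-identifier mydb-before-load
aws rds wait db-snapshot-available --db-snapshot-identifier mydb-before-load

# Turn off automated backups (and binary logging) for the load.
aws rds modify-db-instance \
    --db-instance-identifier mydb \
    --backup-retention-period 0 \
    --apply-immediately

# ... load the data ...

# Turn automated backups back on after the load.
aws rds modify-db-instance \
    --db-instance-identifier mydb \
    --backup-retention-period 7 \
    --apply-immediately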

InnoDB
The information in this section provides a strong argument for keeping transaction sizes small when
using InnoDB.

Undo
InnoDB generates undo to support features such as transaction rollback and MVCC. Undo is stored in the
InnoDB system tablespace (usually ibdata1) and is retained until removed by the purge thread. The purge
thread cannot advance beyond the undo of the oldest active transaction, so it is effectively blocked
until the transaction commits or completes a rollback. If the database is processing other transactions
during the load, their undo also accumulates in the system tablespace and cannot be removed even
if they commit and no other transaction needs the undo for MVCC. In this situation, all transactions
(including read-only transactions) that access any of the rows changed by any transaction (not just the
load transaction) slow down. The slowdown occurs because transactions scan through undo that could
have been purged if not for the long-running load transaction.

Undo is stored in the system tablespace, and the system tablespace never shrinks in size. Thus, large data
load transactions can cause the system tablespace to become quite large, consuming disk space that you
can't reclaim without recreating the database from scratch.

Rollback
InnoDB is optimized for commits. Rolling back a large transaction can take a very, very long time. In
some cases, it might be faster to perform a point-in-time recovery or restore a DB snapshot.

Input data format


MySQL can accept incoming data in one of two forms: flat files and SQL. This section points out some
key advantages and disadvantages of each.

Flat files
Loading flat files with LOAD DATA LOCAL INFILE can be the fastest and least costly method of loading
data as long as transactions are kept relatively small. Compared to loading the same data with SQL, flat
files usually require less network traffic, lowering transmission costs, and they load much faster due
to the reduced overhead in the database.

One big transaction


LOAD DATA LOCAL INFILE loads the entire flat file as one transaction. This isn't necessarily a bad thing. If
the size of the individual files can be kept small, this has a number of advantages:

• Resume capability – Keeping track of which files have been loaded is easy. If a problem arises
during the load, you can pick up where you left off with little effort. Some data might have to be
retransmitted to Amazon RDS, but with small files, the amount retransmitted is minimal.


• Load data in parallel – If you've got IOPS and network bandwidth to spare with a single file load,
loading in parallel might save time.
• Throttle the load rate – Data load having a negative impact on other processes? Throttle the load by
increasing the interval between files.

Be careful
The advantages of LOAD DATA LOCAL INFILE diminish rapidly as transaction size increases. If breaking up
a large set of data into smaller ones isn't an option, SQL might be the better choice.

SQL
SQL has one main advantage over flat files: it's easy to keep transaction sizes small. However, SQL can
take significantly longer to load than flat files and it can be difficult to determine where to resume
the load after a failure. For example, mysqldump files are not restartable. If a failure occurs while
loading a mysqldump file, the file requires modification or replacement before the load can resume. The
alternative is to restore to the point in time before the load and replay the file after the cause of the
failure has been corrected.

Take checkpoints using Amazon RDS snapshots


If you have a load that's going to take several hours or even days, loading without binary logging isn't
a very attractive prospect unless you can take periodic checkpoints. This is where the Amazon RDS DB
snapshot feature comes in very handy. A DB snapshot creates a point-in-time consistent copy of your
database instance, which can be used to restore the database to that point in time after a crash or other
mishap.

To create a checkpoint, simply take a DB snapshot. Any previous DB snapshots taken for checkpoints can
be removed without affecting durability or restore time.

Snapshots are fast too, so frequent checkpointing doesn't add significantly to load time.

Decreasing load time


Here are some additional tips to reduce load times:

• Create all secondary indexes before loading. This is counter-intuitive for those familiar with other
databases. Adding or modifying a secondary index causes MySQL to create a new table with the index
changes, copy the data from the existing table to the new table, and drop the original table.
• Load data in PK order. This is particularly helpful for InnoDB tables, where load times can be reduced
by 75–80 percent and data file size cut in half.
• Disable foreign key constraints by setting foreign_key_checks=0. For flat files loaded with LOAD DATA
LOCAL INFILE, this is required in many cases. For any load, disabling FK checks provides significant
performance gains. Just be sure to enable the constraints and verify the data after the load.
• Load in parallel unless already near a resource limit. Use partitioned tables when appropriate.
• Use multi-value inserts when loading with SQL to minimize overhead when running statements. When
using mysqldump, this is done automatically.
• Reduce InnoDB log I/O by setting innodb_flush_log_at_trx_commit=0.
• If you are loading data into a DB instance that does not have read replicas, set the sync_binlog
parameter to 0 while loading data. When data loading is complete, set the sync_binlog parameter
back to 1.
• Load data before converting the DB instance to a Multi-AZ deployment. However, if the DB instance
already uses a Multi-AZ deployment, switching to a Single-AZ deployment for data loading is not
recommended, because doing so only provides marginal improvements.


Note
Using innodb_flush_log_at_trx_commit=0 causes InnoDB to flush its logs every second instead
of at each commit. This provides a significant speed advantage, but can lead to data loss during
a crash. Use with caution.
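
To make several of these tips concrete, the following is a sketch of loading one flat file with the mysql client. The endpoint, database, table, and file names are placeholders, and it assumes that the local_infile parameter is enabled in your DB parameter group.

mysql --local-infile=1 -h <endpoint> -u <admin_user> -p <database_name> <<'SQL'
SET foreign_key_checks = 0;
LOAD DATA LOCAL INFILE '/data/orders_0001.csv'
INTO TABLE orders
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';
SET foreign_key_checks = 1;
SQL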

Topics
• Restoring a backup into a MySQL DB instance (p. 1680)
• Importing data from a MariaDB or MySQL database to a MariaDB or MySQL DB instance (p. 1688)
• Importing data to an Amazon RDS MariaDB or MySQL database with reduced downtime (p. 1690)
• Importing data from any source to a MariaDB or MySQL DB instance (p. 1703)


Restoring a backup into a MySQL DB instance


Amazon RDS supports importing MySQL databases by using backup files. You can create a backup of
your database, store it on Amazon S3, and then restore the backup file onto a new Amazon RDS DB
instance running MySQL.

The scenario described in this section restores a backup of an on-premises database. You can use this
technique for databases in other locations, such as Amazon EC2 or non-AWS cloud services, as long as
the database is accessible.

You can find the supported scenario in the following diagram.

Importing backup files from Amazon S3 is supported for MySQL in all AWS Regions.

We recommend that you import your database to Amazon RDS by using backup files if your on-premises
database can be offline while the backup file is created, copied, and restored. If your database can't be
offline, you can use binary log (binlog) replication to update your database after you have migrated
to Amazon RDS through Amazon S3 as explained in this topic. For more information, see Configuring
binary log file position replication with an external source instance (p. 1724). You can also use the AWS
Database Migration Service to migrate your database to Amazon RDS. For more information, see What is
AWS Database Migration Service?

Limitations and recommendations for importing backup files from Amazon S3 to Amazon RDS
The following are some limitations and recommendations for importing backup files from Amazon S3:

• You can only import your data to a new DB instance, not an existing DB instance.
• You must use Percona XtraBackup to create the backup of your on-premises database.
• You can't import data from a DB snapshot export to Amazon S3.
• You can't migrate from a source database that has tables defined outside of the default MySQL data
directory.
• You must import your data to the default minor version of your MySQL major version in your AWS
Region. For example, if your major version is MySQL 8.0, and the default minor version for your AWS
Region is 8.0.28, then you must import your data into a MySQL version 8.0.28 DB instance. You can
upgrade your DB instance after importing. For information about determining the default minor
version, see MySQL on Amazon RDS versions (p. 1627).
• Backward migration is not supported for both major versions and minor versions. For example, you
can't migrate from version 8.0 to version 5.7, and you can't migrate from version 8.0.32 to version
8.0.31.


• You can't import a MySQL 5.5 or 5.6 database.


• You can't import an on-premises MySQL database from one major version to another. For example,
you can't import a MySQL 5.7 database to an RDS for MySQL 8.0 database. You can upgrade your DB
instance after you complete the import.
• You can't restore from an encrypted source database, but you can restore to an encrypted Amazon
RDS DB instance.
• You can't restore from an encrypted backup in the Amazon S3 bucket.
• You can't restore from an Amazon S3 bucket in a different AWS Region than your Amazon RDS DB
instance.
• Importing from Amazon S3 is not supported on the db.t2.micro DB instance class. However, you can
restore to a different DB instance class, and change the DB instance class later. For more information
about instance classes, see Hardware specifications for DB instance classes (p. 87).
• Amazon S3 limits the size of a file uploaded to an Amazon S3 bucket to 5 TB. If a backup file exceeds 5
TB, then you must split the backup file into smaller files.
• When you restore the database, the backup is copied and then extracted on your DB instance.
Therefore, provision storage space for your DB instance that is equal to or greater than the backup
size plus the original database's size on disk.
• Amazon RDS limits the number of files uploaded to an Amazon S3 bucket to 1 million. If the backup
data for your database, including all full and incremental backups, exceeds 1 million files, use a Gzip
(.gz), tar (.tar.gz), or Percona xbstream (.xbstream) file to store full and incremental backup files in the
Amazon S3 bucket. Percona XtraBackup 8.0 only supports Percona xbstream for compression.
• User accounts are not imported automatically. Save your user accounts from your source database and
add them to your new DB instance later.
• Functions are not imported automatically. Save your functions from your source database and add
them to your new DB instance later.
• Stored procedures are not imported automatically. Save your stored procedures from your source
database and add them to your new DB instance later.
• Time zone information is not imported automatically. Record the time zone information for your
source database, and set the time zone of your new DB instance later. For more information, see Local
time zone for MySQL DB instances (p. 1749).
• The innodb_data_file_path parameter must be configured with only one data file that uses the
default data file name "ibdata1:12M:autoextend". Databases with two data files, or with a data
file with a different name, can't be migrated using this method.

The following are examples of file names that are not allowed:
"innodb_data_file_path=ibdata1:50M; ibdata2:50M:autoextend" and
"innodb_data_file_path=ibdata01:50M:autoextend".
• The maximum size of the restored database is the maximum database size supported minus the size of
the backup. So, if the maximum database size supported is 64 TiB, and the size of the backup is 30 TiB,
then the maximum size of the restored database is 34 TiB, as in the following example:

64 TiB - 30 TiB = 34 TiB

For information about the maximum database size supported by Amazon RDS for MySQL, see General
Purpose SSD storage (p. 102) and Provisioned IOPS SSD storage (p. 104).

Overview of setting up to import backup files from Amazon S3 to Amazon RDS
These are the components you need to set up to import backup files from Amazon S3 to Amazon RDS:

• An Amazon S3 bucket to store your backup files.


• A backup of your on-premises database created by Percona XtraBackup.


• An AWS Identity and Access Management (IAM) role to allow Amazon RDS to access the bucket.

If you already have an Amazon S3 bucket, you can use that. If you don't have an Amazon S3 bucket, you
can create a new one. If you want to create a new bucket, see Creating a bucket.

Use the Percona XtraBackup tool to create your backup. For more information, see Creating your
database backup (p. 1682).

If you already have an IAM role, you can use that. If you don't have an IAM role, you can create a new one
manually. Alternatively, you can choose to have a new IAM role created for you in your account by the
wizard when you restore the database by using the AWS Management Console. If you want to create a
new IAM role manually, or attach trust and permissions policies to an existing IAM role, see Creating an
IAM role manually (p. 1684). If you want to have a new IAM role created for you, follow the procedure in
Console (p. 1685).

Creating your database backup


Use the Percona XtraBackup software to create your backup. We recommend that you use the latest
version of Percona XtraBackup. You can install Percona XtraBackup from Download Percona XtraBackup.
Warning
When creating a database backup, XtraBackup might save credentials in the xtrabackup_info
file. Make sure you examine that file so that the tool_command setting in it doesn't contain any
sensitive information.
Note
For MySQL 8.0 migration, you must use Percona XtraBackup 8.0. Percona XtraBackup 8.0.12
and higher versions support migration of all versions of MySQL. If you are migrating to RDS for
MySQL 8.0.20 or higher, you must use Percona XtraBackup 8.0.12 or higher.
For MySQL 5.7 migrations, you can also use Percona XtraBackup 2.4. For migrations of earlier
MySQL versions, you can also use Percona XtraBackup 2.3 or 2.4.

You can create a full backup of your MySQL database files using Percona XtraBackup. Alternatively, if you
already use Percona XtraBackup to back up your MySQL database files, you can upload your existing full
and incremental backup directories and files.

For more information about backing up your database with Percona XtraBackup, see Percona XtraBackup
- documentation and The xtrabackup binary on the Percona website.

Creating a full backup with Percona XtraBackup


To create a full backup of your MySQL database files that can be restored from Amazon S3, use the
Percona XtraBackup utility (xtrabackup) to back up your database.

For example, the following command creates a backup of a MySQL database and stores the files in the
folder /on-premises/s3-restore/backup folder.

xtrabackup --backup --user=<myuser> --password=<password> \
    --target-dir=</on-premises/s3-restore/backup>

If you want to compress your backup into a single file (which can be split later, if needed), you can save
your backup in one of the following formats:

• Gzip (.gz)
• tar (.tar)


• Percona xbstream (.xbstream)

Note
Percona XtraBackup 8.0 only supports Percona xbstream for compression.

The following command creates a backup of your MySQL database split into multiple Gzip files.

xtrabackup --backup --user=<myuser> --password=<password> --stream=tar \
    --target-dir=</on-premises/s3-restore/backup> | gzip - | split -d --bytes=500MB \
    - </on-premises/s3-restore/backup/backup>.tar.gz

The following command creates a backup of your MySQL database split into multiple tar files.

xtrabackup --backup --user=<myuser> --password=<password> --stream=tar \
    --target-dir=</on-premises/s3-restore/backup> | split -d --bytes=500MB \
    - </on-premises/s3-restore/backup/backup>.tar

The following command creates a backup of your MySQL database split into multiple xbstream files.

xtrabackup --backup --user=<myuser> --password=<password> --stream=xbstream \
    --target-dir=</on-premises/s3-restore/backup> | split -d --bytes=500MB \
    - </on-premises/s3-restore/backup/backup>.xbstream

Using incremental backups with Percona XtraBackup


If you already use Percona XtraBackup to perform full and incremental backups of your MySQL database
files, you don't need to create a full backup and upload the backup files to Amazon S3. Instead, you can
save a significant amount of time by copying your existing backup directories and files to your Amazon
S3 bucket. For more information about creating incremental backups using Percona XtraBackup, see
Incremental backup.

When copying your existing full and incremental backup files to an Amazon S3 bucket, you must
recursively copy the contents of the base directory. Those contents include the full backup and also all
incremental backup directories and files. This copy must preserve the directory structure in the Amazon
S3 bucket. Amazon RDS iterates through all files and directories. Amazon RDS uses the xtrabackup-
checkpoints file that is included with each incremental backup to identify the base directory, and to
order incremental backups by log sequence number (LSN) range.
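
One way to perform that copy is with the AWS CLI, as in the following sketch. The local backup directory and bucket name are placeholders; aws s3 sync preserves the directory structure under the destination prefix.

aws s3 sync /on-premises/s3-restore/backup s3://mybucket/backups/mydb-full-and-incremental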

Backup considerations for Percona XtraBackup


Amazon RDS consumes your backup files based on the file name. Name your backup files with the
appropriate file extension based on the file format—for example, .xbstream for files stored using the
Percona xbstream format.

Amazon RDS consumes your backup files in alphabetical order and also in natural number order. Use the
split option when you issue the xtrabackup command to ensure that your backup files are written
and named in the proper order.

Amazon RDS doesn't support partial backups created using Percona XtraBackup. You can't use the
following options to create a partial backup when you back up the source files for your database:
--tables, --tables-exclude, --tables-file, --databases, --databases-exclude, or --databases-file.

Amazon RDS supports incremental backups created using Percona XtraBackup. For more information
about creating incremental backups using Percona XtraBackup, see Incremental backup.


Creating an IAM role manually


If you don't have an IAM role, you can create a new one manually. Alternatively, you can choose to
have a new IAM role created for you by the wizard when you restore the database by using the AWS
Management Console. If you want to have a new IAM role created for you, follow the procedure in
Console (p. 1685).

To manually create a new IAM role for importing your database from Amazon S3, create a role to
delegate permissions from Amazon RDS to your Amazon S3 bucket. When you create an IAM role,
you attach trust and permissions policies. To import your backup files from Amazon S3, use trust and
permissions policies similar to the examples following. For more information about creating the role, see
Creating a role to delegate permissions to an AWS service.


The trust and permissions policies require that you provide an Amazon Resource Name (ARN). For more
information about ARN formatting, see Amazon Resource Names (ARNs) and AWS service namespaces.

Example Trust policy for importing from Amazon S3

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "rds.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}

Example Permissions policy for importing from Amazon S3 — IAM user permissions

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowS3AccessRole",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::IAM User ID:role/S3Access"
        }
    ]
}

Example Permissions policy for importing from Amazon S3 — role permissions

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::bucket_name"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::bucket_name/prefix*"
        }
    ]
}

Note
If you include a file name prefix, include the asterisk (*) after the prefix. If you don't want to
specify a prefix, specify only an asterisk.
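
If you script the role creation instead of using the console wizard, the following AWS CLI sketch creates the role with the trust policy and attaches the permissions policy shown above. The role name, policy name, and file names are placeholders, and the policy documents are assumed to be saved locally as JSON files.

aws iam create-role \
    --role-name rds-s3-restore-role \
    --assume-role-policy-document file://trust-policy.json

aws iam put-role-policy \
    --role-name rds-s3-restore-role \
    --policy-name rds-s3-restore-access \
    --policy-document file://permissions-policy.json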

Importing data from Amazon S3 to a new MySQL DB instance


You can import data from Amazon S3 to a new MySQL DB instance using the AWS Management Console,
AWS CLI, or RDS API.

Console

To import data from Amazon S3 to a new MySQL DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the top right corner of the Amazon RDS console, choose the AWS Region in which to create your
DB instance. Choose the same AWS Region as the Amazon S3 bucket that contains your database
backup.
3. In the navigation pane, choose Databases.
4. Choose Restore from S3.

The Create database by restoring from S3 page appears.


5. Under S3 destination:

a. Choose the S3 bucket that contains the backup.


b. (Optional) For S3 folder path prefix, enter a file path prefix for the files stored in your Amazon
S3 bucket.

If you don't specify a prefix, then RDS creates your DB instance using all of the files and folders
in the root folder of the S3 bucket. If you do specify a prefix, then RDS creates your DB instance
using the files and folders in the S3 bucket where the path for the file begins with the specified
prefix.

For example, suppose that you store your backup files on S3 in a subfolder named backups, and
you have multiple sets of backup files, each in its own directory (gzip_backup1, gzip_backup2,
and so on). In this case, you specify a prefix of backups/gzip_backup1 to restore from the files in
the gzip_backup1 folder.


6. Under Engine options:

a. For Engine type, choose MySQL.


b. For Source engine version, choose the MySQL major version of your source database.
c. For Version, choose the default minor version of your MySQL major version in your AWS Region.

In the AWS Management Console, only the default minor version is available. You can upgrade
your DB instance after importing.
7. For IAM role, you can choose an existing IAM role.
8. (Optional) You can also have a new IAM role created for you by choosing Create a new role and
entering the IAM role name.
9. Specify your DB instance information. For information about each setting, see Settings for DB
instances (p. 308).
Note
Be sure to allocate enough memory for your new DB instance so that the restore operation
can succeed.
You can also choose Enable storage autoscaling to allow for future growth automatically.
10. Choose additional settings as needed.
11. Choose Create database.

AWS CLI

To import data from Amazon S3 to a new MySQL DB instance by using the AWS CLI, call the restore-
db-instance-from-s3 command with the following parameters. For information about each setting, see
Settings for DB instances (p. 308).
Note
Be sure to allocate enough memory for your new DB instance so that the restore operation can
succeed.
You can also use the --max-allocated-storage parameter to enable storage autoscaling
and allow for future growth automatically.

• --allocated-storage
• --db-instance-identifier
• --db-instance-class
• --engine
• --master-username
• --manage-master-user-password
• --s3-bucket-name
• --s3-ingestion-role-arn
• --s3-prefix
• --source-engine
• --source-engine-version

Example

For Linux, macOS, or Unix:

aws rds restore-db-instance-from-s3 \
    --allocated-storage 250 \
    --db-instance-identifier myidentifier \
    --db-instance-class db.m5.large \
    --engine mysql \
    --master-username admin \
    --manage-master-user-password \
    --s3-bucket-name mybucket \
    --s3-ingestion-role-arn arn:aws:iam::account-number:role/rolename \
    --s3-prefix bucketprefix \
    --source-engine mysql \
    --source-engine-version 8.0.32 \
    --max-allocated-storage 1000

For Windows:

aws rds restore-db-instance-from-s3 ^
--allocated-storage 250 ^
--db-instance-identifier myidentifier ^
--db-instance-class db.m5.large ^
--engine mysql ^
--master-username admin ^
--manage-master-user-password ^
--s3-bucket-name mybucket ^
--s3-ingestion-role-arn arn:aws:iam::account-number:role/rolename ^
--s3-prefix bucketprefix ^
--source-engine mysql ^
--source-engine-version 8.0.32 ^
--max-allocated-storage 1000

RDS API

To import data from Amazon S3 to a new MySQL DB instance by using the Amazon RDS API, call the
RestoreDBInstanceFromS3 operation.

Importing data from a MariaDB or MySQL database to a MariaDB or MySQL DB instance

You can also import data from an existing MariaDB or MySQL database to a MySQL or MariaDB DB
instance. You do so by copying the database with mysqldump and piping it directly into the MariaDB
or MySQL DB instance. The mysqldump command line utility is commonly used to make backups and
transfer data from one MariaDB or MySQL server to another. It's included with MySQL and MariaDB client
software.
Note
If you are using a MySQL DB instance and your scenario supports it, it's easier to move data
in and out of Amazon RDS by using backup files and Amazon S3. For more information, see
Restoring a backup into a MySQL DB instance (p. 1680).

A typical mysqldump command to move data from an external database to an Amazon RDS DB instance
looks similar to the following.

mysqldump -u local_user \
--databases database_name \
--single-transaction \
--compress \
--order-by-primary \
-plocal_password | mysql -u RDS_user \
--port=port_number \
--host=host_name \
-pRDS_password

Important
Make sure not to leave a space between the -p option and the entered password.
Specify credentials other than the prompts shown here as a security best practice.

Make sure that you're aware of the following recommendations and considerations:

• Exclude the following schemas from the dump file: sys, performance_schema, and
information_schema. The mysqldump utility excludes these schemas by default.
• If you need to migrate users and privileges, consider using a tool that generates the data control
language (DCL) for recreating them, such as the pt-show-grants utility (see the sketch after this list).
• To perform the import, make sure the user doing so has access to the DB instance. For more
information, see Controlling access with security groups (p. 2680).
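
For example, the following minimal sketch uses the pt-show-grants utility from Percona Toolkit to capture that DCL. The host, user, and file names are placeholders, and the utility must be installed separately.

# Write the CREATE USER and GRANT statements from the source server to a file.
pt-show-grants --host=source_host --user=admin --ask-pass > grants.sql

# Review grants.sql, remove system accounts and privileges that Amazon RDS doesn't allow
# (such as SUPER), and then replay the file on the DB instance.
mysql -h myinstance.123456789012.us-east-1.rds.amazonaws.com -u admin -p < grants.sql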

The parameters used are as follows:

• -u local_user – Use to specify a user name. In the first usage of this parameter, you specify the
name of a user account on the local MariaDB or MySQL database identified by the --databases
parameter.
• --databases database_name – Use to specify the name of the database on the local MariaDB or
MySQL instance that you want to import into Amazon RDS.
• --single-transaction – Use to ensure that all of the data loaded from the local database is
consistent with a single point in time. If there are other processes changing the data while mysqldump
is reading it, using this parameter helps maintain data integrity.
• --compress – Use to reduce network bandwidth consumption by compressing the data from the local
database before sending it to Amazon RDS.
• --order-by-primary – Use to reduce load time by sorting each table's data by its primary key.
• -plocal_password – Use to specify a password. In the first usage of this parameter, you specify the
password for the user account identified by the first -u parameter.
• -u RDS_user – Use to specify a user name. In the second usage of this parameter, you specify the
name of a user account on the default database for the MariaDB or MySQL DB instance identified by
the --host parameter.
• --port port_number – Use to specify the port for your MariaDB or MySQL DB instance. By default,
this is 3306 unless you changed the value when creating the instance.
• --host host_name – Use to specify the Domain Name System (DNS) name from the Amazon RDS DB
instance endpoint, for example, myinstance.123456789012.us-east-1.rds.amazonaws.com.
You can find the endpoint value in the instance details in the Amazon RDS Management Console.
• -pRDS_password – Use to specify a password. In the second usage of this parameter, you specify the
password for the user account identified by the second -u parameter.

Make sure to create any stored procedures, triggers, functions, or events manually in your Amazon RDS
database. If you have any of these objects in the database that you are copying, then exclude them when
you run mysqldump. To do so, include the following parameters with your mysqldump command:
--routines=0 --triggers=0 --events=0.

The following example copies the world sample database on the local host to a MySQL DB instance.

For Linux, macOS, or Unix:

sudo mysqldump -u localuser \
--databases world \
--single-transaction \
--compress \
--order-by-primary \
--routines=0 \
--triggers=0 \
--events=0 \
-plocalpassword | mysql -u rdsuser \
--port=3306 \
--host=myinstance.123456789012.us-east-1.rds.amazonaws.com \
-prdspassword

For Windows, run the following command in a command prompt that has been opened by right-clicking
Command Prompt on the Windows programs menu and choosing Run as administrator:

mysqldump -u localuser ^
--databases world ^
--single-transaction ^
--compress ^
--order-by-primary ^
--routines=0 ^
--triggers=0 ^
--events=0 ^
-plocalpassword | mysql -u rdsuser ^
--port=3306 ^
--host=myinstance.123456789012.us-east-1.rds.amazonaws.com ^
-prdspassword

Note
Specify credentials other than the prompts shown here as a security best practice.

Importing data to an Amazon RDS MariaDB or MySQL database with reduced downtime

In some cases, you might need to import data from an external MariaDB or MySQL database that
supports a live application to a MariaDB DB instance, a MySQL DB instance, or a MySQL Multi-AZ
DB cluster. Use the following procedure to minimize the impact on availability of applications. This
procedure can also help if you are working with a very large database. Using this procedure, you can
reduce the cost of the import by reducing the amount of data that is passed across the network to AWS.

In this procedure, you transfer a copy of your database data to an Amazon EC2 instance and import the
data into a new Amazon RDS database. You then use replication to bring the Amazon RDS database
up-to-date with your live external instance, before redirecting your application to the Amazon RDS
database. Configure MariaDB replication based on global transaction identifiers (GTIDs) if the external
instance is MariaDB 10.0.24 or higher and the target instance is RDS for MariaDB. Otherwise, configure
replication based on binary log coordinates. We recommend GTID-based replication if your external
database supports it because GTID-based replication is a more reliable method. For more information,
see Global transaction ID in the MariaDB documentation.
Note
If you want to import data into a MySQL DB instance and your scenario supports it, we
recommend moving data in and out of Amazon RDS by using backup files and Amazon S3. For
more information, see Restoring a backup into a MySQL DB instance (p. 1680).


Note
We don't recommend that you use this procedure with source MySQL databases from MySQL
versions earlier than version 5.5 because of potential replication issues. For more information,
see Replication compatibility between MySQL versions in the MySQL documentation.

Create a copy of your existing database


The first step in the process of migrating a large amount of data to an RDS for MariaDB or RDS for
MySQL database with minimal downtime is to create a copy of the source data.


You can use the mysqldump utility to create a database backup in either SQL or delimited-text format.
We recommend that you do a test run with each format in a non-production environment to see which
method minimizes the amount of time that mysqldump runs.

We also recommend that you weigh mysqldump performance against the benefit offered by using the
delimited-text format for loading. A backup using delimited-text format creates a tab-separated text
file for each table being dumped. To reduce the amount of time required to import your database, you
can load these files in parallel using the LOAD DATA LOCAL INFILE command. For more information
about choosing a mysqldump format and then loading the data, see Using mysqldump for backups in
the MySQL documentation.
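
As an illustration only, the following shell sketch starts one mysql client per dump file so that the tables load in parallel. The directory, endpoint, user, and database names are placeholders, the target DB instance must allow LOCAL loads (the local_infile parameter), and the FIELDS and LINES clauses must match the options that you pass to mysqldump. For a very large number of tables, limit how many loads run at once.

# Each .txt file produced by mysqldump --tab holds the data for one table.
for f in /tmp/dump/*.txt; do
    table=$(basename "$f" .txt)
    mysql --local-infile=1 \
        -h myinstance.123456789012.us-east-1.rds.amazonaws.com \
        -u rdsuser -p"rdspassword" database_name \
        -e "LOAD DATA LOCAL INFILE '$f' INTO TABLE $table;" &
done
wait    # Block until all background loads finish.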

Before you start the backup operation, make sure to set the replication options on the MariaDB or
MySQL database that you are copying to Amazon RDS. The replication options include turning on
binary logging and setting a unique server ID. Setting these options causes your server to start logging
database transactions and prepares it to be a source replication instance later in this process.
Note
Use the --single-transaction option with mysqldump because it dumps a consistent
state of the database. To ensure a valid dump file, don't run data definition language (DDL)
statements while mysqldump is running. You can schedule a maintenance window for these
operations.
Exclude the following schemas from the dump file: sys, performance_schema, and
information_schema. The mysqldump utility excludes these schemas by default.
To migrate users and privileges, consider using a tool that generates the data control language
(DCL) for recreating them, such as the pt-show-grants utility.

To set replication options


1. Edit the my.cnf file (this file is usually under /etc).

sudo vi /etc/my.cnf

Add the log_bin and server_id options to the [mysqld] section. The log_bin option provides
a file name identifier for binary log files. The server_id option provides a unique identifier for the
server in source-replica relationships.

The following example shows the updated [mysqld] section of a my.cnf file.

[mysqld]
log-bin=mysql-bin
server-id=1

For more information, see the MySQL documentation.


2. For replication with a Multi-AZ DB cluster, set the ENFORCE_GTID_CONSISTENCY and the GTID_MODE
parameter to ON.

mysql> SET @@GLOBAL.ENFORCE_GTID_CONSISTENCY = ON;

mysql> SET @@GLOBAL.GTID_MODE = ON;

These settings aren't required for replication with a DB instance.


3. Restart the mysql service.

sudo service mysqld restart


To create a backup copy of your existing database


1. Create a backup of your data using the mysqldump utility, specifying either SQL or delimited-text
format.

Specify --master-data=2 to create a backup file that can be used to start replication between
servers. For more information, see the mysqldump documentation.

To improve performance and ensure data integrity, use the --order-by-primary and --single-
transaction options of mysqldump.

To avoid including the MySQL system database in the backup, do not use the --all-databases
option with mysqldump. For more information, see Creating a data snapshot using mysqldump in the
MySQL documentation.

Use chmod if necessary to make sure that the directory where the backup file is being created is
writeable.
Important
On Windows, run the command window as an administrator.
• To produce SQL output, use the following command.

For Linux, macOS, or Unix:

sudo mysqldump \
--databases database_name \
--master-data=2 \
--single-transaction \
--order-by-primary \
-r backup.sql \
-u local_user \
-ppassword

Note
Specify credentials other than the prompts shown here as a security best practice.

For Windows:

mysqldump ^
--databases database_name ^
--master-data=2 ^
--single-transaction ^
--order-by-primary ^
-r backup.sql ^
-u local_user ^
-ppassword

Note
Specify credentials other than the prompts shown here as a security best practice.
• To produce delimited-text output, use the following command.

For Linux, macOS, or Unix:

sudo mysqldump \
--tab=target_directory \
--fields-terminated-by ',' \
--fields-enclosed-by '"' \
--lines-terminated-by 0x0d0a \
database_name \
--master-data=2 \
--single-transaction \
--order-by-primary \
-ppassword

For Windows:

mysqldump ^
--tab=target_directory ^
--fields-terminated-by "," ^
--fields-enclosed-by """ ^
--lines-terminated-by 0x0d0a ^
database_name ^
--master-data=2 ^
--single-transaction ^
--order-by-primary ^
-ppassword

Note
Specify credentials other than the prompts shown here as a security best practice.
Make sure to create any stored procedures, triggers, functions, or events manually in
your Amazon RDS database. If you have any of these objects in the database that you
are copying, exclude them when you run mysqldump. To do so, include the following
arguments with your mysqldump command: --routines=0 --triggers=0 --events=0.

When using the delimited-text format, a CHANGE MASTER TO comment is returned when you
run mysqldump. This comment contains the master log file name and position. If the external
instance isn't MariaDB version 10.0.24 or higher, note the values for MASTER_LOG_FILE
and MASTER_LOG_POS. You need these values when setting up replication.

--
-- Position to start replication or point-in-time recovery from
--
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin-changelog.000031', MASTER_LOG_POS=107;

If you are using SQL format, you can get the master log file name and position in the CHANGE
MASTER TO comment in the backup file. If the external instance is MariaDB version 10.0.24 or
higher, you can get the GTID in the next step.
2. If the external instance you are using is MariaDB version 10.0.24 or higher, you use GTID-based
replication. Run SHOW MASTER STATUS on the external MariaDB instance to get the binary log file
name and position, then convert them to a GTID by running BINLOG_GTID_POS on the external
MariaDB instance.

SELECT BINLOG_GTID_POS('binary log file name', binary log file position);

Note the GTID returned; you need it to configure replication.


3. Compress the copied data to reduce the amount of network resources needed to copy your data to the
Amazon RDS database. Note the size of the backup file. You need this information when determining
how large an Amazon EC2 instance to create. When you are done, compress the backup file using GZIP
or your preferred compression utility.
• To compress SQL output, use the following command.

gzip backup.sql

• To compress delimited-text output, use the following command.


tar -zcvf backup.tar.gz target_directory

Create an Amazon EC2 instance and copy the compressed database

Copying your compressed database backup file to an Amazon EC2 instance takes fewer network
resources than doing a direct copy of uncompressed data between database instances. After your data is
in Amazon EC2, you can copy it from there directly to your MariaDB or MySQL database. For you to save
on the cost of network resources, your Amazon EC2 instance must be in the same AWS Region as your
Amazon RDS DB instance. Having the Amazon EC2 instance in the same AWS Region as your Amazon
RDS database also reduces network latency during the import.

To create an Amazon EC2 instance and copy your data


1. In the AWS Region where you plan to create the RDS database, create a virtual private cloud (VPC),
a VPC security group, and a VPC subnet. Ensure that the inbound rules for your VPC security group
allow the IP addresses required for your application to connect to AWS. You can specify a range
of IP addresses (for example, 203.0.113.0/24), or another VPC security group. You can use the
Amazon VPC Management Console to create and manage VPCs, subnets, and security groups. For
more information, see Getting started with Amazon VPC in the Amazon Virtual Private Cloud Getting
Started Guide.
2. Open the Amazon EC2 Management Console and choose the AWS Region to contain both your
Amazon EC2 instance and your Amazon RDS database. Launch an Amazon EC2 instance using the VPC,
subnet, and security group that you created in Step 1. Ensure that you select an instance type with
enough storage for your database backup file when it is uncompressed. For details on Amazon EC2
instances, see Getting started with Amazon EC2 Linux instances in the Amazon Elastic Compute Cloud
User Guide for Linux.
3. To connect to your Amazon RDS database from your Amazon EC2 instance, edit your VPC security
group. Add an inbound rule specifying the private IP address of your EC2 instance. You can find the
private IP address on the Details tab of the Instance pane in the EC2 console window. To edit the VPC

security group and add an inbound rule, choose Security Groups in the EC2 console navigation pane,
choose your security group, and then add an inbound rule for MySQL or Aurora specifying the private
IP address of your EC2 instance. To learn how to add an inbound rule to a VPC security group, see
Adding and removing rules in the Amazon VPC User Guide.
4. Copy your compressed database backup file from your local system to your Amazon EC2 instance.
Use chmod if necessary to make sure that you have write permission for the target directory of the
Amazon EC2 instance. You can use scp or a Secure Shell (SSH) client to copy the file. The following is
an example.

$ scp -r -i key_pair.pem backup.sql.gz ec2-user@EC2_DNS:/target_directory/backup.sql.gz

Important
Be sure to copy sensitive data using a secure network transfer protocol.
5. Connect to your Amazon EC2 instance and install the latest updates and the MySQL client tools using
the following commands.

sudo yum update -y


sudo yum install mysql -y

For more information, see Connect to your instance in the Amazon Elastic Compute Cloud User Guide
for Linux.
Important
This example installs the MySQL client on an Amazon Machine Image (AMI) for an Amazon
Linux distribution. This example doesn't work for installing the MySQL client on other
distributions, such as Ubuntu or Red Hat Enterprise Linux. For information about installing MySQL,
see Installing and Upgrading MySQL in the MySQL documentation.
6. While connected to your Amazon EC2 instance, decompress your database backup file. The following
are examples.
• To decompress SQL output, use the following command.

gzip backup.sql.gz -d

• To decompress delimited-text output, use the following command.

tar xzvf backup.tar.gz

Create a MySQL or MariaDB database and import data from your Amazon EC2 instance

By creating a MariaDB DB instance, a MySQL DB instance, or a MySQL Multi-AZ DB cluster in the same
AWS Region as your Amazon EC2 instance, you can import the database backup file from EC2 faster than
over the internet.


To create a MariaDB or MySQL database and import your data


1. Determine which DB instance class and what amount of storage space is required to support the
expected workload for this Amazon RDS database. As part of this process, decide what is sufficient
space and processing capacity for your data load procedures. Also decide what is required to handle
the production workload. You can estimate this based on the size and resources of the source
MariaDB or MySQL database. For more information, see DB instance classes (p. 11).
2. Create a DB instance or Multi-AZ DB cluster in the AWS Region that contains your Amazon EC2
instance.

To create a MySQL Multi-AZ DB cluster, follow the instructions in Creating a Multi-AZ DB
cluster (p. 508).

To create a MariaDB or MySQL DB instance, follow the instructions in Creating an Amazon RDS DB
instance (p. 300) and use the following guidelines:

• Specify a DB engine version that is compatible with your source DB instance, as follows:
• If your source instance is MySQL 5.5.x, the Amazon RDS DB instance must be MySQL.
• If your source instance is MySQL 5.6.x or 5.7.x, the Amazon RDS DB instance must be MySQL or
MariaDB.
• If your source instance is MySQL 8.0.x, the Amazon RDS DB instance must be MySQL 8.0.x.
• If your source instance is MariaDB 5.5 or higher, the Amazon RDS DB instance must be MariaDB.
• Specify the same virtual private cloud (VPC) and VPC security group as for your Amazon EC2
instance. This approach ensures that your Amazon EC2 instance and your Amazon RDS instance
are visible to each other over the network. Make sure your DB instance is publicly accessible. To
set up replication with your source database as described later, your DB instance must be publicly
accessible.
• Don't configure multiple Availability Zones, backup retention, or read replicas until after you have
imported the database backup. When that import is completed, you can configure Multi-AZ and
backup retention for the production instance.
3. Review the default configuration options for the Amazon RDS database. If the default parameter
group for the database doesn't have the configuration options that you want, find a different one

that does or create a new parameter group. For more information on creating a parameter group,
see Working with parameter groups (p. 347).
4. Connect to the new Amazon RDS database as the master user. Create the users required to support
the administrators, applications, and services that need to access the instance. The hostname for the
Amazon RDS database is the Endpoint value for this instance without including the port number.
An example is mysampledb.123456789012.us-west-2.rds.amazonaws.com. You can find the
endpoint value in the database details in the Amazon RDS Management Console.
5. Connect to your Amazon EC2 instance. For more information, see Connect to your instance in the
Amazon Elastic Compute Cloud User Guide for Linux.
6. Connect to your Amazon RDS database as a remote host from your Amazon EC2 instance using the
mysql command. The following is an example.

mysql -h host_name -P 3306 -u db_master_user -p

The hostname is the Amazon RDS database endpoint.


7. At the mysql prompt, run the source command and pass it the name of your database dump file to
load the data into the Amazon RDS DB instance:

• For SQL format, use the following command.

mysql> source backup.sql;

• For delimited-text format, first create the database, if it isn't the default database you created
when setting up the Amazon RDS database.

mysql> create database database_name;
mysql> use database_name;

Then create the tables.

mysql> source table1.sql
mysql> source table2.sql
etc...

Then import the data.

mysql> LOAD DATA LOCAL INFILE 'table1.txt' INTO TABLE table1
    FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY 0x0d0a;
mysql> LOAD DATA LOCAL INFILE 'table2.txt' INTO TABLE table2
    FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY 0x0d0a;
etc...

To improve performance, you can perform these operations in parallel from multiple connections
so that all of your tables are created and then loaded at the same time.
Note
If you used any data-formatting options with mysqldump when you initially dumped
the table, make sure to use the same options with mysqlimport or LOAD DATA LOCAL
INFILE to ensure proper interpretation of the data file contents.
8. Run a simple SELECT query against one or two of the tables in the imported database to verify that
the import was successful, as in the example following this procedure.
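
For example, assuming table1 is one of the imported tables, comparing a row count against the source database is a quick check:

mysql> SELECT COUNT(*) FROM table1;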


If you no longer need the Amazon EC2 instance used in this procedure, terminate the EC2 instance
to reduce your AWS resource usage. To terminate an EC2 instance, see Terminating an instance in the
Amazon EC2 User Guide.

Replicate between your external database and new Amazon RDS database

Your source database was likely updated during the time that it took to copy and transfer the data to the
MariaDB or MySQL database. Thus, you can use replication to bring the copied database up-to-date with
the source database.

The permissions required to start replication on an Amazon RDS database are restricted and not
available to your Amazon RDS master user. Because of this, make sure to use either the Amazon RDS
mysql.rds_set_external_master (p. 1769) command or the mysql.rds_set_external_master_gtid (p. 1345)
command to configure replication, and the mysql.rds_start_replication (p. 1780) command to start
replication between your live database and your Amazon RDS database.

To start replication
Earlier, you turned on binary logging and set a unique server ID for your source database. Now you can
set up your Amazon RDS database as a replica with your live database as the source replication instance.

1. In the Amazon RDS Management Console, add the IP address of the server that hosts the source
database to the VPC security group for the Amazon RDS database. For more information on modifying
a VPC security group, see Security groups for your VPC in the Amazon Virtual Private Cloud User Guide.

You might also need to configure your local network to permit connections from the IP address of
your Amazon RDS database, so that it can communicate with your source instance. To find the IP
address of the Amazon RDS database, use the host command.

host rds_db_endpoint


The hostname is the DNS name from the Amazon RDS database endpoint, for example
myinstance.123456789012.us-east-1.rds.amazonaws.com. You can find the endpoint value
in the instance details in the Amazon RDS Management Console.
2. Using the client of your choice, connect to the source instance and create a user to be used for
replication. This account is used solely for replication and must be restricted to your domain to
improve security. The following is an example.

MySQL 5.5, 5.6, and 5.7

CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED BY 'password';

MySQL 8.0

CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED WITH mysql_native_password BY 'password';

Note
Specify credentials other than the prompts shown here as a security best practice.
3. For the source instance, grant REPLICATION CLIENT and REPLICATION SLAVE privileges to
your replication user. For example, to grant the REPLICATION CLIENT and REPLICATION SLAVE
privileges on all databases for the 'repl_user' user for your domain, issue the following command.

MySQL 5.5, 5.6, and 5.7

GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com'
IDENTIFIED BY 'password';

MySQL 8.0

GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com';

Note
Specify credentials other than the prompts shown here as a security best practice.
4. If you used SQL format to create your backup file and the external instance is not MariaDB 10.0.24 or
higher, look at the contents of that file.

cat backup.sql

The file includes a CHANGE MASTER TO comment that contains the master log file name and
position. This comment is included in the backup file when you use the --master-data option with
mysqldump. Note the values for MASTER_LOG_FILE and MASTER_LOG_POS.

--
-- Position to start replication or point-in-time recovery from
--

-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin-changelog.000031', MASTER_LOG_POS=107;

If you used delimited text format to create your backup file and the external instance isn't MariaDB
10.0.24 or higher, you should already have binary log coordinates from step 1 of the procedure at "To
create a backup copy of your existing database" in this topic.


If the external instance is MariaDB 10.0.24 or higher, you should already have the GTID from which to
start replication from step 2 of the procedure at "To create a backup copy of your existing database" in
this topic.
5. Make the Amazon RDS database the replica. If the external instance isn't MariaDB 10.0.24 or higher,
connect to the Amazon RDS database as the master user and identify the source database as the
source replication instance by using the mysql.rds_set_external_master (p. 1769) command. Use the
master log file name and master log position that you determined in the previous step if you have a
SQL format backup file. Or use the name and position that you determined when creating the backup
files if you used delimited-text format. The following is an example.

CALL mysql.rds_set_external_master ('myserver.mydomain.com', 3306,
    'repl_user', 'password', 'mysql-bin-changelog.000031', 107, 0);

Note
Specify credentials other than the prompts shown here as a security best practice.

If the external instance is MariaDB 10.0.24 or higher, connect to the Amazon RDS database as
the master user and identify the source database as the source replication instance by using the
mysql.rds_set_external_master_gtid (p. 1345) command. Use the GTID that you determined in step 2
of the procedure at "To create a backup copy of your existing database" in this topic. The following is
an example.

CALL mysql.rds_set_external_master_gtid ('source_server_ip_address', 3306,
    'ReplicationUser', 'password', 'GTID', 0);

The source_server_ip_address is the IP address of the source replication instance. An EC2 private
DNS address is currently not supported.
Note
Specify credentials other than the prompts shown here as a security best practice.
6. On the Amazon RDS database, issue the mysql.rds_start_replication (p. 1780) command to start
replication.

CALL mysql.rds_start_replication;

7. On the Amazon RDS database, run the SHOW REPLICA STATUS command to determine when the
replica is up-to-date with the source replication instance. The results of the SHOW REPLICA STATUS
command include the Seconds_Behind_Master field. When the Seconds_Behind_Master field
returns 0, then the replica is up-to-date with the source replication instance.
Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA STATUS.
If you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.

For a MariaDB 10.5, 10.6, or 10.11 DB instance, run the mysql.rds_replica_status (p. 1344) procedure
instead of the MySQL command.
8. After the Amazon RDS database is up-to-date, turn on automated backups so you can restore
that database if needed. You can turn on or modify automated backups for your Amazon RDS
database using the Amazon RDS Management Console. For more information, see Working with
backups (p. 591).


Redirect your live application to your Amazon RDS instance


After the MariaDB or MySQL database is up-to-date with the source replication instance, you can now
update your live application to use the Amazon RDS instance.

To redirect your live application to your MariaDB or MySQL database and stop
replication
1. In the VPC security group for the Amazon RDS database, add an inbound rule for the IP address of
the server that hosts the application. For more information on modifying a VPC security group, see Security groups
for your VPC in the Amazon Virtual Private Cloud User Guide.
2. Verify that the Seconds_Behind_Master field in the SHOW REPLICA STATUS command results is 0,
which indicates that the replica is up-to-date with the source replication instance.

SHOW REPLICA STATUS;

Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA STATUS.
If you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.

For a MariaDB 10.5, 10.6, or 10.11 DB instance, run the mysql.rds_replica_status (p. 1344) procedure
instead of the MySQL command.
3. Close all connections to the source when their transactions complete.
4. Update your application to use the Amazon RDS database. This update typically involves changing the
connection settings to identify the hostname and port of the Amazon RDS database, the user account
and password to connect with, and the database to use.
5. Connect to the DB instance.

For a Multi-AZ DB cluster, connect to the writer DB instance.


6. Stop replication for the Amazon RDS instance using the mysql.rds_stop_replication (p. 1782)
command.


CALL mysql.rds_stop_replication;

7. Run the mysql.rds_reset_external_master (p. 1769) command on your Amazon RDS database to reset
the replication configuration so this instance is no longer identified as a replica.

CALL mysql.rds_reset_external_master;

8. Turn on additional Amazon RDS features such as Multi-AZ support and read replicas. For more
information, see Configuring and managing a Multi-AZ deployment (p. 492) and Working with DB
instance read replicas (p. 438).

Importing data from any source to a MariaDB or MySQL DB instance

If you have more than 1 GiB of data to load, or if your data is coming from somewhere other than a
MariaDB or MySQL database, we recommend creating flat files and loading them with mysqlimport.
The mysqlimport utility is another command line utility bundled with the MySQL and MariaDB client
software. Its purpose is to load flat files into MySQL or MariaDB. For information about mysqlimport, see
mysqlimport - a data import program in the MySQL documentation.

We also recommend creating DB snapshots of the target Amazon RDS DB instance before and after the
data load. Amazon RDS DB snapshots are complete backups of your DB instance that can be used to
restore your DB instance to a known state. When you initiate a DB snapshot, I/O operations to your DB
instance are momentarily suspended while your database is backed up.

Creating a DB snapshot immediately before the load makes it possible for you to restore the database
to its state before the load, if you need to. A DB snapshot taken immediately after the load protects
you from having to load the data again in case of a mishap and can also be used to seed new database
instances.

The following list shows the steps to take. Each step is discussed in more detail following.

1. Create flat files containing the data to be loaded.


2. Stop any applications accessing the target DB instance.
3. Create a DB snapshot.
4. Consider turning off Amazon RDS automated backups.
5. Load the data using mysqlimport.
6. Enable automated backups again.

Step 1: Create flat files containing the data to be loaded


Use a common format, such as comma-separated values (CSV), to store the data to be loaded. Each table
must have its own file; you can't combine data for multiple tables in the same file. Give each file the
same name as the table it corresponds to. The file extension can be anything you like. For example, if the
table name is sales, the file name might be sales.csv or sales.txt, but not sales_01.csv.

Whenever possible, order the data by the primary key of the table being loaded. Doing this drastically
improves load times and minimizes disk storage requirements.

The speed and efficiency of this procedure depends on keeping the size of the files small. If the
uncompressed size of any individual file is larger than 1 GiB, split it into multiple files and load each one
separately.


On Unix-like systems (including Linux), use the split command. For example, the following command
splits the sales.csv file into multiple files of less than 1 GiB, splitting only at line breaks (-C 1024m).
The new files are named sales.part_00, sales.part_01, and so on.

split -C 1024m -d sales.csv sales.part_

Similar utilities are available for other operating systems.

Step 2: Stop any applications accessing the target DB instance


Before starting a large load, stop all application activity accessing the target DB instance that you plan
to load to. We recommend this particularly if other sessions will be modifying the tables being loaded
or tables that they reference. Doing this reduces the risk of constraint violations occurring during the
load and improves load performance. It also makes it possible to restore the DB instance to the point just
before the load without losing changes made by processes not involved in the load.

Of course, this might not be possible or practical. If you can't stop applications from accessing the DB
instance before the load, take steps to ensure the availability and integrity of your data. The specific
steps required vary greatly depending upon specific use cases and site requirements.

Step 3: Create a DB snapshot


If you plan to load data into a new DB instance that contains no data, you can skip this step. Otherwise,
creating a DB snapshot of your DB instance makes it possible for you to restore the DB instance to the
point just before the load, if it becomes necessary. As previously mentioned, when you initiate a DB
snapshot, I/O operations to your DB instance are suspended for a few minutes while the database is
backed up.

The example following uses the AWS CLI create-db-snapshot command to create a DB snapshot of
the AcmeRDS instance and give the DB snapshot the identifier "preload".

For Linux, macOS, or Unix:

aws rds create-db-snapshot \
--db-instance-identifier AcmeRDS \
--db-snapshot-identifier preload

For Windows:

aws rds create-db-snapshot ^
--db-instance-identifier AcmeRDS ^
--db-snapshot-identifier preload

You can also use the restore from DB snapshot functionality to create test DB instances for dry runs or to
undo changes made during the load.

Keep in mind that restoring a database from a DB snapshot creates a new DB instance that, like all
DB instances, has a unique identifier and endpoint. To restore the DB instance without changing the
endpoint, first delete the DB instance so that you can reuse the endpoint.

For example, to create a DB instance for dry runs or other testing, you give the DB instance its own
identifier. In the example, AcmeRDS-2 is the identifier. The example connects to the DB instance using
the endpoint associated with AcmeRDS-2.

For Linux, macOS, or Unix:


aws rds restore-db-instance-from-db-snapshot \
--db-instance-identifier AcmeRDS-2 \
--db-snapshot-identifier preload

For Windows:

aws rds restore-db-instance-from-db-snapshot ^
--db-instance-identifier AcmeRDS-2 ^
--db-snapshot-identifier preload

To reuse the existing endpoint, first delete the DB instance and then give the restored database the same
identifier.

For Linux, macOS, or Unix:

aws rds delete-db-instance \
--db-instance-identifier AcmeRDS \
--final-db-snapshot-identifier AcmeRDS-Final

aws rds restore-db-instance-from-db-snapshot \
--db-instance-identifier AcmeRDS \
--db-snapshot-identifier preload

For Windows:

aws rds delete-db-instance ^
--db-instance-identifier AcmeRDS ^
--final-db-snapshot-identifier AcmeRDS-Final

aws rds restore-db-instance-from-db-snapshot ^
--db-instance-identifier AcmeRDS ^
--db-snapshot-identifier preload

The preceding example takes a final DB snapshot of the DB instance before deleting it. This is optional
but recommended.

Step 4: Consider turning off Amazon RDS automated backups


Warning
Do not turn off automated backups if you need to perform point-in-time recovery.

Turning off automated backups erases all existing backups, so point-in-time recovery isn't possible after
automated backups have been turned off. Disabling automated backups is a performance optimization
and isn't required for data loads. Manual DB snapshots aren't affected by turning off automated backups.
All existing manual DB snapshots are still available for restore.

Turning off automated backups reduces load time by about 25 percent and reduces the amount of
storage space required during the load. If you plan to load data into a new DB instance that contains
no data, turning off backups is an easy way to speed up the load and avoid using the additional storage
needed for backups. However, in some cases you might plan to load into a DB instance that already
contains data. If so, weigh the benefits of turning off backups against the impact of losing the ability to
perform point-in-time-recovery.

DB instances have automated backups turned on by default (with a one day retention period). To turn off
automated backups, set the backup retention period to zero. After the load, you can turn backups back
on by setting the backup retention period to a nonzero value. To turn on or turn off backups, Amazon
RDS shuts the DB instance down and restarts it to turn MariaDB or MySQL logging on or off.


Use the AWS CLI modify-db-instance command to set the backup retention to zero and apply the
change immediately. Setting the retention period to zero requires a DB instance restart, so wait until the
restart has completed before proceeding.

For Linux, macOS, or Unix:

aws rds modify-db-instance \
--db-instance-identifier AcmeRDS \
--apply-immediately \
--backup-retention-period 0

For Windows:

aws rds modify-db-instance ^
--db-instance-identifier AcmeRDS ^
--apply-immediately ^
--backup-retention-period 0

You can check the status of your DB instance with the AWS CLI describe-db-instances command.
The following example displays the DB instance status of the AcmeRDS DB instance.

aws rds describe-db-instances --db-instance-identifier AcmeRDS \
--query "*[].{DBInstanceStatus:DBInstanceStatus}"

When the DB instance status is available, you're ready to proceed.

Step 5: Load the data


Use the mysqlimport utility to load the flat files into Amazon RDS. The following example tells
mysqlimport to load all of the files named "sales" with an extension starting with "part_". This is a
convenient way to load all of the files created in the "split" example.

Use the --compress option to minimize network traffic. The --fields-terminated-by=',' option is used for
CSV files, and the --local option specifies that the incoming data is located on the client. Without the --
local option, the Amazon RDS DB instance looks for the data on the database host, so always specify the
--local option. For the --host option, specify the DB instance endpoint of the RDS for MySQL DB instance.

In the following examples, replace master_user with the master username for your DB instance.

Replace hostname with the endpoint for your DB instance. An example of a DB instance endpoint is my-
db-instance.123456789012.us-west-2.rds.amazonaws.com.

For RDS for MySQL version 8.0.15 and higher, run the following statement before using the mysqlimport
utility.

GRANT SESSION_VARIABLES_ADMIN ON *.* TO master_user;

For Linux, macOS, or Unix:

mysqlimport --local \
--compress \
--user=master_user \
--password \
--host=hostname \
--fields-terminated-by=',' Acme sales.part_*

For Windows:


mysqlimport --local ^
--compress ^
--user=master_user ^
--password ^
--host=hostname ^
--fields-terminated-by="," Acme sales.part_*

For very large data loads, take additional DB snapshots periodically between loading files and note
which files have been loaded. If a problem occurs, you can easily resume from the point of the last DB
snapshot, avoiding lengthy reloads.
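
A minimal sketch of this checkpointing, reusing the placeholder names from the preceding examples, is the following shell loop. Each mysqlimport call prompts for the password, and in practice you might snapshot only every few files rather than after each one.

i=0
for f in sales.part_*; do
    mysqlimport --local --compress \
        --user=master_user --password \
        --host=hostname \
        --fields-terminated-by=',' Acme "$f"

    # Checkpoint after each loaded file so a failed run can resume from the last snapshot.
    i=$((i+1))
    aws rds create-db-snapshot \
        --db-instance-identifier AcmeRDS \
        --db-snapshot-identifier load-checkpoint-$i
done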

Step 6: Turn Amazon RDS automated backups back on


After the load is finished, turn Amazon RDS automated backups on by setting the backup retention
period back to its preload value. As noted earlier, Amazon RDS restarts the DB instance, so be prepared
for a brief outage.

The following example uses the AWS CLI modify-db-instance command to turn on automated
backups for the AcmeRDS DB instance and set the retention period to one day.

For Linux, macOS, or Unix:

aws rds modify-db-instance \
--db-instance-identifier AcmeRDS \
--backup-retention-period 1 \
--apply-immediately

For Windows:

aws rds modify-db-instance ^
--db-instance-identifier AcmeRDS ^
--backup-retention-period 1 ^
--apply-immediately


Working with MySQL replication in Amazon RDS


You usually use read replicas to configure replication between Amazon RDS DB instances. For general
information about read replicas, see Working with DB instance read replicas (p. 438). For specific
information about working with read replicas on Amazon RDS for MySQL, see Working with MySQL read
replicas (p. 1708).

You can use global transaction identifiers (GTIDs) for replication with RDS for MySQL. For more
information, see Using GTID-based replication for Amazon RDS for MySQL (p. 1719).

You can also set up replication between an RDS for MySQL DB instance and a MariaDB or MySQL instance
that is external to Amazon RDS. For information about configuring replication with an external source,
see Configuring binary log file position replication with an external source instance (p. 1724).

For any of these replication options, you can use row-based, statement-based, or mixed
replication. Row-based replication only replicates the changed rows that result from a SQL statement.
Statement-based replication replicates the entire SQL statement. Mixed replication uses statement-
based replication when possible, but switches to row-based replication when SQL statements that
are unsafe for statement-based replication are run. In most cases, mixed replication is recommended.
The binary log format of the DB instance determines whether replication is row-based, statement-
based, or mixed. For information about setting the binary log format, see Configuring MySQL binary
logging (p. 921).
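
For example, assuming the source DB instance uses a custom parameter group named mydbparametergroup, a call like the following sets mixed replication; binlog_format is a dynamic parameter, so the change can be applied immediately.

aws rds modify-db-parameter-group \
--db-parameter-group-name mydbparametergroup \
--parameters "ParameterName=binlog_format,ParameterValue=MIXED,ApplyMethod=immediate"
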
Note
You can configure replication to import databases from a MariaDB or MySQL instance
that is external to Amazon RDS, or to export databases to such instances. For more
information, see Importing data to an Amazon RDS MariaDB or MySQL database with
reduced downtime (p. 1690) and Exporting data from a MySQL DB instance by using
replication (p. 1728).

Topics
• Working with MySQL read replicas (p. 1708)
• Using GTID-based replication for Amazon RDS for MySQL (p. 1719)
• Configuring binary log file position replication with an external source instance (p. 1724)

Working with MySQL read replicas


Following, you can find specific information about working with read replicas on RDS for MySQL. For
general information about read replicas and instructions for using them, see Working with DB instance
read replicas (p. 438).

Topics
• Configuring read replicas with MySQL (p. 1709)
• Configuring replication filters with MySQL (p. 1709)
• Configuring delayed replication with MySQL (p. 1714)
• Updating read replicas with MySQL (p. 1716)
• Working with Multi-AZ read replica deployments with MySQL (p. 1716)
• Using cascading read replicas with RDS for MySQL (p. 1716)
• Monitoring MySQL read replicas (p. 1717)
• Starting and stopping replication with MySQL read replicas (p. 1718)
• Troubleshooting a MySQL read replica problem (p. 1718)


Configuring read replicas with MySQL


Before a MySQL DB instance can serve as a replication source, make sure to enable automatic backups
on the source DB instance. To do this, set the backup retention period to a value other than 0. This
requirement also applies to a read replica that is the source DB instance for another read replica.
Automatic backups are supported for read replicas running any version of MySQL. You can configure
replication based on binary log coordinates for a MySQL DB instance.
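
For example, the following call (the instance identifier is a placeholder) turns on automatic backups on the source by setting a seven-day retention period:

aws rds modify-db-instance \
--db-instance-identifier mysourceinstance \
--backup-retention-period 7 \
--apply-immediately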

On RDS for MySQL version 5.7.37 and higher MySQL 5.7 versions and RDS for MySQL 8.0.28 and
higher 8.0 versions, you can configure replication using global transaction identifiers (GTIDs). For more
information, see Using GTID-based replication for Amazon RDS for MySQL (p. 1719).

You can create up to 15 read replicas from one DB instance within the same Region. For replication to
operate effectively, each read replica should have the same amount of compute and storage resources as
the source DB instance. If you scale the source DB instance, also scale the read replicas.
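
For example, a call like the following creates a read replica of a source DB instance in the same AWS Region; the identifiers and instance class are placeholders:

aws rds create-db-instance-read-replica \
--db-instance-identifier myreadreplica \
--source-db-instance-identifier mysourceinstance \
--db-instance-class db.m5.large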

RDS for MySQL supports cascading read replicas. To learn how to configure cascading read replicas, see
Using cascading read replicas with RDS for MySQL (p. 1716).

You can run multiple read replica create and delete actions at the same time that reference the same
source DB instance. When you perform these actions, stay within the limit of 15 read replicas for each
source instance.

A read replica of a MySQL DB instance can't use a lower DB engine version than its source DB instance.

Preparing MySQL DB instances that use MyISAM


If your MySQL DB instance uses a nontransactional engine such as MyISAM, you need to perform the
following steps to successfully set up your read replica. These steps are required to make sure that the
read replica has a consistent copy of your data. These steps are not required if all of your tables use a
transactional engine such as InnoDB.

1. Stop all data manipulation language (DML) and data definition language (DDL) operations on non-
transactional tables in the source DB instance and wait for them to complete. SELECT statements can
continue running.
2. Flush and lock the tables in the source DB instance, as shown in the example following this list.
3. Create the read replica using one of the methods in the following sections.
4. Check the progress of the read replica creation using, for example, the DescribeDBInstances API
operation. Once the read replica is available, unlock the tables of the source DB instance and resume
normal database operations.
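
A minimal sketch of the statements for steps 2 and 4, run on the source DB instance, follows. Keep the session that takes the lock open until the read replica is available.

mysql> FLUSH TABLES WITH READ LOCK;    -- step 2: flush pending writes and block further changes

-- ... create the read replica and wait for it to become available ...

mysql> UNLOCK TABLES;                  -- step 4: release the lock and resume normal operations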

Configuring replication filters with MySQL


You can use replication filters to specify which databases and tables are replicated with a read replica.
Replication filters can include databases and tables in replication or exclude them from replication.

The following are some use cases for replication filters:

• To reduce the size of a read replica. With replication filtering, you can exclude the databases and tables
that aren't needed on the read replica.
• To exclude databases and tables from read replicas for security reasons.
• To replicate different databases and tables for specific use cases at different read replicas. For example,
you might use specific read replicas for analytics or sharding.
• For a DB instance that has read replicas in different AWS Regions, to replicate different databases or
tables in different AWS Regions.


Note
You can also use replication filters to specify which databases and tables are replicated
with a primary MySQL DB instance that is configured as a replica in an inbound replication
topology. For more information about this configuration, see Configuring binary log file position
replication with an external source instance (p. 1724).

Topics
• Setting replication filtering parameters for RDS for MySQL (p. 1710)
• Replication filtering limitations for RDS for MySQL (p. 1711)
• Replication filtering examples for RDS for MySQL (p. 1711)
• Viewing the replication filters for a read replica (p. 1713)

Setting replication filtering parameters for RDS for MySQL


To configure replication filters, set the following replication filtering parameters on the read replica:

• replicate-do-db – Replicate changes to the specified databases. When you set this parameter for a
read replica, only the databases specified in the parameter are replicated.
• replicate-ignore-db – Don't replicate changes to the specified databases. When the replicate-
do-db parameter is set for a read replica, this parameter isn't evaluated.
• replicate-do-table – Replicate changes to the specified tables. When you set this parameter for a
read replica, only the tables specified in the parameter are replicated. Also, when the replicate-do-
db or replicate-ignore-db parameter is set, make sure to include the database that includes the
specified tables in replication with the read replica.
• replicate-ignore-table – Don't replicate changes to the specified tables. When the replicate-
do-table parameter is set for a read replica, this parameter isn't evaluated.
• replicate-wild-do-table – Replicate tables based on the specified database and table
name patterns. The % and _ wildcard characters are supported. When the replicate-do-db or
replicate-ignore-db parameter is set, make sure to include the database that includes the
specified tables in replication with the read replica.
• replicate-wild-ignore-table – Don't replicate tables based on the specified database and table
name patterns. The % and _ wildcard characters are supported. When the replicate-do-table or
replicate-wild-do-table parameter is set for a read replica, this parameter isn't evaluated.

The parameters are evaluated in the order that they are listed. For more information about how these
parameters work, see the MySQL documentation:

• For general information, see Replica Server Options and Variables.


• For information about how database replication filtering parameters are evaluated, see Evaluation of
Database-Level Replication and Binary Logging Options.
• For information about how table replication filtering parameters are evaluated, see Evaluation of
Table-Level Replication Options.

By default, each of these parameters has an empty value. On each read replica, you can use these
parameters to set, change, and delete replication filters. When you set one of these parameters, separate
each filter from others with a comma.

You can use the % and _ wildcard characters in the replicate-wild-do-table and replicate-
wild-ignore-table parameters. The % wildcard matches any number of characters, and the _
wildcard matches only one character.

The binary logging format of the source DB instance is important for replication because it determines
the record of data changes. The setting of the binlog_format parameter determines whether the

replication is row-based or statement-based. For more information, see Configuring MySQL binary
logging (p. 921).
Note
All data definition language (DDL) statements are replicated as statements, regardless of the
binlog_format setting on the source DB instance.

Replication filtering limitations for RDS for MySQL


The following limitations apply to replication filtering for RDS for MySQL:

• Each replication filtering parameter has a 2,000-character limit.


• Commas aren't supported in replication filters.
• The MySQL --binlog-do-db and --binlog-ignore-db options for binary log filtering aren't
supported.
• Replication filtering doesn't support XA transactions.

For more information, see Restrictions on XA Transactions in the MySQL documentation.

Replication filtering examples for RDS for MySQL


To configure replication filtering for a read replica, modify the replication filtering parameters in the
parameter group associated with the read replica.
Note
You can't modify a default parameter group. If the read replica is using a default parameter
group, create a new parameter group and associate it with the read replica. For more
information on DB parameter groups, see Working with parameter groups (p. 347).

You can set parameters in a parameter group using the AWS Management Console, AWS CLI, or RDS API.
For information about setting parameters, see Modifying parameters in a DB parameter group (p. 352).
When you set parameters in a parameter group, all of the DB instances associated with the parameter
group use the parameter settings. If you set the replication filtering parameters in a parameter group,
make sure that the parameter group is associated only with read replicas. Leave the replication filtering
parameters empty for source DB instances.

The following examples set the parameters using the AWS CLI. These examples set ApplyMethod to
immediate so that the parameter changes occur immediately after the CLI command completes. If you
want a pending change to be applied after the read replica is rebooted, set ApplyMethod to pending-
reboot.

The following examples set replication filters:

• Including databases in replication


• Including tables in replication
• Including tables in replication with wildcard characters
• Excluding databases from replication
• Excluding tables from replication
• Excluding tables from replication using wildcard characters

Example Including databases in replication

The following example includes the mydb1 and mydb2 databases in replication.

For Linux, macOS, or Unix:


aws rds modify-db-parameter-group \
    --db-parameter-group-name myparametergroup \
    --parameters "ParameterName=replicate-do-db,ParameterValue='mydb1,mydb2',ApplyMethod=immediate"

For Windows:

aws rds modify-db-parameter-group ^
    --db-parameter-group-name myparametergroup ^
    --parameters "ParameterName=replicate-do-db,ParameterValue='mydb1,mydb2',ApplyMethod=immediate"

Example Including tables in replication

The following example includes the table1 and table2 tables in database mydb1 in replication.

For Linux, macOS, or Unix:

aws rds modify-db-parameter-group \
    --db-parameter-group-name myparametergroup \
    --parameters "ParameterName=replicate-do-table,ParameterValue='mydb1.table1,mydb1.table2',ApplyMethod=immediate"

For Windows:

aws rds modify-db-parameter-group ^
    --db-parameter-group-name myparametergroup ^
    --parameters "ParameterName=replicate-do-table,ParameterValue='mydb1.table1,mydb1.table2',ApplyMethod=immediate"

Example Including tables in replication using wildcard characters

The following example includes tables with names that begin with order and return in database mydb
in replication.

For Linux, macOS, or Unix:

aws rds modify-db-parameter-group \
    --db-parameter-group-name myparametergroup \
    --parameters "ParameterName=replicate-wild-do-table,ParameterValue='mydb.order%,mydb.return%',ApplyMethod=immediate"

For Windows:

aws rds modify-db-parameter-group ^
    --db-parameter-group-name myparametergroup ^
    --parameters "ParameterName=replicate-wild-do-table,ParameterValue='mydb.order%,mydb.return%',ApplyMethod=immediate"

Example Excluding databases from replication

The following example excludes the mydb5 and mydb6 databases from replication.

For Linux, macOS, or Unix:


aws rds modify-db-parameter-group \
    --db-parameter-group-name myparametergroup \
    --parameters "ParameterName=replicate-ignore-db,ParameterValue='mydb5,mydb6',ApplyMethod=immediate"

For Windows:

aws rds modify-db-parameter-group ^
    --db-parameter-group-name myparametergroup ^
    --parameters "ParameterName=replicate-ignore-db,ParameterValue='mydb5,mydb6',ApplyMethod=immediate"

Example Excluding tables from replication

The following example excludes tables table1 in database mydb5 and table2 in database mydb6 from
replication.

For Linux, macOS, or Unix:

aws rds modify-db-parameter-group \
    --db-parameter-group-name myparametergroup \
    --parameters "ParameterName=replicate-ignore-table,ParameterValue='mydb5.table1,mydb6.table2',ApplyMethod=immediate"

For Windows:

aws rds modify-db-parameter-group ^
    --db-parameter-group-name myparametergroup ^
    --parameters "ParameterName=replicate-ignore-table,ParameterValue='mydb5.table1,mydb6.table2',ApplyMethod=immediate"

Example Excluding tables from replication using wildcard characters

The following example excludes tables with names that begin with order and return in database
mydb7 from replication.

For Linux, macOS, or Unix:

aws rds modify-db-parameter-group \
    --db-parameter-group-name myparametergroup \
    --parameters "ParameterName=replicate-wild-ignore-table,ParameterValue='mydb7.order%,mydb7.return%',ApplyMethod=immediate"

For Windows:

aws rds modify-db-parameter-group ^
    --db-parameter-group-name myparametergroup ^
    --parameters "ParameterName=replicate-wild-ignore-table,ParameterValue='mydb7.order%,mydb7.return%',ApplyMethod=immediate"

Viewing the replication filters for a read replica


You can view the replication filters for a read replica in the following ways (an AWS CLI sketch
follows this list):


• Check the settings of the replication filtering parameters in the parameter group associated with the
read replica.

For instructions, see Viewing parameter values for a DB parameter group (p. 359).
• In a MySQL client, connect to the read replica and run the SHOW REPLICA STATUS statement.

In the output, the following fields show the replication filters for the read replica:
• Replicate_Do_DB
• Replicate_Ignore_DB
• Replicate_Do_Table
• Replicate_Ignore_Table
• Replicate_Wild_Do_Table
• Replicate_Wild_Ignore_Table

For more information about these fields, see Checking Replication Status in the MySQL
documentation.
Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA STATUS.
If you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.
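
To check the parameter group settings from the command line (the first method in the preceding
list), you can also use the AWS CLI. The following is a minimal sketch for Linux, macOS, or Unix;
myparametergroup is the placeholder parameter group name reused from the earlier examples, and the
--query expression simply selects parameters whose names begin with replicate.

aws rds describe-db-parameters \
    --db-parameter-group-name myparametergroup \
    --query 'Parameters[?starts_with(ParameterName, `replicate`)].[ParameterName,ParameterValue]' \
    --output table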

Configuring delayed replication with MySQL


You can use delayed replication as a strategy for disaster recovery. With delayed replication, you specify
the minimum amount of time, in seconds, to delay replication from the source to the read replica. In the
event of a disaster, such as a table deleted unintentionally, you complete the following steps to recover
from the disaster quickly:

• Stop replication to the read replica before the change that caused the disaster is sent to it.

Use the mysql.rds_stop_replication (p. 1782) stored procedure to stop replication.


• Start replication and specify that replication stops automatically at a log file location.

You specify a location just before the disaster using the mysql.rds_start_replication_until (p. 1780)
stored procedure.
• Promote the read replica to be the new source DB instance by using the instructions in Promoting a
read replica to be a standalone DB instance (p. 447).

Note

• On RDS for MySQL 8.0, delayed replication is supported for MySQL 8.0.28 and higher. On RDS
for MySQL 5.7, delayed replication is supported for MySQL 5.7.37 and higher.
• Use stored procedures to configure delayed replication. You can't configure delayed
replication with the AWS Management Console, the AWS CLI, or the Amazon RDS API.
• On RDS for MySQL 5.7.37 and higher MySQL 5.7 versions and RDS for MySQL 8.0.28
and higher 8.0 versions, you can use replication based on global transaction identifiers
(GTIDs) in a delayed replication configuration. If you use GTID-based replication, use
the mysql.rds_start_replication_until_gtid (p. 1781) stored procedure instead of the
mysql.rds_start_replication_until (p. 1780) stored procedure. For more information
about GTID-based replication, see Using GTID-based replication for Amazon RDS for
MySQL (p. 1719).

Topics


• Configuring delayed replication during read replica creation (p. 1715)


• Modifying delayed replication for an existing read replica (p. 1715)
• Setting a location to stop replication to a read replica (p. 1715)
• Promoting a read replica (p. 1716)

Configuring delayed replication during read replica creation


To configure delayed replication for any future read replica created from a DB instance, run the
mysql.rds_set_configuration (p. 1758) stored procedure with the target delay parameter.

To configure delayed replication during read replica creation

1. Using a MySQL client, connect to the MySQL DB instance to be the source for read replicas as the
master user.
2. Run the mysql.rds_set_configuration (p. 1758) stored procedure with the target delay
parameter.

For example, run the following stored procedure to specify that replication is delayed by at least one
hour (3,600 seconds) for any read replica created from the current DB instance.

call mysql.rds_set_configuration('target delay', 3600);

Note
After running this stored procedure, any read replica you create using the AWS CLI or
Amazon RDS API is configured with replication delayed by the specified number of seconds.

Modifying delayed replication for an existing read replica


To modify delayed replication for an existing read replica, run the mysql.rds_set_source_delay (p. 1777)
stored procedure.

To modify delayed replication for an existing read replica

1. Using a MySQL client, connect to the read replica as the master user.
2. Use the mysql.rds_stop_replication (p. 1782) stored procedure to stop replication.
3. Run the mysql.rds_set_source_delay (p. 1777) stored procedure.

For example, run the following stored procedure to specify that replication to the read replica is
delayed by at least one hour (3600 seconds).

call mysql.rds_set_source_delay(3600);

4. Use the mysql.rds_start_replication (p. 1780) stored procedure to start replication.

Setting a location to stop replication to a read replica


After stopping replication to the read replica, you can start replication and then stop it at a specified
binary log file location using the mysql.rds_start_replication_until (p. 1780) stored procedure.

To start replication to a read replica and stop replication at a specific location

1. Using a MySQL client, connect to the source MySQL DB instance as the master user.
2. Run the mysql.rds_start_replication_until (p. 1780) stored procedure.


The following example initiates replication and replicates changes until it reaches location 120 in
the mysql-bin-changelog.000777 binary log file. In a disaster recovery scenario, assume that
location 120 is just before the disaster.

call mysql.rds_start_replication_until(
'mysql-bin-changelog.000777',
120);

Replication stops automatically when the stop point is reached. The following RDS event is generated:
Replication has been stopped since the replica reached the stop point specified
by the rds_start_replication_until stored procedure.

Promoting a read replica


After replication is stopped, in a disaster recovery scenario, you can promote a read replica to be the new
source DB instance. For information about promoting a read replica, see Promoting a read replica to be a
standalone DB instance (p. 447).

Updating read replicas with MySQL


Read replicas are designed to support read queries, but you might need occasional updates. For example,
you might need to add an index to optimize the specific types of queries accessing the replica.

Although you can enable updates by setting the read_only parameter to 0 in the DB parameter group
for the read replica, we recommend that you don't do so because it can cause problems if the read
replica becomes incompatible with the source DB instance. For maintenance operations, we recommend
that you use blue/green deployments. For more information, see Using Blue/Green Deployments for
database updates (p. 566).

If you disable read-only on a read replica, change the value of the read_only parameter back to 1 as
soon as possible.
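
If you do temporarily disable read-only, the change is an ordinary parameter group edit, as in the
following AWS CLI sketch for Linux, macOS, or Unix. Here replica-params is a placeholder name for
the DB parameter group attached to the read replica; run the same command with ParameterValue=1 to
restore the read-only setting as soon as you finish.

aws rds modify-db-parameter-group \
    --db-parameter-group-name replica-params \
    --parameters "ParameterName=read_only,ParameterValue=0,ApplyMethod=immediate"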

Working with Multi-AZ read replica deployments with MySQL


You can create a read replica from either single-AZ or Multi-AZ DB instance deployments. You use Multi-
AZ deployments to improve the durability and availability of critical data, but you can't use the Multi-AZ
secondary to serve read-only queries. Instead, you can create read replicas from high-traffic Multi-AZ DB
instances to offload read-only queries. If the source instance of a Multi-AZ deployment fails over to the
secondary, any associated read replicas automatically switch to use the secondary (now primary) as their
replication source. For more information, see Configuring and managing a Multi-AZ deployment (p. 492).

You can create a read replica as a Multi-AZ DB instance. Amazon RDS creates a standby of your replica in
another Availability Zone for failover support for the replica. Creating your read replica as a Multi-AZ DB
instance is independent of whether the source database is a Multi-AZ DB instance.
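
For example, the following AWS CLI sketch creates a read replica as a Multi-AZ DB instance. The
instance identifiers are placeholders, and other options such as the DB instance class are omitted.

aws rds create-db-instance-read-replica \
    --db-instance-identifier myreadreplica \
    --source-db-instance-identifier mydbinstance \
    --multi-az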

Using cascading read replicas with RDS for MySQL


RDS for MySQL supports cascading read replicas. With cascading read replicas, you can scale reads
without adding overhead to your source RDS for MySQL DB instance.

With cascading read replicas, your RDS for MySQL DB instance sends data to the first read replica in the
chain. That read replica then sends data to the second replica in the chain, and so on. The end result is
that all read replicas in the chain have the changes from the RDS for MySQL DB instance, but without
that replication overhead falling solely on the source DB instance.


You can create a series of up to three read replicas in a chain from a source RDS for MySQL DB instance.
For example, suppose that you have an RDS for MySQL DB instance, mysql-main. You can do the
following:

• Starting with mysql-main, create the first read replica in the chain, read-replica-1.
• Next, from read-replica-1, create the next read replica in the chain, read-replica-2.
• Finally, from read-replica-2, create the third read replica in the chain, read-replica-3.

You can't create another read replica beyond this third cascading read replica in the series for mysql-
main. A complete series of instances from an RDS for MySQL source DB instance through to the end of a
series of cascading read replicas can consist of at most four DB instances.
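
A minimal AWS CLI sketch of the first two links in that chain follows, reusing the instance
identifiers from the example. Region, instance class, and other options are omitted.

# First hop: a replica of the source DB instance
aws rds create-db-instance-read-replica \
    --db-instance-identifier read-replica-1 \
    --source-db-instance-identifier mysql-main

# Second hop: a replica of the first replica
aws rds create-db-instance-read-replica \
    --db-instance-identifier read-replica-2 \
    --source-db-instance-identifier read-replica-1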

For cascading read replicas to work, each source RDS for MySQL DB instance must have automated
backups turned on. To turn on automatic backups on a read replica, first create the read replica, and
then modify the read replica to turn on automatic backups. For more information, see Creating a read
replica (p. 445).

As with any read replica, you can promote a read replica that's part of a cascade. Promoting a read
replica from within a chain of read replicas removes that replica from the chain. For example, suppose
that you want to move some of the workload from your mysql-main DB instance to a new instance for
use by the accounting department only. Assuming the chain of three read replicas from the example, you
decide to promote read-replica-2. The chain is affected as follows:

• Promoting read-replica-2 removes it from the replication chain.


• It is now a full read/write DB instance.
• It continues replicating to read-replica-3, just as it was doing before promotion.
• Your mysql-main continues replicating to read-replica-1.

For more information about promoting read replicas, see Promoting a read replica to be a standalone DB
instance (p. 447).

Monitoring MySQL read replicas


For MySQL read replicas, you can monitor replication lag in Amazon CloudWatch by viewing the Amazon
RDS ReplicaLag metric. The ReplicaLag metric reports the value of the Seconds_Behind_Master
field of the SHOW REPLICA STATUS command.
Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA STATUS. If
you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.

Common causes for replication lag for MySQL are the following:

• A network outage.
• Writing to tables that have different indexes on a read replica. If the read_only parameter is set to 0
on the read replica, replication can break if the read replica becomes incompatible with the source DB
instance. After you've performed maintenance tasks on the read replica, we recommend that you set
the read_only parameter back to 1.
• Using a nontransactional storage engine such as MyISAM. Replication is only supported for the InnoDB
storage engine on MySQL.

When the ReplicaLag metric reaches 0, the replica has caught up to the source DB instance. If the
ReplicaLag metric returns -1, then replication is currently not active. ReplicaLag = -1 is equivalent to
Seconds_Behind_Master = NULL.
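
Outside the console, one way to check the metric is with the AWS CLI, as in the following minimal
sketch for Linux, macOS, or Unix. The instance identifier and the time range are placeholders.

aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name ReplicaLag \
    --dimensions Name=DBInstanceIdentifier,Value=myreadreplica \
    --start-time 2023-06-01T00:00:00Z \
    --end-time 2023-06-01T01:00:00Z \
    --period 60 \
    --statistics Average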


Starting and stopping replication with MySQL read replicas


You can stop and restart the replication process on an Amazon RDS DB instance by calling the system
stored procedures mysql.rds_stop_replication (p. 1782) and mysql.rds_start_replication (p. 1780).
You can do this when replicating between two Amazon RDS instances for long-running operations
such as creating large indexes. You also need to stop and start replication when importing or
exporting databases. For more information, see Importing data to an Amazon RDS MariaDB or MySQL
database with reduced downtime (p. 1690) and Exporting data from a MySQL DB instance by using
replication (p. 1728).
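
For example, a typical sequence on the read replica looks like the following sketch, run in a MySQL
client as the master user.

CALL mysql.rds_stop_replication;

-- Perform the long-running operation, such as creating a large index, or run the import or export.

CALL mysql.rds_start_replication;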

If replication is stopped for more than 30 consecutive days, either manually or due to a replication
error, Amazon RDS terminates replication between the source DB instance and all read replicas. It does
so to prevent increased storage requirements on the source DB instance and long failover times. The
read replica DB instance is still available. However, replication can't be resumed because the binary logs
required by the read replica are deleted from the source DB instance after replication is terminated. You
can create a new read replica for the source DB instance to reestablish replication.

Troubleshooting a MySQL read replica problem


For MySQL DB instances, in some cases read replicas present replication errors or data inconsistencies
(or both) between the read replica and its source DB instance. This problem occurs when some binary
log (binlog) events or InnoDB redo logs aren't flushed during a failure of the read replica or the
source DB instance. In these cases, manually delete and recreate the read replicas. You can reduce
the chance of this happening by setting the following parameter values: sync_binlog=1 and
innodb_flush_log_at_trx_commit=1. These settings might reduce performance, so test their
impact before implementing the changes in a production environment.
Warning
In the parameter group associated with the source DB instance, we recommend keeping these
parameter values: sync_binlog=1 and innodb_flush_log_at_trx_commit=1. These
parameters are dynamic. If you don't want to keep these settings permanently, we recommend
setting them temporarily before running any operation on the source DB instance that might cause
it to restart. These operations include, but are not limited to, rebooting, rebooting with failover,
upgrading the database version, and changing the DB instance class or its storage. The same
recommendation applies to creating new read replicas for the source DB instance.
Failure to follow this guidance increases the risk of read replicas presenting replication errors or
data inconsistencies (or both) between the read replica and its source DB instance.
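
For reference, setting both parameters is an ordinary parameter group change, as in the following
AWS CLI sketch. Here source-params is a placeholder name for the DB parameter group attached to the
source DB instance; both parameters are dynamic, so the change can be applied immediately.

aws rds modify-db-parameter-group \
    --db-parameter-group-name source-params \
    --parameters "ParameterName=sync_binlog,ParameterValue=1,ApplyMethod=immediate" \
                 "ParameterName=innodb_flush_log_at_trx_commit,ParameterValue=1,ApplyMethod=immediate"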

The replication technologies for MySQL are asynchronous. Because they are asynchronous, occasional
BinLogDiskUsage increases on the source DB instance and ReplicaLag on the read replica are to be
expected. For example, a high volume of write operations to the source DB instance can occur in parallel.
In contrast, write operations to the read replica are serialized using a single I/O thread, which can lead to
a lag between the source instance and read replica. For more information about read-only replicas in the
MySQL documentation, see Replication implementation details.

You can do several things to reduce the lag between updates to a source DB instance and the subsequent
updates to the read replica, such as the following:

• Sizing a read replica to have a storage size and DB instance class comparable to the source DB
instance.
• Ensuring that parameter settings in the DB parameter groups used by the source DB instance and
the read replica are compatible. For more information and an example, see the discussion of the
max_allowed_packet parameter later in this section.

Amazon RDS monitors the replication status of your read replicas and updates the Replication State
field of the read replica instance to Error if replication stops for any reason. An example might be if
DML queries run on your read replica conflict with the updates made on the source DB instance.


You can review the details of the associated error thrown by the MySQL engine by viewing the
Replication Error field. Events that indicate the status of the read replica are also generated,
including RDS-EVENT-0045 (p. 887), RDS-EVENT-0046 (p. 888), and RDS-EVENT-0047 (p. 883). For
more information about events and subscribing to events, see Working with Amazon RDS event
notification (p. 855). If a MySQL error message is returned, review the error number in the MySQL error
message documentation.

One common issue that can cause replication errors is when the value for the max_allowed_packet
parameter for a read replica is less than the max_allowed_packet parameter for the source DB
instance. The max_allowed_packet parameter is a custom parameter that you can set in a DB
parameter group. You use max_allowed_packet to specify the maximum size of DML code that can
be run on the database. In some cases, the max_allowed_packet value in the DB parameter group
associated with a read replica is smaller than the max_allowed_packet value in the DB parameter
group associated with the source DB instance. In these cases, the replication process can throw the error
Packet bigger than 'max_allowed_packet' bytes and stop replication. To fix the error, have
the source DB instance and read replica use DB parameter groups with the same max_allowed_packet
parameter values.
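
For example, the following AWS CLI sketch sets the same max_allowed_packet value in both parameter
groups. The value 67108864 (64 MB) is chosen only for illustration, and source-params and
replica-params are placeholder names for the groups attached to the source DB instance and the read
replica.

aws rds modify-db-parameter-group \
    --db-parameter-group-name source-params \
    --parameters "ParameterName=max_allowed_packet,ParameterValue=67108864,ApplyMethod=immediate"

aws rds modify-db-parameter-group \
    --db-parameter-group-name replica-params \
    --parameters "ParameterName=max_allowed_packet,ParameterValue=67108864,ApplyMethod=immediate"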

Other common situations that can cause replication errors include the following:

• Writing to tables on a read replica. In some cases, you might create indexes on a read replica that are
different from the indexes on the source DB instance. If you do, set the read_only parameter to 0
to create the indexes. If you write to tables on the read replica, it might break replication if the read
replica becomes incompatible with the source DB instance. After you perform maintenance tasks on
the read replica, we recommend that you set the read_only parameter back to 1.
• Using a non-transactional storage engine such as MyISAM. Read replicas require a transactional
storage engine. Replication is only supported for the InnoDB storage engine on MySQL.
• Using unsafe nondeterministic queries such as SYSDATE(). For more information, see Determination
of safe and unsafe statements in binary logging.

If you decide that you can safely skip an error, you can follow the steps described in the section Skipping
the current replication error (p. 1744). Otherwise, you can first delete the read replica. Then you create
an instance using the same DB instance identifier so that the endpoint remains the same as that of your
old read replica. If a replication error is fixed, the Replication State changes to replicating.

Using GTID-based replication for Amazon RDS for MySQL
Following, you can learn how to use global transaction identifiers (GTIDs) with binary log (binlog)
replication among Amazon RDS for MySQL DB instances.

If you use binlog replication and aren't familiar with GTID-based replication with MySQL, see Replication
with global transaction identifiers in the MySQL documentation for background.

GTID-based replication is supported for all RDS for MySQL 5.7 versions, and RDS for MySQL version
8.0.26 and higher MySQL 8.0 versions. All MySQL DB instances in a replication configuration must meet
this requirement.

Topics
• Overview of global transaction identifiers (GTIDs) (p. 1720)
• Parameters for GTID-based replication (p. 1720)
• Configuring GTID-based replication for new read replicas (p. 1721)
• Configuring GTID-based replication for existing read replicas (p. 1721)


• Disabling GTID-based replication for a MySQL DB instance with read replicas (p. 1723)

Overview of global transaction identifiers (GTIDs)


Global transaction identifiers (GTIDs) are unique identifiers generated for committed MySQL transactions.
You can use GTIDs to make binlog replication simpler and easier to troubleshoot.

MySQL uses two different types of transactions for binlog replication:

• GTID transactions – Transactions that are identified by a GTID.


• Anonymous transactions – Transactions that don't have a GTID assigned.

In a replication configuration, GTIDs are unique across all DB instances. GTIDs simplify replication
configuration because when you use them, you don't have to refer to log file positions. GTIDs also make
it easier to track replicated transactions and determine whether the source instance and replicas are
consistent.

You can use GTID-based replication to replicate data with RDS for MySQL read replicas. You can
configure GTID-based replication when you are creating new read replicas, or you can convert existing
read replicas to use GTID-based replication.

You can also use GTID-based replication in a delayed replication configuration with RDS for MySQL. For
more information, see Configuring delayed replication with MySQL (p. 1714).

Parameters for GTID-based replication


Use the following parameters to configure GTID-based replication.

Parameter: gtid_mode
Valid values: OFF, OFF_PERMISSIVE, ON_PERMISSIVE, ON
Description:
• OFF specifies that new transactions are anonymous transactions (that is, don't have GTIDs), and
  a transaction must be anonymous to be replicated.
• OFF_PERMISSIVE specifies that new transactions are anonymous transactions, but all transactions
  can be replicated.
• ON_PERMISSIVE specifies that new transactions are GTID transactions, but all transactions can be
  replicated.
• ON specifies that new transactions are GTID transactions, and a transaction must be a GTID
  transaction to be replicated.

Parameter: enforce_gtid_consistency
Valid values: OFF, ON, WARN
Description:
• OFF allows transactions to violate GTID consistency.
• ON prevents transactions from violating GTID consistency.
• WARN allows transactions to violate GTID consistency but generates a warning when a violation
  occurs.


Note
In the AWS Management Console, the gtid_mode parameter appears as gtid-mode.

For GTID-based replication, use these settings for the parameter group for your DB instance or read
replica:

• ON and ON_PERMISSIVE apply only to outgoing replication from an RDS DB instance. Both of these
values cause your RDS DB instance to use GTIDs for transactions that are replicated. ON requires that
the target database also use GTID-based replication. ON_PERMISSIVE makes GTID-based replication
optional on the target database.
• OFF_PERMISSIVE, if set, means that your RDS DB instances can accept incoming replication from
a source database. They can do this regardless of whether the source database uses GTID-based
replication.
• OFF, if set, means that your RDS DB instance only accepts incoming replication from source databases
that don't use GTID-based replication.

For more information about parameter groups, see Working with parameter groups (p. 347).

Configuring GTID-based replication for new read replicas


When GTID-based replication is enabled for an RDS for MySQL DB instance, GTID-based replication is
configured automatically for read replicas of the DB instance.

To enable GTID-based replication for new read replicas

1. Make sure that the parameter group associated with the DB instance has the following parameter
settings:

• gtid_mode – ON or ON_PERMISSIVE
• enforce_gtid_consistency – ON

For more information about setting configuration parameters using parameter groups, see Working
with parameter groups (p. 347).
2. If you changed the parameter group of the DB instance, reboot the DB instance. For more
information on how to do so, see Rebooting a DB instance (p. 436).
3. Create one or more read replicas of the DB instance. For more information on how to do so, see
Creating a read replica (p. 445).

Amazon RDS attempts to establish GTID-based replication between the MySQL DB instance and the read
replicas using the MASTER_AUTO_POSITION. If the attempt fails, Amazon RDS uses log file positions for
replication with the read replicas. For more information about the MASTER_AUTO_POSITION, see GTID
auto-positioning in the MySQL documentation.
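
For reference, step 1 is an ordinary parameter group change, as in the following AWS CLI sketch.
Here source-params is a placeholder group name; as noted earlier, the gtid_mode parameter appears
as gtid-mode in RDS parameter groups, and the change takes effect at the reboot in step 2.

aws rds modify-db-parameter-group \
    --db-parameter-group-name source-params \
    --parameters "ParameterName=gtid-mode,ParameterValue=ON,ApplyMethod=pending-reboot" \
                 "ParameterName=enforce_gtid_consistency,ParameterValue=ON,ApplyMethod=pending-reboot"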

Configuring GTID-based replication for existing read replicas


For an existing MySQL DB instance with read replicas that doesn't use GTID-based replication, you can
configure GTID-based replication between the DB instance and the read replicas.

To enable GTID-based replication for existing read replicas

1. If the DB instance or any read replica is using an 8.0 version of RDS for MySQL version lower than
8.0.26, upgrade the DB instance or read replica to 8.0.26 or a higher MySQL 8.0 version. All RDS for
MySQL 5.7 versions support GTID-based replication.

For more information, see Upgrading the MySQL DB engine (p. 1664).


2. (Optional) Reset the GTID parameters and test the behavior of the DB instance and read replicas:

a. Make sure that the parameter group associated with the DB instance and each read replica has
the enforce_gtid_consistency parameter set to WARN.

For more information about setting configuration parameters using parameter groups, see
Working with parameter groups (p. 347).
b. If you changed the parameter group of the DB instance, reboot the DB instance. If you changed
the parameter group for a read replica, reboot the read replica.

For more information, see Rebooting a DB instance (p. 436).


c. Run your DB instance and read replicas with your normal workload and monitor the log files.

If you see warnings about GTID-incompatible transactions, adjust your application so that it
only uses GTID-compatible features. Make sure that the DB instance is not generating any
warnings about GTID-incompatible transactions before proceeding to the next step.
3. Reset the GTID parameters for GTID-based replication that allows anonymous transactions until the
read replicas have processed all of them.

a. Make sure that the parameter group associated with the DB instance and each read replica has
the following parameter settings:

• gtid_mode – ON_PERMISSIVE
• enforce_gtid_consistency – ON
b. If you changed the parameter group of the DB instance, reboot the DB instance. If you changed
the parameter group for a read replica, reboot the read replica.
4. Wait for all of your anonymous transactions to be replicated. To check that these are replicated, do
the following:

a. Run the following statement on your source DB instance.

SHOW MASTER STATUS;

Note the values in the File and Position columns.


b. On each read replica, use the file and position information from its source instance in the
previous step to run the following query.

SELECT MASTER_POS_WAIT('file', position);

For example, if the file name is mysql-bin-changelog.000031 and the position is 107, run
the following statement.

SELECT MASTER_POS_WAIT('mysql-bin-changelog.000031', 107);

If the read replica is past the specified position, the query returns immediately. Otherwise, the
function waits. Proceed to the next step when the query returns for all read replicas.
5. Reset the GTID parameters for GTID-based replication only.

a. Make sure that the parameter group associated with the DB instance and each read replica has
the following parameter settings:

• gtid_mode – ON
• enforce_gtid_consistency – ON
b. Reboot the DB instance and each read replica.

6. On each read replica, run the following procedure.

CALL mysql.rds_set_master_auto_position(1);

Disabling GTID-based replication for a MySQL DB instance with read replicas
You can disable GTID-based replication for a MySQL DB instance with read replicas.

To disable GTID-based replication for a MySQL DB instance with read replicas

1. On each read replica, run whichever of the following procedures is available for your RDS for
MySQL version.

CALL mysql.rds_set_master_auto_position(0);

CALL mysql.rds_set_source_auto_position(0);

2. Reset the gtid_mode to ON_PERMISSIVE.

a. Make sure that the parameter group associated with the MySQL DB instance and each read
replica has gtid_mode set to ON_PERMISSIVE.

For more information about setting configuration parameters using parameter groups, see
Working with parameter groups (p. 347).
b. Reboot the MySQL DB instance and each read replica. For more information about rebooting,
see Rebooting a DB instance (p. 436).
3. Reset the gtid_mode to OFF_PERMISSIVE:

a. Make sure that the parameter group associated with the MySQL DB instance and each read
replica has gtid_mode set to OFF_PERMISSIVE.
b. Reboot the MySQL DB instance and each read replica.
4. Wait for all of the GTID transactions to be applied on all of the read replicas. To check that these are
applied, do the following:

a. On the MySQL DB instance, run the SHOW MASTER STATUS command.

Your output should be similar to the following.

File Position
------------------------------------
mysql-bin-changelog.000031 107
------------------------------------

Note the file and position in your output.


b. On each read replica, use the file and position information from its source instance in the
previous step to run the following query.

SELECT MASTER_POS_WAIT('file', position);

For example, if the file name is mysql-bin-changelog.000031 and the position is 107, run
the following statement.


SELECT MASTER_POS_WAIT('mysql-bin-changelog.000031', 107);

If the read replica is past the specified position, the query returns immediately. Otherwise, the
function waits. When the query returns for all read replicas, go to the next step.
5. Reset the GTID parameters to disable GTID-based replication:

a. Make sure that the parameter group associated with the MySQL DB instance and each read
replica has the following parameter settings:

• gtid_mode – OFF
• enforce_gtid_consistency – OFF
b. Reboot the MySQL DB instance and each read replica.

Configuring binary log file position replication with an external source instance
You can set up replication between an RDS for MySQL or MariaDB DB instance and a MySQL or MariaDB
instance that is external to Amazon RDS using binary log file replication.

Topics
• Before you begin (p. 1724)
• Configuring binary log file position replication with an external source instance (p. 1724)

Before you begin


You can configure replication using the binary log file position of replicated transactions.

The permissions required to start replication on an Amazon RDS DB instance are restricted and not
available to your Amazon RDS master user. Because of this, make sure that you use the Amazon RDS
mysql.rds_set_external_master (p. 1769) and mysql.rds_start_replication (p. 1780) commands to set up
replication between your live database and your Amazon RDS database.

To set the binary logging format for a MySQL or MariaDB database, update the binlog_format
parameter. If your DB instance uses the default DB instance parameter group, create a new DB parameter
group to modify binlog_format settings. We recommend that you use the default setting for
binlog_format, which is MIXED. However, you can also set binlog_format to ROW or STATEMENT if
you need a specific binary log (binlog) format. Reboot your DB instance for the change to take effect.
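
For example, the following AWS CLI sketch sets binlog_format in a custom parameter group
(external-repl-params is a placeholder name); the change takes effect at the reboot mentioned above.

aws rds modify-db-parameter-group \
    --db-parameter-group-name external-repl-params \
    --parameters "ParameterName=binlog_format,ParameterValue=MIXED,ApplyMethod=pending-reboot"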

For information about setting the binlog_format parameter, see Configuring MySQL binary
logging (p. 921). For information about the implications of different MySQL replication types,
see Advantages and disadvantages of statement-based and row-based replication in the MySQL
documentation.

Configuring binary log file position replication with an external source instance
Follow these guidelines when you set up an external source instance and a replica on Amazon RDS:

• Monitor failover events for the Amazon RDS DB instance that is your replica. If a failover occurs,
then the DB instance that is your replica might be recreated on a new host with a different network
address. For information on how to monitor failover events, see Working with Amazon RDS event
notification (p. 855).


• Maintain the binlogs on your source instance until you have verified that they have been applied to
the replica. This maintenance makes sure that you can restore your source instance in the event of a
failure.
• Turn on automated backups on your Amazon RDS DB instance. Turning on automated backups makes
sure that you can restore your replica to a particular point in time if you need to re-synchronize your
source instance and replica. For information on backups and point-in-time restore, see Backing up and
restoring (p. 590).

To configure binary log file replication with an external source instance

1. Make the source MySQL or MariaDB instance read-only.

mysql> FLUSH TABLES WITH READ LOCK;


mysql> SET GLOBAL read_only = ON;

2. Run the SHOW MASTER STATUS command on the source MySQL or MariaDB instance to determine
the binlog location.

You receive output similar to the following example.

File Position
------------------------------------
mysql-bin-changelog.000031 107
------------------------------------

3. Copy the database from the external instance to the Amazon RDS DB instance using mysqldump.
For very large databases, you might want to use the procedure in Importing data to an Amazon RDS
MariaDB or MySQL database with reduced downtime (p. 1690).

For Linux, macOS, or Unix:

mysqldump --databases database_name \
    --single-transaction \
    --compress \
    --order-by-primary \
    -u local_user \
    -plocal_password | mysql \
    --host=hostname \
    --port=3306 \
    -u RDS_user_name \
    -pRDS_password

For Windows:

mysqldump --databases database_name ^
    --single-transaction ^
    --compress ^
    --order-by-primary ^
    -u local_user ^
    -plocal_password | mysql ^
    --host=hostname ^
    --port=3306 ^
    -u RDS_user_name ^
    -pRDS_password

Note
Make sure that there isn't a space between the -p option and the entered password.


To specify the host name, user name, port, and password to connect to your Amazon RDS DB
instance, use the --host, --user (-u), --port and -p options in the mysql command. The
host name is the Domain Name Service (DNS) name from the Amazon RDS DB instance endpoint,
for example myinstance.123456789012.us-east-1.rds.amazonaws.com. You can find the
endpoint value in the instance details in the AWS Management Console.
4. Make the source MySQL or MariaDB instance writeable again.

mysql> SET GLOBAL read_only = OFF;


mysql> UNLOCK TABLES;

For more information on making backups for use with replication, see the MySQL documentation.
5. In the AWS Management Console, add the IP address of the server that hosts the external database
to the virtual private cloud (VPC) security group for the Amazon RDS DB instance. For more
information on modifying a VPC security group, see Security groups for your VPC in the Amazon
Virtual Private Cloud User Guide.

The IP address can change when the following conditions are met:

• You are using a public IP address for communication between the external source instance and the
DB instance.
• The external source instance was stopped and restarted.

If these conditions are met, verify the IP address before adding it.

You might also need to configure your local network to permit connections from the IP address of
your Amazon RDS DB instance. You do this so that your local network can communicate with your
external MySQL or MariaDB instance. To find the IP address of the Amazon RDS DB instance, use the
host command.

host db_instance_endpoint

The host name is the DNS name from the Amazon RDS DB instance endpoint.
6. Using the client of your choice, connect to the external instance and create a user to use for
replication. Use this account solely for replication and restrict it to your domain to improve security.
The following is an example.

CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED BY 'password';

Note
Specify a password other than the one shown here as a security best practice.
7. For the external instance, grant REPLICATION CLIENT and REPLICATION SLAVE privileges to
your replication user. For example, to grant the REPLICATION CLIENT and REPLICATION SLAVE
privileges on all databases for the 'repl_user' user for your domain, issue the following command.

GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com';

8. Make the Amazon RDS DB instance the replica. To do so, first connect to the Amazon RDS DB
instance as the master user. Then identify the external MySQL or MariaDB database as the source
instance by using the mysql.rds_set_external_master (p. 1769) command. Use the master log file
name and master log position that you determined in step 2. The following is an example.


CALL mysql.rds_set_external_master ('mymasterserver.mydomain.com', 3306, 'repl_user',
    'password', 'mysql-bin-changelog.000031', 107, 0);

Note
On RDS for MySQL, you can choose to use delayed replication by running the
mysql.rds_set_external_master_with_delay (p. 1774) stored procedure instead.
On RDS for MySQL, one reason to use delayed replication is to turn on disaster
recovery with the mysql.rds_start_replication_until (p. 1780) stored procedure.
Currently, RDS for MariaDB supports delayed replication but doesn't support the
mysql.rds_start_replication_until procedure.
9. On the Amazon RDS DB instance, issue the mysql.rds_start_replication (p. 1780) command to start
replication.

CALL mysql.rds_start_replication;


Exporting data from a MySQL DB instance by using replication
To export data from an RDS for MySQL DB instance to a MySQL instance running external to Amazon
RDS, you can use replication. In this scenario, the MySQL DB instance is the source MySQL DB instance,
and the MySQL instance running external to Amazon RDS is the external MySQL database.

The external MySQL database can run either on-premises in your data center, or on an Amazon EC2
instance. The external MySQL database must run the same version as the source MySQL DB instance, or a
later version.

Replication to an external MySQL database is only supported during the time it takes to export a
database from the source MySQL DB instance. The replication should be terminated when the data has
been exported and applications can start accessing the external MySQL instance.

The following list shows the steps to take. Each step is discussed in more detail in later sections.

1. Prepare an external MySQL DB instance.


2. Prepare the source MySQL DB instance for replication.
3. Use the mysqldump utility to transfer the database from the source MySQL DB instance to the
external MySQL database.
4. Start replication to the external MySQL database.
5. After the export completes, stop replication.

Prepare an external MySQL database


Perform the following steps to prepare the external MySQL database.

To prepare the external MySQL database

1. Install the external MySQL database.


2. Connect to the external MySQL database as the master user. Then create the users required to
support the administrators, applications, and services that access the database.
3. Follow the directions in the MySQL documentation to prepare the external MySQL database as a
replica. For more information, see the MySQL documentation.
4. Configure an egress rule for the external MySQL database to operate as a read replica during the
export. The egress rule allows the external MySQL database to connect to the source MySQL DB
instance during replication. Specify an egress rule that allows Transmission Control Protocol (TCP)
connections to the port and IP address of the source MySQL DB instance.

Specify the appropriate egress rules for your environment:

• If the external MySQL database is running in an Amazon EC2 instance in a virtual private cloud
(VPC) based on the Amazon VPC service, specify the egress rules in a VPC security group. For more
information, see Controlling access with security groups (p. 2680).
• If the external MySQL database is installed on-premises, specify the egress rules in a firewall.
5. If the external MySQL database is running in a VPC, configure rules for the VPC access control list
(ACL) rules in addition to the security group egress rule:

• Configure an ACL ingress rule allowing TCP traffic to ports 1024–65535 from the IP address of the
source MySQL DB instance.
• Configure an ACL egress rule allowing outbound TCP traffic to the port and IP address of the
source MySQL DB instance.


For more information about Amazon VPC network ACLs, see Network ACLs in Amazon VPC User
Guide.
6. (Optional) Set the max_allowed_packet parameter to the maximum size to avoid replication
errors. We recommend this setting.

Prepare the source MySQL DB instance


Perform the following steps to prepare the source MySQL DB instance as the replication source.

To prepare the source MySQL DB instance

1. Ensure that your client computer has enough disk space available to save the binary logs while
setting up replication.
2. Connect to the source MySQL DB instance, and create a replication account by following the
directions in Creating a user for replication in the MySQL documentation.
3. Configure ingress rules on the system running the source MySQL DB instance to allow the external
MySQL database to connect during replication. Specify an ingress rule that allows TCP connections
to the port used by the source MySQL DB instance from the IP address of the external MySQL
database.
4. Specify the ingress rules:

• If the source MySQL DB instance is running in a VPC, specify the ingress rules in a VPC security
group. For more information, see Controlling access with security groups (p. 2680).
5. If source MySQL DB instance is running in a VPC, configure VPC ACL rules in addition to the security
group ingress rule:

• Configure an ACL ingress rule to allow TCP connections to the port used by the Amazon RDS
instance from the IP address of the external MySQL database.
• Configure an ACL egress rule to allow TCP connections from ports 1024–65535 to the IP address
of the external MySQL database.

For more information about Amazon VPC network ACLs, see Network ACLs in the Amazon VPC User
Guide.
6. Ensure that the backup retention period is set long enough that no binary logs are purged during
the export. If any of the logs are purged before the export has completed, you must restart
replication from the beginning. For more information about setting the backup retention period, see
Working with backups (p. 591).
7. Use the mysql.rds_set_configuration stored procedure to set the binary log retention
period long enough that the binary logs aren't purged during the export. A sketch of this call
appears after these steps. For more information, see Accessing MySQL binary logs (p. 922).
8. Create an Amazon RDS read replica from the source MySQL DB instance to further ensure that the
binary logs of the source MySQL DB instance are not purged. For more information, see Creating a
read replica (p. 445).
9. After the Amazon RDS read replica has been created, call the mysql.rds_stop_replication
stored procedure to stop the replication process. The source MySQL DB instance no longer purges its
binary log files, so they are available for the replication process.
10. (Optional) Set both the max_allowed_packet parameter and the slave_max_allowed_packet
parameter to the maximum size to avoid replication errors. The maximum size for both parameters
is 1 GB. We recommend this setting for both parameters. For information about setting parameters,
see Modifying parameters in a DB parameter group (p. 352).
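
The following is a sketch of the call from step 7, run in a MySQL client on the source MySQL DB
instance. The 144-hour value is only an example; choose a retention period long enough to cover
your export.

call mysql.rds_set_configuration('binlog retention hours', 144);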


Copy the database


Perform the following steps to copy the database.

To copy the database

1. Connect to the RDS read replica of the source MySQL DB instance, and run the MySQL SHOW
REPLICA STATUS\G statement. Note the values for the following:

• Master_Host
• Master_Port
• Master_Log_File
• Exec_Master_Log_Pos

Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA
STATUS. If you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.
2. Use the mysqldump utility to create a snapshot, which copies the data from Amazon RDS to your
local client computer. Ensure that your client computer has enough space to hold the mysqldump
files from the databases to be replicated. This process can take several hours for very large
databases. Follow the directions in Creating a data snapshot using mysqldump in the MySQL
documentation.

The following example runs mysqldump on a client and writes the dump to a file.

For Linux, macOS, or Unix:

mysqldump -h source_MySQL_DB_instance_endpoint \
-u user \
-ppassword \
--port=3306 \
--single-transaction \
--routines \
--triggers \
--databases database database2 > path/rds-dump.sql

For Windows:

mysqldump -h source_MySQL_DB_instance_endpoint ^
-u user ^
-ppassword ^
--port=3306 ^
--single-transaction ^
--routines ^
--triggers ^
--databases database database2 > path\rds-dump.sql

You can load the backup file into the external MySQL database. For more information, see
Reloading SQL-Format Backups in the MySQL documentation. You can run another utility to load
the data into the external MySQL database.

Complete the export


Perform the following steps to complete the export.


To complete the export

1. Use the MySQL CHANGE MASTER statement to configure the external MySQL database. Specify the
ID and password of the user granted REPLICATION SLAVE permissions. Specify the Master_Host,
Master_Port, Relay_Master_Log_File, and Exec_Master_Log_Pos values that you got from
the MySQL SHOW REPLICA STATUS\G statement that you ran on the RDS read replica. For more
information, see the MySQL documentation.
Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA
STATUS. If you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.
2. Use the MySQL START REPLICA command to initiate replication from the source MySQL DB
instance to the external MySQL database.

Doing this starts replication from the source MySQL DB instance and exports all source changes that
have occurred after you stopped replication from the Amazon RDS read replica.
Note
Previous versions of MySQL used START SLAVE instead of START REPLICA. If you are
using a MySQL version before 8.0.23, then use START SLAVE.
3. Run the MySQL SHOW REPLICA STATUS\G command on the external MySQL database to verify
that it is operating as a read replica. For more information about interpreting the results, see the
MySQL documentation.
4. After replication on the external MySQL database has caught up with the source MySQL DB instance,
use the MySQL STOP REPLICA command to stop replication from the source MySQL DB instance.
Note
Previous versions of MySQL used STOP SLAVE instead of STOP REPLICA. If you are using a
MySQL version before 8.0.23, then use STOP SLAVE.
5. On the Amazon RDS read replica, call the mysql.rds_start_replication stored procedure.
Doing this allows Amazon RDS to start purging the binary log files from the source MySQL DB
instance.


Options for MySQL DB instances


Following, you can find a description of options, or additional features, that are available for Amazon
RDS instances running the MySQL DB engine. To enable these options, you can add them to a custom
option group, and then associate the option group with your DB instance. For more information about
working with option groups, see Working with option groups (p. 331).

Amazon RDS supports the following options for MySQL:

Option: MariaDB Audit Plugin support for MySQL (p. 1733)
Option ID: MARIADB_AUDIT_PLUGIN
Engine versions: MySQL 8.0.28 and higher 8.0 versions; all MySQL 5.7 versions

Option: MySQL memcached support (p. 1738)
Option ID: MEMCACHED
Engine versions: All MySQL 5.7 and 8.0 versions


MariaDB Audit Plugin support for MySQL


Amazon RDS offers an audit plugin for MySQL database instances based on the open source MariaDB
Audit Plugin. For more information, see the Audit Plugin for MySQL Server GitHub repository.
Note
The audit plugin for MySQL is based on the MariaDB Audit Plugin. Throughout this article, we
refer to it as MariaDB Audit Plugin.

The MariaDB Audit Plugin records database activity, including users logging on to the database and
queries run against the database. The record of database activity is stored in a log file.
Note
Currently, the MariaDB Audit Plugin is only supported for the following RDS for MySQL versions:

• MySQL 8.0.28 and higher 8.0 versions


• All MySQL 5.7 versions

Audit Plugin option settings


Amazon RDS supports the following settings for the MariaDB Audit Plugin option.

Option setting: SERVER_AUDIT_FILE_PATH
Valid values: /rdsdbdata/log/audit/
Default value: /rdsdbdata/log/audit/
Description: The location of the log file. The log file contains the record of the activity
  specified in SERVER_AUDIT_EVENTS. For more information, see Viewing and listing database log
  files (p. 895) and MySQL database log files (p. 915).

Option setting: SERVER_AUDIT_FILE_ROTATE_SIZE
Valid values: 1–1000000000
Default value: 1000000
Description: The size in bytes that, when reached, causes the file to rotate. For more
  information, see Overview of RDS for MySQL database logs (p. 915).

Option setting: SERVER_AUDIT_FILE_ROTATIONS
Valid values: 0–100
Default value: 9
Description: The number of log rotations to save when server_audit_output_type=file. If set to 0,
  then the log file never rotates. For more information, see Overview of RDS for MySQL database
  logs (p. 915) and Downloading a database log file (p. 896).

Option setting: SERVER_AUDIT_EVENTS
Valid values: CONNECT, QUERY, QUERY_DDL, QUERY_DML, QUERY_DML_NO_SELECT, QUERY_DCL
Default value: CONNECT, QUERY
Description: The types of activity to record in the log. Installing the MariaDB Audit Plugin is
  itself logged.

  • CONNECT: Log successful and unsuccessful connections to the database, and disconnections from
    the database.
  • QUERY: Log the text of all queries run against the database.
  • QUERY_DDL: Similar to the QUERY event, but returns only data definition language (DDL) queries
    (CREATE, ALTER, and so on).
  • QUERY_DML: Similar to the QUERY event, but returns only data manipulation language (DML)
    queries (INSERT, UPDATE, and so on, and also SELECT).
  • QUERY_DML_NO_SELECT: Similar to the QUERY_DML event, but doesn't log SELECT queries.

    The QUERY_DML_NO_SELECT setting is supported only for RDS for MySQL 5.7.34 and higher 5.7
    versions, and 8.0.25 and higher 8.0 versions.
  • QUERY_DCL: Similar to the QUERY event, but returns only data control language (DCL) queries
    (GRANT, REVOKE, and so on).

  For MySQL, TABLE is not supported.

Option setting: SERVER_AUDIT_INCL_USERS
Valid values: Multiple comma-separated values
Default value: None
Description: Include only activity from the specified users. By default, activity is recorded for
  all users. SERVER_AUDIT_INCL_USERS and SERVER_AUDIT_EXCL_USERS are mutually exclusive. If you
  add values to SERVER_AUDIT_INCL_USERS, make sure no values are added to SERVER_AUDIT_EXCL_USERS.

Option setting: SERVER_AUDIT_EXCL_USERS
Valid values: Multiple comma-separated values
Default value: None
Description: Exclude activity from the specified users. By default, activity is recorded for all
  users. SERVER_AUDIT_INCL_USERS and SERVER_AUDIT_EXCL_USERS are mutually exclusive. If you add
  values to SERVER_AUDIT_EXCL_USERS, make sure no values are added to SERVER_AUDIT_INCL_USERS.

  The rdsadmin user queries the database every second to check the health of the database.
  Depending on your other settings, this activity can possibly cause the size of your log file to
  grow very large, very quickly. If you don't need to record this activity, add the rdsadmin user
  to the SERVER_AUDIT_EXCL_USERS list.

  Note
  CONNECT activity is always recorded for all users, even if the user is specified for this
  option setting.

Option setting: SERVER_AUDIT_LOGGING
Valid values: ON
Default value: ON
Description: Logging is active. The only valid value is ON. Amazon RDS does not support
  deactivating logging. If you want to deactivate logging, remove the MariaDB Audit Plugin. For
  more information, see Removing the MariaDB Audit Plugin (p. 1736).

Option setting: SERVER_AUDIT_QUERY_LOG_LIMIT
Valid values: 0–2147483647
Default value: 1024
Description: The limit on the length of the query string in a record.


Adding the MariaDB Audit Plugin


The general process for adding the MariaDB Audit Plugin to a DB instance is the following:

• Create a new option group, or copy or modify an existing option group


• Add the option to the option group
• Associate the option group with the DB instance

After you add the MariaDB Audit Plugin, you don't need to restart your DB instance. As soon as the
option group is active, auditing begins immediately.
Important
Adding the MariaDB Audit Plugin to a DB instance might cause an outage. We recommend
adding the MariaDB Audit Plugin during a maintenance window or during a time of low
database workload.

To add the MariaDB Audit Plugin

1. Determine the option group you want to use. You can create a new option group or use an existing
option group. If you want to use an existing option group, skip to the next step. Otherwise, create a
custom DB option group. Choose mysql for Engine, and choose 5.7 or 8.0 for Major engine version.
For more information, see Creating an option group (p. 332).
2. Add the MARIADB_AUDIT_PLUGIN option to the option group, and configure the option settings.
   For more information about adding options, see Adding an option to an option group (p. 335). For
   more information about each setting, see Audit Plugin option settings (p. 1733). A CLI sketch of
   this step follows the procedure.
3. Apply the option group to a new or existing DB instance.

• For a new DB instance, you apply the option group when you launch the instance. For more
information, see Creating an Amazon RDS DB instance (p. 300).
• For an existing DB instance, you apply the option group by modifying the instance and attaching
the new option group. For more information, see Modifying an Amazon RDS DB instance (p. 401).
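
The AWS CLI equivalent of step 2 might look like the following sketch for an existing option group.
The option group name, the event list, and the excluded user are placeholders for illustration only;
choose your own values based on the Audit Plugin option settings described earlier in this topic.

aws rds add-option-to-option-group \
    --option-group-name my-mysql-audit-og \
    --options '[{"OptionName":"MARIADB_AUDIT_PLUGIN","OptionSettings":[{"Name":"SERVER_AUDIT_EVENTS","Value":"CONNECT,QUERY_DDL,QUERY_DCL"},{"Name":"SERVER_AUDIT_EXCL_USERS","Value":"rdsadmin"}]}]' \
    --apply-immediately

Omitting --apply-immediately applies the change during the next maintenance window instead.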

Audit log format


Log files are represented as comma-separated value (CSV) files in UTF-8 format.
Tip
Log file entries are not in sequential order. To order the entries, use the timestamp value. To
see the latest events, you might have to review all log files. For more flexibility in sorting and
searching the log data, turn on the setting to upload the audit logs to CloudWatch and view
them using the CloudWatch interface.
To view audit data with more types of fields and with output in JSON format, you can also use
the Database Activity Streams feature. For more information, see Monitoring Amazon RDS with
Database Activity Streams (p. 944).
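
For example, one way to turn on that upload from the AWS CLI is a modify-db-instance call similar
to the following sketch. The instance identifier is a placeholder, and the audit log type is only
produced while the MariaDB Audit Plugin option is active.

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["audit"]}'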

The audit log files include the following comma-delimited information in rows, in the specified order:

Field Description

timestamp The YYYYMMDD followed by the HH:MI:SS (24-hour clock) for the logged event.

serverhost The name of the instance that the event is logged for.

username The connected user name of the user.


host The host that the user connected from.

connectionid The connection ID number for the logged operation.

queryid The query ID number, which can be used for finding the relational table events and
related queries. For TABLE events, multiple lines are added.

operation The recorded action type. Possible values are: CONNECT, QUERY, READ, WRITE,
CREATE, ALTER, RENAME, and DROP.

database The active database, as set by the USE command.

object For QUERY events, this value indicates the query that the database performed. For
TABLE events, it indicates the table name.

retcode The return code of the logged operation.

connection_type The security state of the connection to the server. Possible values are:

• 0 – Undefined
• 1 – TCP/IP
• 2 – Socket
• 3 – Named pipe
• 4 – SSL/TLS
• 5 – Shared memory

This field is included only for RDS for MySQL version 5.7.34 and higher 5.7 versions,
and all 8.0 versions.

Viewing and downloading the MariaDB Audit Plugin log


After you enable the MariaDB Audit Plugin, you access the results in the log files the same way you
access any other text-based log files. The audit log files are located at /rdsdbdata/log/audit/. For
information about viewing the log file in the console, see Viewing and listing database log files (p. 895).
For information about downloading the log file, see Downloading a database log file (p. 896).
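
You can also list and fetch the same files with the AWS CLI, as in the following sketch. The
instance identifier is a placeholder, and the exact log file name to pass to the second command is
whatever the first command reports (typically a name under audit/). By default,
download-db-log-file-portion returns only the most recent portion of the file.

aws rds describe-db-log-files \
    --db-instance-identifier mydbinstance \
    --filename-contains audit

aws rds download-db-log-file-portion \
    --db-instance-identifier mydbinstance \
    --log-file-name audit/server_audit.log \
    --output text > server_audit.log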

Modifying MariaDB Audit Plugin settings


After you enable the MariaDB Audit Plugin, you can modify the settings. For more information about
how to modify option settings, see Modifying an option setting (p. 340). For more information about
each setting, see Audit Plugin option settings (p. 1733).

Removing the MariaDB Audit Plugin


Amazon RDS doesn't support turning off logging in the MariaDB Audit Plugin. However, you can remove
the plugin from a DB instance. When you remove the MariaDB Audit Plugin, the DB instance is restarted
automatically to stop auditing.

To remove the MariaDB Audit Plugin from a DB instance, do one of the following:

• Remove the MariaDB Audit Plugin option from the option group it belongs to. This change affects all
  DB instances that use the option group. For more information, see Removing an option from an option
  group (p. 343). A CLI sketch of this approach follows this list.


• Modify the DB instance and specify a different option group that doesn't include the plugin. This
change affects a single DB instance. You can specify the default (empty) option group, or a different
custom option group. For more information, see Modifying an Amazon RDS DB instance (p. 401).
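
As a sketch of the first approach from the AWS CLI, removing the option might look like the
following; the option group name is a placeholder.

aws rds remove-option-from-option-group \
    --option-group-name my-mysql-audit-og \
    --options MARIADB_AUDIT_PLUGIN \
    --apply-immediately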


MySQL memcached support


Amazon RDS supports using the memcached interface to InnoDB tables that was introduced in MySQL
5.6. The memcached API enables applications to use InnoDB tables in a manner similar to NoSQL key-
value data stores.

The memcached interface is a simple, key-based cache. Applications use memcached to insert,
manipulate, and retrieve key-value data pairs from the cache. MySQL 5.6 introduced a plugin that
implements a daemon service that exposes data from InnoDB tables through the memcached protocol.
For more information about the MySQL memcached plugin, see InnoDB integration with memcached.

To enable memcached support for an RDS for MySQL DB instance

1. Determine the security group to use for controlling access to the memcached interface. If the set
   of applications already using the SQL interface is the same set that will access the memcached
   interface, you can use the existing VPC security group used by the SQL interface. If a different set of
   applications will access the memcached interface, define a new VPC or DB security group. For more
   information about managing security groups, see Controlling access with security groups (p. 2680).
2. Create a custom DB option group, selecting MySQL as the engine type and version. For more
information about creating an option group, see Creating an option group (p. 332).
3. Add the MEMCACHED option to the option group. Specify the port that the memcached interface will
use, and the security group to use in controlling access to the interface. For more information about
adding options, see Adding an option to an option group (p. 335).
4. Modify the option settings to configure the memcached parameters, if necessary. For more
information about how to modify option settings, see Modifying an option setting (p. 340).
5. Apply the option group to an instance. Amazon RDS enables memcached support for that instance
when the option group is applied:

• You enable memcached support for a new instance by specifying the custom option group when
you launch the instance. For more information about launching a MySQL instance, see Creating an
Amazon RDS DB instance (p. 300).
• You enable memcached support for an existing instance by specifying the custom option group
when you modify the instance. For more information about modifying a DB instance, see
Modifying an Amazon RDS DB instance (p. 401).
6. Specify which columns in your MySQL tables can be accessed through the memcached interface.
The memcached plug-in creates a catalog table named containers in a dedicated database named
innodb_memcache. You insert a row into the containers table to map an InnoDB table for access
through memcached. You specify a column in the InnoDB table that is used to store the memcached
key values, and one or more columns that are used to store the data values associated with the
key. You also specify a name that a memcached application uses to refer to that set of columns. For
   details on inserting rows in the containers table, see InnoDB memcached plugin internals. For an
   example of mapping an InnoDB table and accessing it through memcached, see Writing applications
   for the InnoDB memcached plugin. A minimal example INSERT also appears after this procedure.
7. If the applications accessing the memcached interface are on different computers or EC2 instances
than the applications using the SQL interface, add the connection information for those computers
to the VPC security group associated with the MySQL instance. For more information about
managing security groups, see Controlling access with security groups (p. 2680).
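
As a sketch of the mapping described in step 6, the following INSERT registers a hypothetical InnoDB
table mydb.mytable, keyed on its id column, under the container name mycontainer. All of the table,
column, and container names here are placeholders, and the mapped table must already contain the
key, value, flags, cas, and expiration columns plus a unique index on the key column, as described
in the MySQL documentation referenced in step 6.

-- Run this against the DB instance after the MEMCACHED option is active.
INSERT INTO innodb_memcache.containers
    (name, db_schema, db_table, key_columns, value_columns,
     flags, cas_column, expire_time_column, unique_idx_name_on_key)
VALUES
    ('mycontainer', 'mydb', 'mytable', 'id', 'payload',
     'flags', 'cas_col', 'expire_col', 'PRIMARY');

A memcached client can then address that mapping by referencing the container name with the @@
prefix, as described in the MySQL memcached documentation.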

You turn off the memcached support for an instance by modifying the instance and specifying the
default option group for your MySQL version. For more information about modifying a DB instance, see
Modifying an Amazon RDS DB instance (p. 401).


MySQL memcached security considerations


The memcached protocol does not support user authentication. For more information about MySQL
memcached security considerations, see Security Considerations for the InnoDB memcached Plugin in
the MySQL documentation.

You can take the following actions to help increase the security of the memcached interface:

• Specify a different port than the default of 11211 when adding the MEMCACHED option to the option
group.
• Ensure that you associate the memcached interface with a VPC security group that limits access to
known, trusted client addresses and EC2 instances. For more information about managing security
groups, see Controlling access with security groups (p. 2680).
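
For example, both measures can be applied when you add the option. In the following AWS CLI sketch,
the option group name, the port, and the security group ID are placeholders:

aws rds add-option-to-option-group \
    --option-group-name my-memcached-og \
    --options '[{"OptionName":"MEMCACHED","Port":11212,"VpcSecurityGroupMemberships":["sg-0123456789abcdef0"]}]' \
    --apply-immediately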

MySQL memcached connection information


To access the memcached interface, an application must specify both the DNS name of the Amazon RDS
instance and the memcached port number. For example, if an instance has a DNS name of my-cache-
instance.cg034hpkmmjt.region.rds.amazonaws.com and the memcached interface is using port
11212, the connection information specified in PHP would be:

<?php

$cache = new Memcache;


$cache->connect('my-cache-instance.cg034hpkmmjt.region.rds.amazonaws.com',11212);
?>

To find the DNS name and memcached port of a MySQL DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the top right corner of the AWS Management Console, select the region that contains the DB
instance.
3. In the navigation pane, choose Databases.
4. Choose the MySQL DB instance name to display its details.
5. In the Connect section, note the value of the Endpoint field. The DNS name is the same as the
endpoint. Also, note that the port in the Connect section is not used to access the memcached
interface.
6. In the Details section, note the name listed in the Option Group field.
7. In the navigation pane, choose Option groups.
8. Choose the name of the option group used by the MySQL DB instance to show the option group
details. In the Options section, note the value of the Port setting for the MEMCACHED option.

MySQL memcached option settings


Amazon RDS exposes the MySQL memcached parameters as option settings in the Amazon RDS
MEMCACHED option.

MySQL memcached parameters


• DAEMON_MEMCACHED_R_BATCH_SIZE – an integer that specifies how many memcached read
operations (get) to perform before doing a COMMIT to start a new transaction. The allowed values are
1 to 4294967295; the default is 1. The option does not take effect until the instance is restarted.


• DAEMON_MEMCACHED_W_BATCH_SIZE – an integer that specifies how many memcached write
operations, such as add, set, or incr, to perform before doing a COMMIT to start a new transaction.
The allowed values are 1 to 4294967295; the default is 1. The option does not take effect until the
instance is restarted.
• INNODB_API_BK_COMMIT_INTERVAL – an integer that specifies how often to auto-commit idle
connections that use the InnoDB memcached interface. The allowed values are 1 to 1073741824; the
default is 5. The option takes effect immediately, without requiring that you restart the instance.
• INNODB_API_DISABLE_ROWLOCK – a Boolean that disables (1 (true)) or enables (0 (false)) the use of
row locks when using the InnoDB memcached interface. The default is 0 (false). The option does not
take effect until the instance is restarted.
• INNODB_API_ENABLE_MDL – a Boolean that when set to 0 (false) locks the table used by the InnoDB
memcached plugin, so that it cannot be dropped or altered by DDL through the SQL interface. The
default is 0 (false). The option does not take effect until the instance is restarted.
• INNODB_API_TRX_LEVEL – an integer that specifies the transaction isolation level for queries
processed by the memcached interface. The allowed values are 0 to 3. The default is 0. The option
does not take effect until the instance is restarted.

Amazon RDS configures these MySQL memcached parameters, and they cannot be
modified: DAEMON_MEMCACHED_LIB_NAME, DAEMON_MEMCACHED_LIB_PATH, and
INNODB_API_ENABLE_BINLOG. The parameters that MySQL administrators set by using
daemon_memcached_options are available as individual MEMCACHED option settings in Amazon RDS.

MySQL daemon_memcached_options parameters


• BINDING_PROTOCOL – a string that specifies the binding protocol to use. The allowed values are auto,
ascii, or binary. The default is auto, which means the server automatically negotiates the protocol
with the client. The option does not take effect until the instance is restarted.
• BACKLOG_QUEUE_LIMIT – an integer that specifies how many network connections can be waiting
to be processed by memcached. Increasing this limit may reduce errors received by a client that is not
able to connect to the memcached instance, but does not improve the performance of the server. The
allowed values are 1 to 2048; the default is 1024. The option does not take effect until the instance is
restarted.
• CAS_DISABLED – a Boolean that enables (1 (true)) or disables (0 (false)) the use of compare and swap
(CAS), which reduces the per-item size by 8 bytes. The default is 0 (false). The option does not take
effect until the instance is restarted.
• CHUNK_SIZE – an integer that specifies the minimum chunk size, in bytes, to allocate for the smallest
item's key, value, and flags. The allowed values are 1 to 48. The default is 48 and you can significantly
improve memory efficiency with a lower value. The option does not take effect until the instance is
restarted.
• CHUNK_SIZE_GROWTH_FACTOR – a float that controls the size of new chunks. The size of a new chunk
is the size of the previous chunk times CHUNK_SIZE_GROWTH_FACTOR. The allowed values are 1 to 2;
the default is 1.25. The option does not take effect until the instance is restarted.
• ERROR_ON_MEMORY_EXHAUSTED – a Boolean that when set to 1 (true) specifies that memcached will
return an error rather than evicting items when there is no more memory to store items. If set to 0
(false), memcached will evict items if there is no more memory. The default is 0 (false). The option
does not take effect until the instance is restarted.
• MAX_SIMULTANEOUS_CONNECTIONS – an integer that specifies the maximum number of concurrent
connections. Setting this value to anything under 10 prevents MySQL from starting. The allowed
values are 10 to 1024; the default is 1024. The option does not take effect until the instance is
restarted.
• VERBOSITY – a string that specifies the level of information logged in the MySQL error log by the
memcached service. The default is v. The option does not take effect until the instance is restarted. The
allowed values are:


• v – Logs errors and warnings while running the main event loop.
• vv – In addition to the information logged by v, also logs each client command and the response.
• vvv – In addition to the information logged by vv, also logs internal state transitions.

Amazon RDS configures these MySQL DAEMON_MEMCACHED_OPTIONS parameters, and they cannot be
modified: DAEMON_PROCESS, LARGE_MEMORY_PAGES, MAXIMUM_CORE_FILE_LIMIT, MAX_ITEM_SIZE,
LOCK_DOWN_PAGE_MEMORY, MASK, IDFILE, REQUESTS_PER_EVENT, SOCKET, and USER.


Parameters for MySQL


By default, a MySQL DB instance uses a DB parameter group that is specific to a MySQL database. This
parameter group contains parameters for the MySQL database engine. For information about working
with parameter groups and setting parameters, see Working with parameter groups (p. 347).

RDS for MySQL parameters are set to the default values of the storage engine that you have selected.
For more information about MySQL parameters, see the MySQL documentation. For more information
about MySQL storage engines, see Supported storage engines for RDS for MySQL (p. 1624).

You can view the parameters available for a specific RDS for MySQL version using the RDS console or the
AWS CLI. For information about viewing the parameters in a MySQL parameter group in the RDS console,
see Viewing parameter values for a DB parameter group (p. 359).

Using the AWS CLI, you can view the parameters for an RDS for MySQL version by running the
describe-engine-default-parameters command. Specify one of the following values for the --
db-parameter-group-family option:

• mysql8.0
• mysql5.7

For example, to view the parameters for RDS for MySQL version 8.0, run the following command.

aws rds describe-engine-default-parameters --db-parameter-group-family mysql8.0

Your output looks similar to the following.

{
"EngineDefaults": {
"Parameters": [
{
"ParameterName": "activate_all_roles_on_login",
"ParameterValue": "0",
"Description": "Automatically set all granted roles as active after the
user has authenticated successfully.",
"Source": "engine-default",
"ApplyType": "dynamic",
"DataType": "boolean",
"AllowedValues": "0,1",
"IsModifiable": true
},
{
"ParameterName": "allow-suspicious-udfs",
"Description": "Controls whether user-defined functions that have only an
xxx symbol for the main function can be loaded",
"Source": "engine-default",
"ApplyType": "static",
"DataType": "boolean",
"AllowedValues": "0,1",
"IsModifiable": false
},
{
"ParameterName": "auto_generate_certs",
"Description": "Controls whether the server autogenerates SSL key and
certificate files in the data directory, if they do not already exist.",
"Source": "engine-default",
"ApplyType": "static",
"DataType": "boolean",
"AllowedValues": "0,1",

1742
Amazon Relational Database Service User Guide
Parameters for MySQL

"IsModifiable": false
},
...

To list only the modifiable parameters for RDS for MySQL version 8.0, run the following command.

For Linux, macOS, or Unix:

aws rds describe-engine-default-parameters --db-parameter-group-family mysql8.0 \
    --query 'EngineDefaults.Parameters[?IsModifiable==`true`]'

For Windows:

aws rds describe-engine-default-parameters --db-parameter-group-family mysql8.0 ^
    --query "EngineDefaults.Parameters[?IsModifiable==`true`]"


Common DBA tasks for MySQL DB instances


Following, you can find descriptions of the Amazon RDS–specific implementations of some common DBA
tasks for DB instances running the MySQL database engine. To deliver a managed service experience,
Amazon RDS doesn't provide shell access to DB instances. Also, it restricts access to certain system
procedures and tables that require advanced privileges.

For information about working with MySQL log files on Amazon RDS, see MySQL database log
files (p. 915).

Topics
• Ending a session or query (p. 1744)
• Skipping the current replication error (p. 1744)
• Working with InnoDB tablespaces to improve crash recovery times (p. 1745)
• Managing the Global Status History (p. 1747)

Ending a session or query


You can end user sessions or queries on DB instances by using the rds_kill and rds_kill_query
commands. First connect to your MySQL DB instance, then issue the appropriate command as shown
following. For more information, see Connecting to a DB instance running the MySQL database
engine (p. 1630).

CALL mysql.rds_kill(thread-ID)
CALL mysql.rds_kill_query(thread-ID)

For example, to end the session that is running on thread 99, you would type the following:

CALL mysql.rds_kill(99);

To end the query that is running on thread 99, you would type the following:

CALL mysql.rds_kill_query(99);
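
Both procedures take the thread ID shown in the Id column of SHOW PROCESSLIST. A minimal sketch of
the full sequence, assuming the session you want to end turns out to be thread 99, is:

SHOW PROCESSLIST;          -- note the Id value of the session or query to end
CALL mysql.rds_kill(99);   -- end that session (or use mysql.rds_kill_query to end only the query)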

Skipping the current replication error


You can skip an error on your read replica if the error is causing your read replica to stop responding and
the error doesn't affect the integrity of your data.
Note
First verify that the error in question can be safely skipped. In a MySQL utility, connect to the
read replica and run the following MySQL command.

SHOW REPLICA STATUS\G

For information about the values returned, see the MySQL documentation.
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA
STATUS. If you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.

You can skip an error on your read replica in the following ways.

Topics


• Calling the mysql.rds_skip_repl_error procedure (p. 1745)


• Setting the slave_skip_errors parameter (p. 1745)

Calling the mysql.rds_skip_repl_error procedure


Amazon RDS provides a stored procedure that you can call to skip an error on your read replicas. First
connect to your read replica, then issue the appropriate commands as shown following. For more
information, see Connecting to a DB instance running the MySQL database engine (p. 1630).

To skip the error, issue the following command.

CALL mysql.rds_skip_repl_error;

This command has no effect if you run it on the source DB instance, or on a read replica that hasn't
encountered a replication error.

For more information, such as the versions of MySQL that support mysql.rds_skip_repl_error, see
mysql.rds_skip_repl_error (p. 1779).
Important
If you attempt to call mysql.rds_skip_repl_error and encounter the following error:
ERROR 1305 (42000): PROCEDURE mysql.rds_skip_repl_error does not exist,
then upgrade your MySQL DB instance to the latest minor version or one of the minimum minor
versions listed in mysql.rds_skip_repl_error (p. 1779).

Setting the slave_skip_errors parameter


To skip one or more errors, you can set the slave_skip_errors static parameter on the read replica.
You can set this parameter to skip one or more specific replication error codes. Currently, you can set this
parameter only for RDS for MySQL 5.7 DB instances. After you change the setting for this parameter,
make sure to reboot your DB instance for the new setting to take effect. For information about setting
this parameter, see the MySQL documentation.

We recommend setting this parameter in a separate DB parameter group. You can associate this DB
parameter group only with the read replicas that need to skip errors. Following this best practice reduces
the potential impact on other DB instances and read replicas.
Important
Setting a nondefault value for this parameter can lead to replication inconsistency. Only set this
parameter to a nondefault value if you have exhausted other options to resolve the problem
and you are sure of the potential impact on your read replica's data.
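
If you do choose this approach, a hedged AWS CLI sketch of the change looks like the following. The
parameter group name and the error code (1062, a duplicate-key error) are placeholders, and because
the parameter is static, the replica must be rebooted afterward.

aws rds modify-db-parameter-group \
    --db-parameter-group-name my-replica-params \
    --parameters "ParameterName=slave_skip_errors,ParameterValue=1062,ApplyMethod=pending-reboot"

aws rds reboot-db-instance --db-instance-identifier my-read-replica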

Working with InnoDB tablespaces to improve crash recovery times
Every table in MySQL consists of a table definition, data, and indexes. The MySQL storage engine InnoDB
stores table data and indexes in a tablespace. InnoDB creates a global shared tablespace that contains a
data dictionary and other relevant metadata, and it can contain table data and indexes. InnoDB can also
create separate tablespaces for each table and partition. These separate tablespaces are stored in files
with a .ibd extension and the header of each tablespace contains a number that uniquely identifies it.

Amazon RDS provides a parameter in a MySQL parameter group called innodb_file_per_table.
This parameter controls whether InnoDB adds new table data and indexes to the shared tablespace
(by setting the parameter value to 0) or to individual tablespaces (by setting the parameter value to 1).


Amazon RDS sets the default value for innodb_file_per_table parameter to 1, which allows you to
drop individual InnoDB tables and reclaim storage used by those tables for the DB instance. In most use
cases, setting the innodb_file_per_table parameter to 1 is the recommended setting.

You should set the innodb_file_per_table parameter to 0 when you have a large number of tables,
such as over 1000 tables when you use standard (magnetic) or general purpose SSD storage or over
10,000 tables when you use Provisioned IOPS storage. When you set this parameter to 0, individual
tablespaces are not created and this can improve the time it takes for database crash recovery.

MySQL processes each metadata file, which includes tablespaces, during the crash recovery cycle.
The time it takes MySQL to process the metadata information in the shared tablespace is negligible
compared to the time it takes to process thousands of tablespace files when there are multiple
tablespaces. Because the tablespace number is stored within the header of each file, the aggregate time
to read all the tablespace files can take up to several hours. For example, a million InnoDB tablespaces
on standard storage can take from five to eight hours to process during a crash recovery cycle. In some
cases, InnoDB can determine that it needs additional cleanup after a crash recovery cycle so it will begin
another crash recovery cycle, which will extend the recovery time. Keep in mind that a crash recovery
cycle also entails rolling-back transactions, fixing broken pages, and other operations in addition to the
processing of tablespace information.

Since the innodb_file_per_table parameter resides in a parameter group, you can change the
parameter value by editing the parameter group used by your DB instance without having to reboot the
DB instance. After the setting is changed, for example, from 1 (create individual tables) to 0 (use shared
tablespace), new InnoDB tables will be added to the shared tablespace while existing tables continue to
have individual tablespaces. To move an InnoDB table to the shared tablespace, you must use the ALTER
TABLE command.

Migrating multiple tablespaces to the shared tablespace


You can move an InnoDB table's metadata from its own tablespace to the shared tablespace, which
will rebuild the table metadata according to the innodb_file_per_table parameter setting. First
connect to your MySQL DB instance, then issue the appropriate commands as shown following. For more
information, see Connecting to a DB instance running the MySQL database engine (p. 1630).

ALTER TABLE table_name ENGINE = InnoDB, ALGORITHM=COPY;

For example, the following query returns an ALTER TABLE statement for every InnoDB table that is not
in the shared tablespace.

For MySQL 5.7 DB instances:

SELECT CONCAT('ALTER TABLE `',
    REPLACE(LEFT(NAME, INSTR((NAME), '/') - 1), '`', '``'), '`.`',
    REPLACE(SUBSTR(NAME FROM INSTR(NAME, '/') + 1), '`', '``'),
    '` ENGINE=InnoDB, ALGORITHM=COPY;') AS Query
FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES
WHERE SPACE <> 0 AND LEFT(NAME, INSTR((NAME), '/') - 1) NOT IN ('mysql','');

For MySQL 8.0 DB instances:

SELECT CONCAT('ALTER TABLE `',
    REPLACE(LEFT(NAME, INSTR((NAME), '/') - 1), '`', '``'), '`.`',
    REPLACE(SUBSTR(NAME FROM INSTR(NAME, '/') + 1), '`', '``'),
    '` ENGINE=InnoDB, ALGORITHM=COPY;') AS Query
FROM INFORMATION_SCHEMA.INNODB_TABLES
WHERE SPACE <> 0 AND LEFT(NAME, INSTR((NAME), '/') - 1) NOT IN ('mysql','');


Rebuilding a MySQL table to move the table's metadata to the shared tablespace requires additional
storage space temporarily to rebuild the table, so the DB instance must have storage space available.
During rebuilding, the table is locked and inaccessible to queries. For small tables or tables not
frequently accessed, this might not be an issue. For large tables or tables frequently accessed in a heavily
concurrent environment, you can rebuild tables on a read replica.

You can create a read replica and migrate table metadata to the shared tablespace on the read replica.
While the ALTER TABLE statement blocks access on the read replica, the source DB instance is not
affected. The source DB instance will continue to generate its binary logs while the read replica lags
during the table rebuilding process. Because the rebuilding requires additional storage space and the
replay log file can become large, you should create a read replica with storage allocated that is larger
than the source DB instance.

To create a read replica and rebuild InnoDB tables to use the shared tablespace, take the following steps:

1. Make sure that backup retention is enabled on the source DB instance so that binary logging is
enabled.
2. Use the AWS Management Console or AWS CLI to create a read replica for the source DB instance.
Because the creation of a read replica involves many of the same processes as crash recovery, the
creation process can take some time if there is a large number of InnoDB tablespaces. Allocate more
storage space on the read replica than is currently used on the source DB instance.
3. When the read replica has been created, create a parameter group with the parameter settings
read_only = 0 and innodb_file_per_table = 0. Then associate the parameter group with the
read replica.
4. Issue the following SQL statement for all tables that you want migrated on the replica:

ALTER TABLE name ENGINE = InnoDB

5. When all of your ALTER TABLE statements have completed on the read replica, verify that the read
replica is connected to the source DB instance and that the two instances are in sync.
6. Use the console or CLI to promote the read replica to be a standalone DB instance. Make sure that the parameter
group used for the new standalone DB instance has the innodb_file_per_table parameter set
to 0. Change the name of the new standalone DB instance, and point any applications to the new
standalone DB instance.

Managing the Global Status History


Tip
To analyze database performance, you can also use Performance Insights on Amazon RDS. For
more information, see Monitoring DB load with Performance Insights on Amazon RDS (p. 720).

MySQL maintains many status variables that provide information about its operation. Their values can
help you detect locking or memory issues on a DB instance. The values of these status variables are
cumulative since the last time the DB instance was started. You can reset most status variables to 0 by using
the FLUSH STATUS command.

To allow for monitoring of these values over time, Amazon RDS provides a set of procedures that
will snapshot the values of these status variables over time and write them to a table, along with any
changes since the last snapshot. This infrastructure, called Global Status History (GoSH), is installed on
all MySQL DB instances starting with version 5.5.23. GoSH is disabled by default.

To enable GoSH, you first enable the event scheduler from a DB parameter group by setting the
parameter event_scheduler to ON. For MySQL DB instances running MySQL 5.7, also set the
parameter show_compatibility_56 to 1. For information about creating and modifying a DB
parameter group, see Working with parameter groups (p. 347). For information about the side effects of
enabling this parameter, see show_compatibility_56 in the MySQL 5.7 Reference Manual.


You can then use the procedures in the following table to enable and configure GoSH. First connect
to your MySQL DB instance, then issue the appropriate commands as shown following. For more
information, see Connecting to a DB instance running the MySQL database engine (p. 1630). For each
procedure, type the following:

CALL procedure-name;

Where procedure-name is one of the procedures in the table.

Procedure                                   Description

mysql.rds_enable_gsh_collector              Enables GoSH to take default snapshots at intervals
                                            specified by rds_set_gsh_collector.

mysql.rds_set_gsh_collector                 Specifies the interval, in minutes, between snapshots.
                                            Default value is 5.

mysql.rds_disable_gsh_collector             Disables snapshots.

mysql.rds_collect_global_status_history     Takes a snapshot on demand.

mysql.rds_enable_gsh_rotation               Enables rotation of the contents of the
                                            mysql.rds_global_status_history table to
                                            mysql.rds_global_status_history_old at intervals
                                            specified by rds_set_gsh_rotation.

mysql.rds_set_gsh_rotation                  Specifies the interval, in days, between table rotations.
                                            Default value is 7.

mysql.rds_disable_gsh_rotation              Disables table rotation.

mysql.rds_rotate_global_status_history      Rotates the contents of the
                                            mysql.rds_global_status_history table to
                                            mysql.rds_global_status_history_old on demand.
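
For example, a minimal sequence that turns on collection and rotation might look like the following
sketch; the 10-minute and 7-day intervals are arbitrary values chosen for illustration.

CALL mysql.rds_enable_gsh_collector;    -- start taking snapshots
CALL mysql.rds_set_gsh_collector(10);   -- snapshot every 10 minutes
CALL mysql.rds_enable_gsh_rotation;     -- rotate the history table
CALL mysql.rds_set_gsh_rotation(7);     -- rotate every 7 days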

When GoSH is running, you can query the tables that it writes to. For example, to query the hit ratio of
the Innodb buffer pool, you would issue the following query:

select a.collection_end, a.collection_start,
    ((a.variable_delta - b.variable_delta) / a.variable_delta) * 100 as "HitRatio"
from mysql.rds_global_status_history as a
    join mysql.rds_global_status_history as b on a.collection_end = b.collection_end
where a.variable_name = 'Innodb_buffer_pool_read_requests'
    and b.variable_name = 'Innodb_buffer_pool_reads';


Local time zone for MySQL DB instances


By default, the time zone for a MySQL DB instance is Universal Time Coordinated (UTC). You can set the
time zone for your DB instance to the local time zone for your application instead.

To set the local time zone for a DB instance, set the time_zone parameter in the parameter group for
your DB instance to one of the supported values listed later in this section. When you set the time_zone
parameter for a parameter group, all DB instances and read replicas that are using that parameter group
change to use the new local time zone. For information on setting parameters in a parameter group, see
Working with parameter groups (p. 347).
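
For example, from the AWS CLI the change might look like the following sketch. The parameter group
name is a placeholder, and US/Eastern is one of the supported values listed later in this section; if
the parameter can't be applied immediately for your engine version, use ApplyMethod=pending-reboot
instead.

aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mysql-params \
    --parameters "ParameterName=time_zone,ParameterValue=US/Eastern,ApplyMethod=immediate"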

After you set the local time zone, all new connections to the database reflect the change. If you have any
open connections to your database when you change the local time zone, you won't see the local time
zone update until after you close the connection and open a new connection.

You can set a different local time zone for a DB instance and one or more of its read replicas. To do this,
use a different parameter group for the DB instance and the replica or replicas and set the time_zone
parameter in each parameter group to a different local time zone.

If you are replicating across AWS Regions, then the source DB instance and the read replica use different
parameter groups (parameter groups are unique to an AWS Region). To use the same local time zone
for each instance, you must set the time_zone parameter in the instance's and read replica's parameter
groups.

When you restore a DB instance from a DB snapshot, the local time zone is set to UTC. You can update
the time zone to your local time zone after the restore is complete. If you restore a DB instance to a
point in time, then the local time zone for the restored DB instance is the time zone setting from the
parameter group of the restored DB instance.

The Internet Assigned Numbers Authority (IANA) publishes new time zones at https://www.iana.org/
time-zones several times a year. Every time RDS releases a new minor maintenance release of MySQL, it
ships with the latest time zone data at the time of the release. When you use the latest RDS for MySQL
versions, you have recent time zone data from RDS. To ensure that your DB instance has recent time
zone data, we recommend upgrading to a higher DB engine version. Alternatively, you can modify the
time zone tables in MySQL DB instances manually. To do so, you can use SQL commands or run the
mysql_tzinfo_to_sql tool in a SQL client. After updating the time zone data manually, reboot your DB
instance so that the changes take effect. RDS doesn't modify or reset the time zone data of running DB
instances. New time zone data is installed only when you perform a database engine version upgrade.

You can set your local time zone to one of the following values.

Africa/Cairo Asia/Riyadh

Africa/Casablanca Asia/Seoul

Africa/Harare Asia/Shanghai

Africa/Monrovia Asia/Singapore

Africa/Nairobi Asia/Taipei

Africa/Tripoli Asia/Tehran

Africa/Windhoek Asia/Tokyo

America/Araguaina Asia/Ulaanbaatar

America/Asuncion Asia/Vladivostok


America/Bogota Asia/Yakutsk

America/Buenos_Aires Asia/Yerevan

America/Caracas Atlantic/Azores

America/Chihuahua Australia/Adelaide

America/Cuiaba Australia/Brisbane

America/Denver Australia/Darwin

America/Fortaleza Australia/Hobart

America/Guatemala Australia/Perth

America/Halifax Australia/Sydney

America/Manaus Brazil/East

America/Matamoros Canada/Newfoundland

America/Monterrey Canada/Saskatchewan

America/Montevideo Canada/Yukon

America/Phoenix Europe/Amsterdam

America/Santiago Europe/Athens

America/Tijuana Europe/Dublin

Asia/Amman Europe/Helsinki

Asia/Ashgabat Europe/Istanbul

Asia/Baghdad Europe/Kaliningrad

Asia/Baku Europe/Moscow

Asia/Bangkok Europe/Paris

Asia/Beirut Europe/Prague

Asia/Calcutta Europe/Sarajevo

Asia/Damascus Pacific/Auckland

Asia/Dhaka Pacific/Fiji

Asia/Irkutsk Pacific/Guam

Asia/Jerusalem Pacific/Honolulu

Asia/Kabul Pacific/Samoa

Asia/Karachi US/Alaska

Asia/Kathmandu US/Central

Asia/Krasnoyarsk US/Eastern

Asia/Magadan US/East-Indiana


Asia/Muscat US/Pacific

Asia/Novosibirsk UTC


Known issues and limitations for Amazon RDS for MySQL
Known issues and limitations for working with Amazon RDS for MySQL are as follows.

Topics
• InnoDB reserved word (p. 1752)
• Storage-full behavior for Amazon RDS for MySQL (p. 1752)
• Inconsistent InnoDB buffer pool size (p. 1753)
• Index merge optimization returns incorrect results (p. 1753)
• Log file size (p. 1754)
• MySQL parameter exceptions for Amazon RDS DB instances (p. 1754)
• MySQL file size limits in Amazon RDS (p. 1754)
• MySQL Keyring Plugin not supported (p. 1756)
• Custom ports (p. 1756)
• MySQL stored procedure limitations (p. 1756)
• GTID-based replication with an external source instance (p. 1756)

InnoDB reserved word


InnoDB is a reserved word for RDS for MySQL. You can't use this name for a MySQL database.

Storage-full behavior for Amazon RDS for MySQL


When storage becomes full for a MySQL DB instance, there can be metadata inconsistencies, dictionary
mismatches, and orphan tables. To prevent these issues, Amazon RDS automatically stops a DB instance
that reaches the storage-full state.

A MySQL DB instance reaches the storage-full state in the following cases:

• The DB instance has less than 20,000 MiB of storage, and available storage reaches 200 MiB or less.
• The DB instance has more than 102,400 MiB of storage, and available storage reaches 1024 MiB or
less.
• The DB instance has between 20,000 MiB and 102,400 MiB of storage, and has less than 1% of storage
available.

After Amazon RDS stops a DB instance automatically because it reached the storage-full state, you
can still modify it. To restart the DB instance, complete at least one of the following:

• Modify the DB instance to enable storage autoscaling.

For more information about storage autoscaling, see Managing capacity automatically with Amazon
RDS storage autoscaling (p. 480). For a CLI sketch of this option, see the example after this list.
• Modify the DB instance to increase its storage capacity.

For more information about increasing storage capacity, see Increasing DB instance storage
capacity (p. 478).
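
As a sketch of the first option, the following AWS CLI call turns on storage autoscaling by setting
an upper storage limit. The instance identifier and the 1000 GiB ceiling are placeholders.

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --max-allocated-storage 1000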


After you make one of these changes, the DB instance is restarted automatically. For information about
modifying a DB instance, see Modifying an Amazon RDS DB instance (p. 401).

Inconsistent InnoDB buffer pool size


For MySQL 5.7, there is currently a bug in the way that the InnoDB buffer pool size is managed. MySQL
5.7 might adjust the value of the innodb_buffer_pool_size parameter to a large value that can
result in the InnoDB buffer pool growing too large and using up too much memory. This effect can cause
the MySQL database engine to stop running or can prevent it from starting. This issue is more common
for DB instance classes that have less memory available.

To resolve this issue, set the value of the innodb_buffer_pool_size parameter to a
multiple of the product of the innodb_buffer_pool_instances parameter value and
the innodb_buffer_pool_chunk_size parameter value. For example, you might set the
innodb_buffer_pool_size parameter value to a multiple of eight times the product of the
innodb_buffer_pool_instances and innodb_buffer_pool_chunk_size parameter values, as
shown in the following example.

innodb_buffer_pool_chunk_size = 536870912
innodb_buffer_pool_instances = 4
innodb_buffer_pool_size = (536870912 * 4) * 8 = 17179869184

For details on this MySQL 5.7 bug, see https://bugs.mysql.com/bug.php?id=79379 in the MySQL
documentation.

Index merge optimization returns incorrect results


Queries that use index merge optimization might return incorrect results due to a bug in the MySQL
query optimizer that was introduced in MySQL 5.5.37. When you issue a query against a table with
multiple indexes, the optimizer scans ranges of rows based on the multiple indexes, but does not
merge the results together correctly. For more information on the query optimizer bug, see
http://bugs.mysql.com/bug.php?id=72745 and http://bugs.mysql.com/bug.php?id=68194 in the MySQL bug
database.

For example, consider a query on a table with two indexes where the search arguments reference the
indexed columns.

SELECT * FROM table1
WHERE indexed_col1 = 'value1' AND indexed_col2 = 'value2';

In this case, the search engine will search both indexes. However, due to the bug, the merged results are
incorrect.

To resolve this issue, you can do one of the following:

• Set the optimizer_switch parameter to index_merge=off in the DB parameter group for your
MySQL DB instance. For information on setting DB parameter group parameters, see Working with
parameter groups (p. 347).
• Upgrade your MySQL DB instance to MySQL version 5.7 or 8.0. For more information, see Upgrading
the MySQL DB engine (p. 1664).
• If you cannot upgrade your instance or change the optimizer_switch parameter, you can work
around the bug by explicitly identifying an index for the query, for example:

SELECT * FROM table1
USE INDEX (covering_index)
WHERE indexed_col1 = 'value1' AND indexed_col2 = 'value2';

For more information, see Index merge optimization in the MySQL documentation.

Log file size


For MySQL, there is a size limit on BLOBs written to the redo log. To account for this limit, ensure
that the innodb_log_file_size parameter for your MySQL DB instance is 10 times larger than the
largest BLOB data size found in your tables, plus the length of other variable length fields (VARCHAR,
VARBINARY, TEXT) in the same tables. For information on how to set parameter values, see Working
with parameter groups (p. 347). For information on the innodb_log_file_size parameter, see the
MySQL documentation.

MySQL parameter exceptions for Amazon RDS DB instances
Some MySQL parameters require special considerations when used with an Amazon RDS DB instance.

lower_case_table_names
Because Amazon RDS uses a case-sensitive file system, setting the value of the
lower_case_table_names server parameter to 2 (names stored as given but compared in lowercase) is
not supported. The following are the supported values for Amazon RDS for MySQL DB instances:

• 0 (names stored as given and comparisons are case-sensitive) is supported for all RDS for MySQL
versions.
• 1 (names stored in lowercase and comparisons are not case-sensitive) is supported for RDS for MySQL
version 5.7 and version 8.0.28 and higher 8.0 versions.

Set the lower_case_table_names parameter in a custom DB parameter group before creating a DB
instance. Then, specify the custom DB parameter group when you create the DB instance.

When a parameter group is associated with a MySQL DB instance with a version lower than 8.0, we
recommend that you avoid changing the lower_case_table_names parameter in the parameter
group. Changing it could cause inconsistencies with point-in-time recovery backups and read replica DB
instances.

When a parameter group is associated with a version 8.0 MySQL DB instance, you can't modify the
lower_case_table_names parameter in the parameter group.

Read replicas should always use the same lower_case_table_names parameter value as the source
DB instance.

long_query_time
You can set the long_query_time parameter to a floating point value so that you can log slow queries
to the MySQL slow query log with microsecond resolution. You can set a value such as 0.1 seconds, which
would be 100 milliseconds, to help when debugging slow transactions that take less than one second.

MySQL file size limits in Amazon RDS


For MySQL DB instances, the maximum provisioned storage limit constrains the size of a table to a
maximum size of 16 TB when using InnoDB file-per-table tablespaces. This limit also constrains the
system tablespace to a maximum size of 16 TB. InnoDB file-per-table tablespaces (with tables each in
their own tablespace) is set by default for MySQL DB instances.
Note
Some existing DB instances have a lower limit. For example, MySQL DB instances created before
April 2014 have a file and table size limit of 2 TB. This 2 TB file size limit also applies to DB
instances or read replicas created from DB snapshots taken before April 2014, regardless of
when the DB instance was created.

There are advantages and disadvantages to using InnoDB file-per-table tablespaces, depending on your
application. To determine the best approach for your application, see File-per-table tablespaces in the
MySQL documentation.

We don't recommend allowing tables to grow to the maximum file size. In general, a better practice is to
partition data into smaller tables, which can improve performance and recovery times.

One option that you can use for breaking up a large table into smaller tables is partitioning. Partitioning
distributes portions of your large table into separate files based on rules that you specify. For example,
if you store transactions by date, you can create partitioning rules that distribute older transactions into
separate files using partitioning. Then periodically, you can archive the historical transaction data that
doesn't need to be readily available to your application. For more information, see Partitioning in the
MySQL documentation.

Because there is no single system table or view that provides the size of all the tables and the InnoDB
system tablespace, you must query multiple tables to determine the size of the tablespaces.

To determine the size of the InnoDB system tablespace and the data dictionary tablespace

• Use the following SQL command to determine if any of your tablespaces are too large and are
candidates for partitioning.
Note
The data dictionary tablespace is specific to MySQL 8.0.

select FILE_NAME, TABLESPACE_NAME,
    ROUND(((TOTAL_EXTENTS*EXTENT_SIZE)/1024/1024/1024), 2) as "File Size (GB)"
from information_schema.FILES
where tablespace_name in ('mysql','innodb_system');

To determine the size of InnoDB user tables outside of the InnoDB system tablespace (for
MySQL 5.7 versions)

• Use the following SQL command to determine if any of your tables are too large and are candidates
for partitioning.

SELECT SPACE,NAME,ROUND((ALLOCATED_SIZE/1024/1024/1024), 2)
as "Tablespace Size (GB)"
FROM information_schema.INNODB_SYS_TABLESPACES ORDER BY 3 DESC;

To determine the size of InnoDB user tables outside of the InnoDB system tablespace (for
MySQL 8.0 versions)

• Use the following SQL command to determine if any of your tables are too large and are candidates
for partitioning.

SELECT SPACE,NAME,ROUND((ALLOCATED_SIZE/1024/1024/1024), 2)
as "Tablespace Size (GB)"
FROM information_schema.INNODB_TABLESPACES ORDER BY 3 DESC;


To determine the size of non-InnoDB user tables

• Use the following SQL command to determine if any of your non-InnoDB user tables are too large.

SELECT TABLE_SCHEMA, TABLE_NAME,
    ROUND(((DATA_LENGTH + INDEX_LENGTH + DATA_FREE) / 1024 / 1024 / 1024), 2) AS "Approximate size (GB)"
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema')
    AND ENGINE <> 'InnoDB';

To enable InnoDB file-per-table tablespaces

• Set the innodb_file_per_table parameter to 1 in the parameter group for the DB instance.

To disable InnoDB file-per-table tablespaces

• Set the innodb_file_per_table parameter to 0 in the parameter group for the DB instance.

For information on updating a parameter group, see Working with parameter groups (p. 347).

When you have enabled or disabled InnoDB file-per-table tablespaces, you can issue an ALTER TABLE
command to move a table from the global tablespace to its own tablespace, or from its own tablespace
to the global tablespace as shown in the following example:

ALTER TABLE table_name ENGINE=InnoDB;

MySQL Keyring Plugin not supported


Currently, Amazon RDS for MySQL doesn't support the MySQL keyring_aws Amazon Web Services
Keyring Plugin.

Custom ports
Amazon RDS blocks connections to custom port 33060 for the MySQL engine. Choose a different port
for your MySQL engine.

MySQL stored procedure limitations


The mysql.rds_kill (p. 1761) and mysql.rds_kill_query (p. 1761) stored procedures can't terminate
sessions or queries owned by MySQL users with usernames longer than 16 characters on the following
RDS for MySQL versions:

• 8.0.32 and lower 8.0 versions
• 5.7.41 and lower 5.7 versions

GTID-based replication with an external source instance
Amazon RDS doesn't support replication based on global transaction identifiers (GTIDs) from an external
MySQL instance into an Amazon RDS for MySQL DB instance that requires setting GTID_PURGED during
configuration.


RDS for MySQL stored procedure reference


These topics describe system stored procedures that are available for Amazon RDS instances running the
MySQL DB engine. The master user must run these procedures.

Topics
• Configuring (p. 1758)
• Ending a session or query (p. 1761)
• Logging (p. 1763)
• Managing the Global Status History (p. 1764)
• Replicating (p. 1767)
• Warming the InnoDB cache (p. 1784)


Configuring
The following stored procedures set and show configuration parameters, such as for binary log file
retention.

Topics
• mysql.rds_set_configuration (p. 1758)
• mysql.rds_show_configuration (p. 1760)

mysql.rds_set_configuration
Specifies the number of hours to retain binary logs or the number of seconds to delay replication.

Syntax

CALL mysql.rds_set_configuration(name,value);

Parameters
name

The name of the configuration parameter to set.


value

The value of the configuration parameter.

Usage notes
The mysql.rds_set_configuration procedure supports the following configuration parameters:

• binlog retention hours (p. 1758)


• source delay (p. 1759)
• target delay (p. 1759)

The configuration parameters are stored permanently and survive any DB instance reboot or failover.

binlog retention hours

The binlog retention hours parameter is used to specify the number of hours to retain binary log
files. Amazon RDS normally purges a binary log as soon as possible, but the binary log might still be
required for replication with a MySQL database external to RDS.

The default value of binlog retention hours is NULL. For RDS for MySQL, NULL means binary logs
aren't retained (0 hours).

To specify the number of hours to retain binary logs on a DB instance, use the
mysql.rds_set_configuration stored procedure and specify a period with enough time for
replication to occur, as shown in the following example.

call mysql.rds_set_configuration('binlog retention hours', 24);


Note
You can't use the value 0 for binlog retention hours.


For MySQL DB instances, the maximum binlog retention hours value is 168 (7 days).

After you set the retention period, monitor storage usage for the DB instance to make sure that the
retained binary logs don't take up too much storage.

source delay

Use the source delay parameter in a read replica to specify the number of seconds to delay
replication from the read replica to its source DB instance. Amazon RDS normally replicates changes
as soon as possible, but you might want some environments to delay replication. For example, when
replication is delayed, you can roll forward a delayed read replica to the time just before a disaster. If a
table is dropped accidentally, you can use delayed replication to quickly recover it. The default value of
source delay is 0 (don't delay replication).

When you use this parameter, it runs mysql.rds_set_source_delay (p. 1777) and applies CHANGE MASTER
TO MASTER_DELAY = input value. If successful, the procedure saves the source delay parameter to
the mysql.rds_configuration table.

To specify the number of seconds for Amazon RDS to delay replication to a source DB instance, use
the mysql.rds_set_configuration stored procedure and specify the number of seconds to delay
replication. In the following example, the replication is delayed by at least one hour (3,600 seconds).

call mysql.rds_set_configuration('source delay', 3600);

The procedure then runs mysql.rds_set_source_delay(3600).

The limit for the source delay parameter is one day (86400 seconds).
Note
The source delay parameter isn't supported for RDS for MySQL version 8.0 or MariaDB
versions below 10.2.

target delay

Use the target delay parameter to specify the number of seconds to delay replication between
a DB instance and any future RDS-managed read replicas created from this instance. This parameter
is ignored for non-RDS-managed read replicas. Amazon RDS normally replicates changes as soon as
possible, but you might want some environments to delay replication. For example, when replication
is delayed, you can roll forward a delayed read replica to the time just before a disaster. If a table is
dropped accidentally, you can use delayed replication to recover it quickly. The default value of target
delay is 0 (don't delay replication).

For disaster recovery, you can use this configuration parameter with
the mysql.rds_start_replication_until (p. 1780) stored procedure or the
mysql.rds_start_replication_until_gtid (p. 1781) stored procedure. To roll forward changes to a delayed
read replica to the time just before a disaster, you can run the mysql.rds_set_configuration
procedure with this parameter set. After the mysql.rds_start_replication_until or
mysql.rds_start_replication_until_gtid procedure stops replication, you can promote the read
replica to be the new primary DB instance by using the instructions in Promoting a read replica to be a
standalone DB instance (p. 447).

To use the mysql.rds_start_replication_until_gtid procedure, GTID-based replication
must be enabled. To skip a specific GTID-based transaction that is known to cause disaster, you can
use the mysql.rds_skip_transaction_with_gtid (p. 1778) stored procedure. For more information
about working with GTID-based replication, see Using GTID-based replication for Amazon RDS for
MySQL (p. 1719).

To specify the number of seconds for Amazon RDS to delay replication to a read replica, use the
mysql.rds_set_configuration stored procedure and specify the number of seconds to delay


replication. The following example specifies that replication is delayed by at least one hour (3,600
seconds).

call mysql.rds_set_configuration('target delay', 3600);

The limit for the target delay parameter is one day (86400 seconds).
Note
The target delay parameter isn't supported for RDS for MySQL version 8.0 or MariaDB
versions earlier than 10.2.
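
For example, a disaster recovery workflow might look like the following sketch. The binary log file name and position are placeholders; substitute the location just before the disaster.

-- On the source DB instance, before creating read replicas:
call mysql.rds_set_configuration('target delay', 3600);

-- Later, on a delayed read replica created from this instance:
call mysql.rds_start_replication_until('mysql-bin-changelog.000777', 120);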

mysql.rds_show_configuration
Shows the number of hours that binary logs are retained.

Syntax

CALL mysql.rds_show_configuration;

Usage notes
To verify the number of hours that Amazon RDS retains binary logs, use the
mysql.rds_show_configuration stored procedure.

Examples
The following example displays the retention period:

call mysql.rds_show_configuration;
name                    value   description
binlog retention hours  24      binlog retention hours specifies the duration in hours before binary logs are automatically deleted.


Ending a session or query


The following stored procedures end a session or query.

Topics
• mysql.rds_kill (p. 1761)
• mysql.rds_kill_query (p. 1761)

mysql.rds_kill
Ends a connection to the MySQL server.

Syntax

CALL mysql.rds_kill(processID);

Parameters
processID

The identity of the connection thread to be ended.

Usage notes
Each connection to the MySQL server runs in a separate thread. To end a connection, use the
mysql.rds_kill procedure and pass in the thread ID of that connection. To obtain the thread ID, use
the MySQL SHOW PROCESSLIST command.
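
For example, you might list the current connection threads and then end one of them, as in the following sketch. The thread ID 4243 is illustrative.

SHOW PROCESSLIST;
CALL mysql.rds_kill(4243);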

For information about limitations, see MySQL stored procedure limitations (p. 1756).

Examples
The following example ends a connection with a thread ID of 4243:

CALL mysql.rds_kill(4243);

mysql.rds_kill_query
Ends a query running against the MySQL server.

Syntax

CALL mysql.rds_kill_query(processID);

Parameters
processID

The identity of the process or thread that is running the query to be ended.


Usage notes
To stop a query running against the MySQL server, use the mysql.rds_kill_query procedure and
pass in the connection ID of the thread that is running the query. The procedure then terminates the
connection.

To obtain the ID, query the MySQL INFORMATION_SCHEMA PROCESSLIST table or use the MySQL SHOW
PROCESSLIST command. The value in the ID column from SHOW PROCESSLIST or SELECT * FROM
INFORMATION_SCHEMA.PROCESSLIST is the processID.
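
For example, to find a long-running query before ending it, you might filter the PROCESSLIST table by execution time, as in the following sketch. The 600-second threshold is arbitrary; pass the returned id value to mysql.rds_kill_query.

SELECT id, user, time, info
FROM information_schema.processlist
WHERE command = 'Query' AND time > 600;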

For information about limitations, see MySQL stored procedure limitations (p. 1756).

Examples
The following example stops a query with a query thread ID of 230040:

CALL mysql.rds_kill_query(230040);


Logging
The following stored procedures rotate MySQL logs to backup tables. For more information, see MySQL
database log files (p. 915).

Topics
• mysql.rds_rotate_general_log (p. 1763)
• mysql.rds_rotate_slow_log (p. 1763)

mysql.rds_rotate_general_log
Rotates the mysql.general_log table to a backup table.

Syntax

CALL mysql.rds_rotate_general_log;

Usage notes
You can rotate the mysql.general_log table to a backup table by calling the
mysql.rds_rotate_general_log procedure. When log tables are rotated, the current log table is
copied to a backup log table and the entries in the current log table are removed. If a backup log table
already exists, then it is deleted before the current log table is copied to the backup. You can query
the backup log table if needed. The backup log table for the mysql.general_log table is named
mysql.general_log_backup.

You can run this procedure only when the log_output parameter is set to TABLE.
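
For example, a typical rotation workflow might look like the following sketch, which confirms that logging to tables is turned on, rotates the log, and then inspects the backup table. This assumes that mysql.general_log_backup mirrors the columns of mysql.general_log.

SHOW GLOBAL VARIABLES LIKE 'log_output';
CALL mysql.rds_rotate_general_log;
SELECT event_time, user_host, argument
FROM mysql.general_log_backup
ORDER BY event_time DESC
LIMIT 10;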

mysql.rds_rotate_slow_log
Rotates the mysql.slow_log table to a backup table.

Syntax

CALL mysql.rds_rotate_slow_log;

Usage notes
You can rotate the mysql.slow_log table to a backup table by calling the
mysql.rds_rotate_slow_log procedure. When log tables are rotated, the current log table is copied
to a backup log table and the entries in the current log table are removed. If a backup log table already
exists, then it is deleted before the current log table is copied to the backup.

You can query the backup log table if needed. The backup log table for the mysql.slow_log table is
named mysql.slow_log_backup.
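
Similarly, the following sketch rotates the slow log and then inspects the most recent entries in the backup table, assuming that mysql.slow_log_backup mirrors the columns of mysql.slow_log.

CALL mysql.rds_rotate_slow_log;
SELECT start_time, query_time, sql_text
FROM mysql.slow_log_backup
ORDER BY start_time DESC
LIMIT 10;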


Managing the Global Status History


Amazon RDS provides a set of procedures that take snapshots of the values of status variables over time
and write them to a table, along with any changes since the last snapshot. This infrastructure is called
Global Status History. For more information, see Managing the Global Status History (p. 1747).

The following stored procedures manage how the Global Status History is collected and maintained.

Topics
• mysql.rds_collect_global_status_history (p. 1764)
• mysql.rds_disable_gsh_collector (p. 1764)
• mysql.rds_disable_gsh_rotation (p. 1764)
• mysql.rds_enable_gsh_collector (p. 1764)
• mysql.rds_enable_gsh_rotation (p. 1765)
• mysql.rds_rotate_global_status_history (p. 1765)
• mysql.rds_set_gsh_collector (p. 1765)
• mysql.rds_set_gsh_rotation (p. 1765)

mysql.rds_collect_global_status_history
Takes a snapshot on demand for the Global Status History.

Syntax

CALL mysql.rds_collect_global_status_history;

mysql.rds_disable_gsh_collector
Turns off snapshots taken by the Global Status History.

Syntax

CALL mysql.rds_disable_gsh_collector;

mysql.rds_disable_gsh_rotation
Turns off rotation of the mysql.global_status_history table.

Syntax

CALL mysql.rds_disable_gsh_rotation;

mysql.rds_enable_gsh_collector
Turns on the Global Status History to take default snapshots at intervals specified by
rds_set_gsh_collector.

Syntax


CALL mysql.rds_enable_gsh_collector;

mysql.rds_enable_gsh_rotation
Turns on rotation of the contents of the mysql.global_status_history table to
mysql.global_status_history_old at intervals specified by rds_set_gsh_rotation.

Syntax

CALL mysql.rds_enable_gsh_rotation;

mysql.rds_rotate_global_status_history
Rotates the contents of the mysql.global_status_history table to
mysql.global_status_history_old on demand.

Syntax

CALL mysql.rds_rotate_global_status_history;

mysql.rds_set_gsh_collector
Specifies the interval, in minutes, between snapshots taken by the Global Status History.

Syntax

CALL mysql.rds_set_gsh_collector(intervalPeriod);

Parameters
intervalPeriod

The interval, in minutes, between snapshots. Default value is 5.

mysql.rds_set_gsh_rotation
Specifies the interval, in days, between rotations of the mysql.global_status_history table.

Syntax

CALL mysql.rds_set_gsh_rotation(intervalPeriod);

Parameters
intervalPeriod

The interval, in days, between table rotations. Default value is 7.
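
For example, the following sketch turns on the Global Status History, takes snapshots every 10 minutes, and rotates the history table every 3 days. The interval values are illustrative.

CALL mysql.rds_enable_gsh_collector;
CALL mysql.rds_set_gsh_collector(10);
CALL mysql.rds_enable_gsh_rotation;
CALL mysql.rds_set_gsh_rotation(3);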


Replicating
The following stored procedures control how transactions are replicated from an external database
into RDS for MySQL, or from RDS for MySQL to an external database. To learn how to use replication
based on global transaction identifiers (GTIDs) with RDS for MySQL, see Using GTID-based replication for
Amazon RDS for MySQL (p. 1719).

Topics
• mysql.rds_next_master_log (p. 1767)
• mysql.rds_reset_external_master (p. 1769)
• mysql.rds_set_external_master (p. 1769)
• mysql.rds_set_external_master_with_auto_position (p. 1772)
• mysql.rds_set_external_master_with_delay (p. 1774)
• mysql.rds_set_master_auto_position (p. 1777)
• mysql.rds_set_source_delay (p. 1777)
• mysql.rds_skip_transaction_with_gtid (p. 1778)
• mysql.rds_skip_repl_error (p. 1779)
• mysql.rds_start_replication (p. 1780)
• mysql.rds_start_replication_until (p. 1780)
• mysql.rds_start_replication_until_gtid (p. 1781)
• mysql.rds_stop_replication (p. 1782)

mysql.rds_next_master_log
Changes the source database instance log position to the start of the next binary log on the source
database instance. Use this procedure only if you are receiving replication I/O error 1236 on a read
replica.

Syntax

CALL mysql.rds_next_master_log(
curr_master_log
);

Parameters
curr_master_log

The index of the current master log file. For example, if the current file is named mysql-bin-
changelog.012345, then the index is 12345. To determine the current master log file name, run
the SHOW REPLICA STATUS command and view the Source_Log_File field.
Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA
STATUS. If you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.

Usage notes
The master user must run the mysql.rds_next_master_log procedure.


Warning
Call mysql.rds_next_master_log only if replication fails after a failover of a Multi-AZ
DB instance that is the replication source, and the Last_IO_Errno field of SHOW REPLICA
STATUS reports I/O error 1236.
Calling mysql.rds_next_master_log can result in data loss in the read replica if transactions
in the source instance were not written to the binary log on disk before the failover event
occurred.
You can reduce the chance of this happening by setting the source instance parameters
sync_binlog and innodb_support_xa to 1, although this might reduce performance. For
more information, see Troubleshooting a MySQL read replica problem (p. 1718).

Examples
Assume replication fails on an RDS for MySQL read replica. Running SHOW REPLICA STATUS\G on the
read replica returns the following result:

*************************** 1. row ***************************


Replica_IO_State:
Source_Host: myhost.XXXXXXXXXXXXXXX.rr-rrrr-1.rds.amazonaws.com
Source_User: MasterUser
Source_Port: 3306
Connect_Retry: 10
Source_Log_File: mysql-bin-changelog.012345
Read_Source_Log_Pos: 1219393
Relay_Log_File: relaylog.012340
Relay_Log_Pos: 30223388
Relay_Source_Log_File: mysql-bin-changelog.012345
Replica_IO_Running: No
Replica_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Source_Log_Pos: 30223232
Relay_Log_Space: 5248928866
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Source_SSL_Allowed: No
Source_SSL_CA_File:
Source_SSL_CA_Path:
Source_SSL_Cert:
Source_SSL_Cipher:
Source_SSL_Key:
Seconds_Behind_Master: NULL
Source_SSL_Verify_Server_Cert: No
Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from
binary log: 'Client requested master to start replication from impossible position; the
first event 'mysql-bin-changelog.013406' at 1219393, the last event read from '/rdsdbdata/
log/binlog/mysql-bin-changelog.012345' at 4, the last byte read from '/rdsdbdata/log/
binlog/mysql-bin-changelog.012345' at 4.'
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Source_Server_Id: 67285976


The Last_IO_Errno field shows that the instance is receiving I/O error 1236. The Source_Log_File
field shows that the file name is mysql-bin-changelog.012345, which means that the log file
index is 12345. To resolve the error, you can call mysql.rds_next_master_log with the following
parameter:

CALL mysql.rds_next_master_log(12345);

Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA STATUS. If
you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.

mysql.rds_reset_external_master
Reconfigures an RDS for MySQL DB instance to no longer be a read replica of an instance of MySQL
running external to Amazon RDS.
Important
To run this procedure, autocommit must be enabled. To enable it, set the autocommit
parameter to 1. For information about modifying parameters, see Modifying parameters in a DB
parameter group (p. 352).

Syntax

CALL mysql.rds_reset_external_master;

Usage notes
The master user must run the mysql.rds_reset_external_master procedure. This procedure must
be run on the MySQL DB instance to be removed as a read replica of a MySQL instance running external
to Amazon RDS.
Note
We recommend that you use read replicas to manage replication between two Amazon RDS
DB instances when possible. When you do so, we recommend that you use only this and
other replication-related stored procedures. These practices enable more complex replication
topologies between Amazon RDS DB instances. We offer these stored procedures primarily to
enable replication with MySQL instances running external to Amazon RDS. For information
about managing replication between Amazon RDS DB instances, see Working with DB instance
read replicas (p. 438).

For more information about using replication to import data from an instance of MySQL running
external to Amazon RDS, see Configuring binary log file position replication with an external source
instance (p. 1724).

mysql.rds_set_external_master
Configures an RDS for MySQL DB instance to be a read replica of an instance of MySQL running external
to Amazon RDS.
Important
To run this procedure, autocommit must be enabled. To enable it, set the autocommit
parameter to 1. For information about modifying parameters, see Modifying parameters in a DB
parameter group (p. 352).
Note
You can use the mysql.rds_set_external_master_with_delay (p. 1774) stored procedure to
configure an external source database instance and delayed replication.


Syntax

CALL mysql.rds_set_external_master (
host_name
, host_port
, replication_user_name
, replication_user_password
, mysql_binary_log_file_name
, mysql_binary_log_file_location
, ssl_encryption
);

Parameters
host_name

The host name or IP address of the MySQL instance running external to Amazon RDS to become the
source database instance.
host_port

The port used by the MySQL instance running external to Amazon RDS to be configured as the
source database instance. If your network configuration includes Secure Shell (SSH) port replication
that converts the port number, specify the port number that is exposed by SSH.
replication_user_name

The ID of a user with REPLICATION CLIENT and REPLICATION SLAVE permissions on the MySQL
instance running external to Amazon RDS. We recommend that you provide an account that is used
solely for replication with the external instance.
replication_user_password

The password of the user ID specified in replication_user_name.


mysql_binary_log_file_name

The name of the binary log on the source database instance that contains the replication
information.
mysql_binary_log_file_location

The location in the mysql_binary_log_file_name binary log at which replication starts reading
the replication information.

You can determine the binlog file name and location by running SHOW MASTER STATUS on the
source database instance.
ssl_encryption

A value that specifies whether Secure Socket Layer (SSL) encryption is used on the replication
connection. 1 specifies to use SSL encryption, 0 specifies to not use encryption. The default is 0.
Note
The MASTER_SSL_VERIFY_SERVER_CERT option isn't supported. This option is set to 0,
which means that the connection is encrypted, but the certificates aren't verified.

Usage notes
The master user must run the mysql.rds_set_external_master procedure. This procedure must be
run on the MySQL DB instance to be configured as the read replica of a MySQL instance running external
to Amazon RDS.


Before you run mysql.rds_set_external_master, you must configure the instance of MySQL
running external to Amazon RDS to be a source database instance. To connect to the MySQL
instance running external to Amazon RDS, you must specify replication_user_name and
replication_user_password values that indicate a replication user that has REPLICATION CLIENT
and REPLICATION SLAVE permissions on the external instance of MySQL.

To configure an external instance of MySQL as a source database instance

1. Using the MySQL client of your choice, connect to the external instance of MySQL and create a user
account to be used for replication. The following is an example.

MySQL 5.7

CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED BY 'password';

MySQL 8.0

CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED WITH mysql_native_password BY 'password';

Note
As a security best practice, specify a password other than the one shown here.
2. On the external instance of MySQL, grant REPLICATION CLIENT and REPLICATION SLAVE
privileges to your replication user. The following example grants REPLICATION CLIENT and
REPLICATION SLAVE privileges on all databases for the 'repl_user' user for your domain.

MySQL 5.7

GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com' IDENTIFIED BY 'password';

MySQL 8.0

GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com';

To use encrypted replication, configure the source database instance to use SSL connections.
Note
We recommend that you use read replicas to manage replication between two Amazon RDS
DB instances when possible. When you do so, we recommend that you use only this and
other replication-related stored procedures. These practices enable more complex replication
topologies between Amazon RDS DB instances. We offer these stored procedures primarily to
enable replication with MySQL instances running external to Amazon RDS. For information
about managing replication between Amazon RDS DB instances, see Working with DB instance
read replicas (p. 438).

After calling mysql.rds_set_external_master to configure an Amazon RDS DB instance as a read
replica, you can call mysql.rds_start_replication (p. 1780) on the read replica to start the
replication process. You can call mysql.rds_reset_external_master (p. 1769) to remove the read replica
configuration.

When mysql.rds_set_external_master is called, Amazon RDS records the time, user, and an action
of set master in the mysql.rds_history and mysql.rds_replication_status tables.


Examples
When run on a MySQL DB instance, the following example configures the DB instance to be a read replica
of an instance of MySQL running external to Amazon RDS.

call mysql.rds_set_external_master(
'Externaldb.some.com',
3306,
'repl_user',
'password',
'mysql-bin-changelog.0777',
120,
0);

mysql.rds_set_external_master_with_auto_position
Configures an RDS for MySQL DB instance to be a read replica of an instance of MySQL running external
to Amazon RDS. This procedure also configures delayed replication and replication based on global
transaction identifiers (GTIDs).
Important
To run this procedure, autocommit must be enabled. To enable it, set the autocommit
parameter to 1. For information about modifying parameters, see Modifying parameters in a DB
parameter group (p. 352).

Syntax

CALL mysql.rds_set_external_master_with_auto_position (
host_name
, host_port
, replication_user_name
, replication_user_password
, ssl_encryption
, delay
);

Parameters
host_name

The host name or IP address of the MySQL instance running external to Amazon RDS to become the
source database instance.
host_port

The port used by the MySQL instance running external to Amazon RDS to be configured as the
source database instance. If your network configuration includes Secure Shell (SSH) port replication
that converts the port number, specify the port number that is exposed by SSH.
replication_user_name

The ID of a user with REPLICATION CLIENT and REPLICATION SLAVE permissions on the MySQL
instance running external to Amazon RDS. We recommend that you provide an account that is used
solely for replication with the external instance.
replication_user_password

The password of the user ID specified in replication_user_name.


ssl_encryption

A value that specifies whether Secure Socket Layer (SSL) encryption is used on the replication
connection. 1 specifies to use SSL encryption, 0 specifies to not use encryption. The default is 0.
Note
The MASTER_SSL_VERIFY_SERVER_CERT option isn't supported. This option is set to 0,
which means that the connection is encrypted, but the certificates aren't verified.
delay

The minimum number of seconds to delay replication from the source database instance.

The limit for this parameter is one day (86,400 seconds).

Usage notes
The master user must run the mysql.rds_set_external_master_with_auto_position procedure.
This procedure must be run on the MySQL DB instance to be configured as the read replica of a MySQL
instance running external to Amazon RDS.

This procedure is supported for all RDS for MySQL 5.7 versions, and RDS for MySQL 8.0.26 and higher
8.0 versions.

Before you run mysql.rds_set_external_master_with_auto_position, you must configure
the instance of MySQL running external to Amazon RDS to be a source database instance. To
connect to the MySQL instance running external to Amazon RDS, you must specify values for
replication_user_name and replication_user_password. These values must indicate a
replication user that has REPLICATION CLIENT and REPLICATION SLAVE permissions on the external
instance of MySQL.

To configure an external instance of MySQL as a source database instance

1. Using the MySQL client of your choice, connect to the external instance of MySQL and create a user
account to be used for replication. The following is an example.

CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED BY 'SomePassW0rd';

2. On the external instance of MySQL, grant REPLICATION CLIENT and REPLICATION SLAVE
privileges to your replication user. The following example grants REPLICATION CLIENT and
REPLICATION SLAVE privileges on all databases for the 'repl_user' user for your domain.

GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com' IDENTIFIED BY 'SomePassW0rd';

For more information, see Configuring binary log file position replication with an external source
instance (p. 1724).
Note
We recommend that you use read replicas to manage replication between two Amazon RDS
DB instances when possible. When you do so, we recommend that you use only this and
other replication-related stored procedures. These practices enable more complex replication
topologies between Amazon RDS DB instances. We offer these stored procedures primarily to
enable replication with MySQL instances running external to Amazon RDS. For information
about managing replication between Amazon RDS DB instances, see Working with DB instance
read replicas (p. 438).

After calling mysql.rds_set_external_master_with_auto_position to configure an Amazon RDS DB
instance as a read replica, you can call mysql.rds_start_replication (p. 1780) on the read replica
to start the replication process. You can call mysql.rds_reset_external_master (p. 1769) to remove the
read replica configuration.

When you call mysql.rds_set_external_master_with_auto_position, Amazon RDS
records the time, the user, and an action of set master in the mysql.rds_history and
mysql.rds_replication_status tables.

For disaster recovery, you can use this procedure with the mysql.rds_start_replication_until (p. 1780)
or mysql.rds_start_replication_until_gtid (p. 1781) stored procedure. To roll forward
changes to a delayed read replica to the time just before a disaster, you can run the
mysql.rds_set_external_master_with_auto_position procedure. After the
mysql.rds_start_replication_until_gtid procedure stops replication, you can promote the read
replica to be the new primary DB instance by using the instructions in Promoting a read replica to be a
standalone DB instance (p. 447).

To use the mysql.rds_start_replication_until_gtid procedure, GTID-based replication
must be enabled. To skip a specific GTID-based transaction that is known to cause disaster, you can
use the mysql.rds_skip_transaction_with_gtid (p. 1778) stored procedure. For more information
about working with GTID-based replication, see Using GTID-based replication for Amazon RDS for
MySQL (p. 1719).

Examples
When run on a MySQL DB instance, the following example configures the DB instance to be a read replica
of an instance of MySQL running external to Amazon RDS. It sets the minimum replication delay to one
hour (3,600 seconds) on the MySQL DB instance. A change from the MySQL source database instance
running external to Amazon RDS isn't applied on the MySQL DB instance read replica for at least one
hour.

call mysql.rds_set_external_master_with_auto_position(
'Externaldb.some.com',
3306,
'repl_user',
'SomePassW0rd',
0,
3600);

mysql.rds_set_external_master_with_delay
Configures an RDS for MySQL DB instance to be a read replica of an instance of MySQL running external
to Amazon RDS and configures delayed replication.
Important
To run this procedure, autocommit must be enabled. To enable it, set the autocommit
parameter to 1. For information about modifying parameters, see Modifying parameters in a DB
parameter group (p. 352).

Syntax

CALL mysql.rds_set_external_master_with_delay (
host_name
, host_port
, replication_user_name
, replication_user_password
, mysql_binary_log_file_name
, mysql_binary_log_file_location
, ssl_encryption


, delay
);

Parameters
host_name

The host name or IP address of the MySQL instance running external to Amazon RDS that will
become the source database instance.
host_port

The port used by the MySQL instance running external to Amazon RDS to be configured as the
source database instance. If your network configuration includes SSH port replication that converts
the port number, specify the port number that is exposed by SSH.
replication_user_name

The ID of a user with REPLICATION CLIENT and REPLICATION SLAVE permissions on the MySQL
instance running external to Amazon RDS. We recommend that you provide an account that is used
solely for replication with the external instance.
replication_user_password

The password of the user ID specified in replication_user_name.


mysql_binary_log_file_name

The name of the binary log on the source database instance that contains the replication information.
mysql_binary_log_file_location

The location in the mysql_binary_log_file_name binary log at which replication will start
reading the replication information.

You can determine the binlog file name and location by running SHOW MASTER STATUS on the
source database instance.
ssl_encryption

A value that specifies whether Secure Socket Layer (SSL) encryption is used on the replication
connection. 1 specifies to use SSL encryption, 0 specifies to not use encryption. The default is 0.
Note
The MASTER_SSL_VERIFY_SERVER_CERT option isn't supported. This option is set to 0,
which means that the connection is encrypted, but the certificates aren't verified.
delay

The minimum number of seconds to delay replication from the source database instance.

The limit for this parameter is one day (86400 seconds).

Usage notes
The master user must run the mysql.rds_set_external_master_with_delay procedure. This
procedure must be run on the MySQL DB instance to be configured as the read replica of a MySQL
instance running external to Amazon RDS.

Before you run mysql.rds_set_external_master_with_delay, you must configure the instance
of MySQL running external to Amazon RDS to be a source database instance. To connect to the MySQL
instance running external to Amazon RDS, you must specify values for replication_user_name and
replication_user_password. These values must indicate a replication user that has REPLICATION
CLIENT and REPLICATION SLAVE permissions on the external instance of MySQL.

To configure an external instance of MySQL as a source database instance

1. Using the MySQL client of your choice, connect to the external instance of MySQL and create a user
account to be used for replication. The following is an example.

CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED BY 'SomePassW0rd';

2. On the external instance of MySQL, grant REPLICATION CLIENT and REPLICATION SLAVE
privileges to your replication user. The following example grants REPLICATION CLIENT and
REPLICATION SLAVE privileges on all databases for the 'repl_user' user for your domain.

GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com' IDENTIFIED BY 'SomePassW0rd';

For more information, see Configuring binary log file position replication with an external source
instance (p. 1724).
Note
We recommend that you use read replicas to manage replication between two Amazon RDS
DB instances when possible. When you do so, we recommend that you use only this and
other replication-related stored procedures. These practices enable more complex replication
topologies between Amazon RDS DB instances. We offer these stored procedures primarily to
enable replication with MySQL instances running external to Amazon RDS. For information
about managing replication between Amazon RDS DB instances, see Working with DB instance
read replicas (p. 438).

After calling mysql.rds_set_external_master_with_delay to configure an Amazon RDS DB
instance as a read replica, you can call mysql.rds_start_replication (p. 1780) on the read replica to start
the replication process. You can call mysql.rds_reset_external_master (p. 1769) to remove the read
replica configuration.

When you call mysql.rds_set_external_master_with_delay, Amazon RDS records
the time, the user, and an action of set master in the mysql.rds_history and
mysql.rds_replication_status tables.

For disaster recovery, you can use this procedure with the mysql.rds_start_replication_until (p. 1780)
or mysql.rds_start_replication_until_gtid (p. 1781) stored procedure. To roll forward
changes to a delayed read replica to the time just before a disaster, you can run
the mysql.rds_set_external_master_with_delay procedure. After the
mysql.rds_start_replication_until procedure stops replication, you can promote the read
replica to be the new primary DB instance by using the instructions in Promoting a read replica to be a
standalone DB instance (p. 447).

To use the mysql.rds_start_replication_until_gtid procedure, GTID-based replication
must be enabled. To skip a specific GTID-based transaction that is known to cause disaster, you can
use the mysql.rds_skip_transaction_with_gtid (p. 1778) stored procedure. For more information
about working with GTID-based replication, see Using GTID-based replication for Amazon RDS for
MySQL (p. 1719).

The mysql.rds_set_external_master_with_delay procedure is available in these versions of RDS
for MySQL:

• MySQL 8.0.26 and higher 8.0 versions
• All 5.7 versions


Examples
When run on a MySQL DB instance, the following example configures the DB instance to be a read replica
of an instance of MySQL running external to Amazon RDS. It sets the minimum replication delay to one
hour (3,600 seconds) on the MySQL DB instance. A change from the MySQL source database instance
running external to Amazon RDS isn't applied on the MySQL DB instance read replica for at least one
hour.

call mysql.rds_set_external_master_with_delay(
'Externaldb.some.com',
3306,
'repl_user',
'SomePassW0rd',
'mysql-bin-changelog.000777',
120,
0,
3600);

mysql.rds_set_master_auto_position
Sets the replication mode to be based on either binary log file positions or on global transaction
identifiers (GTIDs).

Syntax

CALL mysql.rds_set_master_auto_position (
auto_position_mode
);

Parameters
auto_position_mode

A value that indicates whether to use log file position replication or GTID-based replication:
• 0 – Use the replication method based on binary log file position. The default is 0.
• 1 – Use the GTID-based replication method.

Usage notes
The master user must run the mysql.rds_set_master_auto_position procedure.

This procedure is supported for all RDS for MySQL 5.7 versions, and RDS for MySQL 8.0.26 and higher
8.0 versions.
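
For example, the following call selects the GTID-based replication method; passing 0 instead selects the method based on binary log file position.

CALL mysql.rds_set_master_auto_position(1);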

mysql.rds_set_source_delay
Sets the minimum number of seconds to delay replication from the source database instance to the current
read replica. Use this procedure when you are connected to a read replica to delay replication from its
source database instance.

Syntax

CALL mysql.rds_set_source_delay(
delay


);

Parameters
delay

The minimum number of seconds to delay replication from the source database instance.

The limit for this parameter is one day (86400 seconds).

Usage notes
The master user must run the mysql.rds_set_source_delay procedure.

For disaster recovery, you can use this procedure with the mysql.rds_start_replication_until (p. 1780)
stored procedure or the mysql.rds_start_replication_until_gtid (p. 1781) stored procedure. To
roll forward changes to a delayed read replica to the time just before a disaster, you can run the
mysql.rds_set_source_delay procedure. After the mysql.rds_start_replication_until or
mysql.rds_start_replication_until_gtid procedure stops replication, you can promote the read
replica to be the new primary DB instance by using the instructions in Promoting a read replica to be a
standalone DB instance (p. 447).

To use the mysql.rds_start_replication_until_gtid procedure, GTID-based replication
must be enabled. To skip a specific GTID-based transaction that is known to cause disaster, you can use
the mysql.rds_skip_transaction_with_gtid (p. 1778) stored procedure. For more information on GTID-
based replication, see Using GTID-based replication for Amazon RDS for MySQL (p. 1719).

The mysql.rds_set_source_delay procedure is available in these versions of RDS for MySQL:

• MySQL 8.0.26 and higher 8.0 versions
• All 5.7 versions

Examples
To delay replication from the source database instance to the current read replica for at least one hour
(3,600 seconds), you can call mysql.rds_set_source_delay with the following parameter:

CALL mysql.rds_set_source_delay(3600);

mysql.rds_skip_transaction_with_gtid
Skips replication of a transaction with the specified global transaction identifier (GTID) on a MySQL DB
instance.

You can use this procedure for disaster recovery when a specific GTID transaction is known to cause
a problem. Use this stored procedure to skip the problematic transaction. Examples of problematic
transactions include transactions that disable replication, delete important data, or cause the DB
instance to become unavailable.

Syntax

CALL mysql.rds_skip_transaction_with_gtid (
gtid_to_skip
);


Parameters
gtid_to_skip

The GTID of the replication transaction to skip.

Usage notes
The master user must run the mysql.rds_skip_transaction_with_gtid procedure.

This procedure is supported for all RDS for MySQL 5.7 versions, and RDS for MySQL 8.0.26 and higher
8.0 versions.

Examples
The following example skips replication of the transaction with the GTID 3E11FA47-71CA-11E1-9E33-
C80AA9429562:23.

call mysql.rds_skip_transaction_with_gtid('3E11FA47-71CA-11E1-9E33-C80AA9429562:23');

mysql.rds_skip_repl_error
Skips and deletes a replication error on a MySQL DB read replica.

Syntax

CALL mysql.rds_skip_repl_error;

Usage notes
The master user must run the mysql.rds_skip_repl_error procedure on a read replica. For more
information about this procedure, see Calling the mysql.rds_skip_repl_error procedure (p. 1745).

To determine if there are errors, run the MySQL SHOW REPLICA STATUS\G command. If a replication
error isn't critical, you can run mysql.rds_skip_repl_error to skip the error. If there are multiple
errors, mysql.rds_skip_repl_error deletes the first error, then warns that others are present.
You can then use SHOW REPLICA STATUS\G to determine the correct course of action for the next
error. For information about the values returned, see SHOW REPLICA STATUS statement in the MySQL
documentation.
Note
Previous versions of MySQL used SHOW SLAVE STATUS instead of SHOW REPLICA STATUS. If
you are using a MySQL version before 8.0.23, then use SHOW SLAVE STATUS.

For more information about addressing replication errors with Amazon RDS, see Troubleshooting a
MySQL read replica problem (p. 1718).
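
For example, a typical sequence on the read replica is to check the replication status, skip a non-critical error, and then check the status again, as in the following sketch.

SHOW REPLICA STATUS\G
CALL mysql.rds_skip_repl_error;
SHOW REPLICA STATUS\G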

Replication stopped error


When you call the mysql.rds_skip_repl_error procedure, you might receive an error message
stating that the replica is down or disabled.

This error message appears if you run the procedure on the primary instance instead of the read replica.
You must run this procedure on the read replica for the procedure to work.

This error message might also appear if you run the procedure on the read replica, but replication can't
be restarted successfully.


If you need to skip a large number of errors, the replication lag can increase beyond the default retention
period for binary log (binlog) files. In this case, you might encounter a fatal error due to binlog files
being purged before they have been replayed on the read replica. This purge causes replication to stop,
and you can no longer call the mysql.rds_skip_repl_error command to skip replication errors.

You can mitigate this issue by increasing the number of hours that binlog files are retained on your
source database instance. After you have increased the binlog retention time, you can restart replication
and call the mysql.rds_skip_repl_error command as needed.

To set the binlog retention time, use the mysql.rds_set_configuration (p. 1758) procedure and specify
a configuration parameter of 'binlog retention hours' along with the number of hours to retain
binlog files on the DB instance. The following example sets the retention period for binlog files to 48
hours.

CALL mysql.rds_set_configuration('binlog retention hours', 48);

mysql.rds_start_replication
Initiates replication from an RDS for MySQL DB instance.
Note
You can use the mysql.rds_start_replication_until (p. 1780) or
mysql.rds_start_replication_until_gtid (p. 1781) stored procedure to initiate replication from an
RDS for MySQL DB instance and stop replication at the specified binary log file location.

Syntax

CALL mysql.rds_start_replication;

Usage notes
The master user must run the mysql.rds_start_replication procedure.

To import data from an instance of MySQL external to Amazon RDS, call
mysql.rds_start_replication on the read replica to start the replication process after you call
mysql.rds_set_external_master to build the replication configuration. For more information, see
Restoring a backup into a MySQL DB instance (p. 1680).

To export data to an instance of MySQL external to Amazon RDS, call
mysql.rds_start_replication and mysql.rds_stop_replication on the read replica to control
some replication actions, such as purging binary logs. For more information, see Exporting data from a
MySQL DB instance by using replication (p. 1728).

You can also call mysql.rds_start_replication on the read replica to restart any replication
process that you previously stopped by calling mysql.rds_stop_replication. For more information,
see Working with DB instance read replicas (p. 438).
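
For example, when importing from an external MySQL instance, you typically configure the external source and then start replication on the replica, as in the following sketch. The connection values are placeholders taken from the earlier examples.

CALL mysql.rds_set_external_master(
'Externaldb.some.com',
3306,
'repl_user',
'password',
'mysql-bin-changelog.000777',
120,
0);
CALL mysql.rds_start_replication;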

mysql.rds_start_replication_until
Initiates replication from an RDS for MySQL DB instance and stops replication at the specified binary log
file location.

Syntax

CALL mysql.rds_start_replication_until (


replication_log_file
, replication_stop_point
);

Parameters
replication_log_file

The name of the binary log on the source database instance that contains the replication information.
replication_stop_point

The location in the replication_log_file binary log at which replication will stop.

Usage notes
The master user must run the mysql.rds_start_replication_until procedure.

The mysql.rds_start_replication_until procedure is available in these versions of RDS for
MySQL:

• MySQL 8.0.26 and higher 8.0 versions
• All 5.7 versions

You can use this procedure with delayed replication for disaster recovery. If you have delayed replication
configured, you can use this procedure to roll forward changes to a delayed read replica to the time
just before a disaster. After this procedure stops replication, you can promote the read replica to be the
new primary DB instance by using the instructions in Promoting a read replica to be a standalone DB
instance (p. 447).

You can configure delayed replication using the following stored procedures:

• mysql.rds_set_configuration (p. 1758)


• mysql.rds_set_external_master_with_delay (p. 1774)
• mysql.rds_set_source_delay (p. 1777)

The file name specified for the replication_log_file parameter must match the source database
instance binlog file name.

When the replication_stop_point parameter specifies a stop location that is in the past, replication
is stopped immediately.

Examples
The following example initiates replication and replicates changes until it reaches location 120 in the
mysql-bin-changelog.000777 binary log file.

call mysql.rds_start_replication_until(
'mysql-bin-changelog.000777',
120);

mysql.rds_start_replication_until_gtid
Initiates replication from an RDS for MySQL DB instance and stops replication immediately after the
specified global transaction identifier (GTID).


Syntax

CALL mysql.rds_start_replication_until_gtid(gtid);

Parameters
gtid

The GTID after which replication is to stop.

Usage notes
The master user must run the mysql.rds_start_replication_until_gtid procedure.

This procedure is supported for all RDS for MySQL 5.7 versions, and RDS for MySQL 8.0.26 and higher
8.0 versions.

You can use this procedure with delayed replication for disaster recovery. If you have delayed replication
configured, you can use this procedure to roll forward changes to a delayed read replica to the time
just before a disaster. After this procedure stops replication, you can promote the read replica to be the
new primary DB instance by using the instructions in Promoting a read replica to be a standalone DB
instance (p. 447).

You can configure delayed replication using the following stored procedures:

• mysql.rds_set_configuration (p. 1758)


• mysql.rds_set_external_master_with_delay (p. 1774)
• mysql.rds_set_source_delay (p. 1777)

When the gtid parameter specifies a transaction that has already been run by the replica, replication is
stopped immediately.

Examples
The following example initiates replication and replicates changes until it reaches GTID
3E11FA47-71CA-11E1-9E33-C80AA9429562:23.

call mysql.rds_start_replication_until_gtid('3E11FA47-71CA-11E1-9E33-C80AA9429562:23');

mysql.rds_stop_replication
Stops replication from a MySQL DB instance.

Syntax

CALL mysql.rds_stop_replication;

Usage notes
The master user must run the mysql.rds_stop_replication procedure.

If you are configuring replication to import data from an instance of MySQL running external to Amazon
RDS, you call mysql.rds_stop_replication on the read replica to stop the replication process
after the import has completed. For more information, see Restoring a backup into a MySQL DB
instance (p. 1680).

If you are configuring replication to export data to an instance of MySQL external to Amazon RDS, you
call mysql.rds_start_replication and mysql.rds_stop_replication on the read replica to
control some replication actions, such as purging binary logs. For more information, see Exporting data
from a MySQL DB instance by using replication (p. 1728).

You can also use mysql.rds_stop_replication to stop replication between two Amazon RDS DB
instances. You typically stop replication to perform a long running operation on the read replica, such
as creating a large index on the read replica. You can restart any replication process that you stopped by
calling mysql.rds_start_replication (p. 1780) on the read replica. For more information, see Working with
DB instance read replicas (p. 438).
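
For example, to pause replication while you create a large index on the read replica, you might run the following sketch. The CREATE INDEX statement is a placeholder for your long-running operation.

CALL mysql.rds_stop_replication;
-- Run the long-running operation on the read replica, for example:
-- CREATE INDEX idx_orders_customer ON orders (customer_id);
CALL mysql.rds_start_replication;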


Warming the InnoDB cache


The following stored procedures save, load, or cancel loading the InnoDB buffer pool on RDS for MySQL
DB instances. For more information, see InnoDB cache warming for MySQL on Amazon RDS (p. 1625).

Topics
• mysql.rds_innodb_buffer_pool_dump_now (p. 1784)
• mysql.rds_innodb_buffer_pool_load_abort (p. 1784)
• mysql.rds_innodb_buffer_pool_load_now (p. 1784)

mysql.rds_innodb_buffer_pool_dump_now
Dumps the current state of the buffer pool to disk.

Syntax

CALL mysql.rds_innodb_buffer_pool_dump_now();

Usage notes
The master user must run the mysql.rds_innodb_buffer_pool_dump_now procedure.

mysql.rds_innodb_buffer_pool_load_abort
Cancels a load of the saved buffer pool state while in progress.

Syntax

CALL mysql.rds_innodb_buffer_pool_load_abort();

Usage notes
The master user must run the mysql.rds_innodb_buffer_pool_load_abort procedure.

mysql.rds_innodb_buffer_pool_load_now
Loads the saved state of the buffer pool from disk.

Syntax

CALL mysql.rds_innodb_buffer_pool_load_now();

Usage notes
The master user must run the mysql.rds_innodb_buffer_pool_load_now procedure.
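
For example, you might save the buffer pool state before a planned change and warm the cache from it afterward, as in the following sketch.

-- Before the change, save the current buffer pool state:
CALL mysql.rds_innodb_buffer_pool_dump_now();

-- After the change, load the saved state:
CALL mysql.rds_innodb_buffer_pool_load_now();

-- If the load takes too long, you can cancel it:
CALL mysql.rds_innodb_buffer_pool_load_abort();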


Amazon RDS for Oracle


Amazon RDS supports DB instances that run the following versions and editions of Oracle Database:

• Oracle Database 21c (21.0.0.0)


• Oracle Database 19c (19.0.0.0)

Note
Oracle Database 11g, Oracle Database 12c, and Oracle Database 18c are legacy versions that are
no longer supported in Amazon RDS.

Before creating a DB instance, complete the steps in the Setting up for Amazon RDS (p. 174) section of
this guide. When you create a DB instance using your master account, the account gets DBA privileges,
with some limitations. Use this account for administrative tasks such as creating additional database
accounts. You can't use SYS, SYSTEM, or other Oracle-supplied administrative accounts.

You can create the following:

• DB instances
• DB snapshots
• Point-in-time restores
• Automated backups
• Manual backups

You can use DB instances running Oracle inside a VPC. You can also add features to your Oracle DB
instance by enabling various options. Amazon RDS supports Multi-AZ deployments for Oracle as a high-
availability, failover solution.
Important
To deliver a managed service experience, Amazon RDS doesn't provide shell access to DB
instances. It also restricts access to certain system procedures and tables that need advanced
privileges. You can access your database using standard SQL clients such as Oracle SQL*Plus.
However, you can't access the host directly by using Telnet or Secure Shell (SSH).

Topics
• Overview of Oracle on Amazon RDS (p. 1786)
• Connecting to your RDS for Oracle DB instance (p. 1806)
• Securing Oracle DB instance connections (p. 1816)
• Working with CDBs in RDS for Oracle (p. 1840)
• Administering your Oracle DB instance (p. 1847)
• Configuring advanced RDS for Oracle features (p. 1936)
• Importing data into Oracle on Amazon RDS (p. 1947)
• Working with read replicas for Amazon RDS for Oracle (p. 1973)
• Adding options to Oracle DB instances (p. 1990)
• Upgrading the RDS for Oracle DB engine (p. 2103)
• Using third-party software with your RDS for Oracle DB instance (p. 2114)


• Oracle Database engine release notes (p. 2146)

Overview of Oracle on Amazon RDS


You can read the following sections to get an overview of RDS for Oracle.

Topics
• RDS for Oracle features (p. 1786)
• RDS for Oracle releases (p. 1789)
• RDS for Oracle licensing options (p. 1793)
• RDS for Oracle users and privileges (p. 1796)
• RDS for Oracle instance classes (p. 1796)
• RDS for Oracle database architecture (p. 1800)
• RDS for Oracle parameters (p. 1801)
• RDS for Oracle character sets (p. 1801)
• RDS for Oracle limitations (p. 1804)

RDS for Oracle features


Amazon RDS for Oracle supports most of the features and capabilities of Oracle Database. Some
features might have limited support or restricted privileges. Some features are only available in
Enterprise Edition, and some require additional licenses. For more information about Oracle Database
features for specific Oracle Database versions, see the Oracle Database Licensing Information User
Manual for the version you're using.

You can filter new Amazon RDS features on the What's New with Database? page. For Products, choose
Amazon RDS. Then search using keywords such as Oracle 2022.
Note
The following lists are not exhaustive.

Topics
• New features in RDS for Oracle (p. 1786)
• Supported features in RDS for Oracle (p. 1786)
• Unsupported features in RDS for Oracle (p. 1788)

New features in RDS for Oracle


To see new RDS for Oracle features, use the following techniques:

• Search Document history (p. 2744) for the keyword Oracle.


• You can filter new Amazon RDS features on the What's New with Database? page. For Products,
choose Amazon RDS. Then search for Oracle YYYY, where YYYY is a year such as 2022.

Supported features in RDS for Oracle


Amazon RDS for Oracle supports the following Oracle Database features:

• Advanced Compression


• Application Express (APEX)

For more information, see Oracle Application Express (APEX) (p. 2009).
• Automatic Memory Management
• Automatic Undo Management
• Automatic Workload Repository (AWR)

For more information, see Generating performance reports with Automatic Workload Repository
(AWR) (p. 1875).
• Active Data Guard with Maximum Performance in the same AWS Region or across AWS Regions

For more information, see Working with read replicas for Amazon RDS for Oracle (p. 1973).
• Blockchain tables (Oracle Database 21c and higher)

For more information, see Managing Blockchain Tables in the Oracle Database documentation.
• Continuous Query Notification (version 12.1.0.2.v7 and higher)

For more information, see Using Continuous Query Notification (CQN) in the Oracle documentation.
• Data Redaction
• Database Change Notification

For more information, see Database Change Notification in the Oracle documentation.
Note
This feature changes to Continuous Query Notification in Oracle Database 12c Release 1
(12.1) and higher.
• Database In-Memory (Oracle Database 12c and higher)
• Distributed Queries and Transactions
• Edition-Based Redefinition

For more information, see Setting the default edition for a DB instance (p. 1879).
• EM Express (12c and higher)

For more information, see Oracle Enterprise Manager (p. 2034).


• Fine-Grained Auditing
• Flashback Table, Flashback Query, Flashback Transaction Query
• Gradual password rollover for applications (Oracle Database 21c and higher)

For more information, see Managing Gradual Database Password Rollover for Applications in the
Oracle Database documentation.
• HugePages

For more information, see Turning on HugePages for an RDS for Oracle instance (p. 1942).
• Import/export (legacy and Data Pump) and SQL*Loader

For more information, see Importing data into Oracle on Amazon RDS (p. 1947).
• Java Virtual Machine (JVM)

For more information, see Oracle Java virtual machine (p. 2031).
• JavaScript (Oracle Database 21c and higher)

For more information, see DBMS_MLE in the Oracle Database documentation.


• Label Security (Oracle Database 12c and higher)

For more information, see Oracle Label Security (p. 2049).


• Locator

For more information, see Oracle Locator (p. 2052).


• Materialized Views
• Multimedia

For more information, see Oracle Multimedia (p. 2055).


• Multitenant (single-tenant configuration only)

The multitenant architecture is supported for all Oracle Database 19c and higher releases. For more
information, see Overview of RDS for Oracle CDBs (p. 1840) and Limitations of a single-tenant
CDB (p. 1805).
• Network encryption

For more information, see Oracle native network encryption (p. 2057) and Oracle Secure Sockets
Layer (p. 2068).
• Partitioning
• Application-level sharding (but not the Oracle Sharding feature)
• Spatial and Graph

For more information, see Oracle Spatial (p. 2075).


• Star Query Optimization
• Streams and Advanced Queuing
• Summary Management – Materialized View Query Rewrite
• Text (File and URL data store types are not supported)
• Total Recall
• Transparent Data Encryption (TDE)

For more information, see Oracle Transparent Data Encryption (p. 2097).
• Unified Auditing, Mixed Mode

For more information, see Mixed mode auditing in the Oracle documentation.
• XML DB (without the XML DB Protocol Server)

For more information, see Oracle XML DB (p. 2102).


• Virtual Private Database

Unsupported features in RDS for Oracle


Amazon RDS for Oracle doesn't support the following Oracle Database features:

• Automatic Storage Management (ASM)


• Database Vault
• Flashback Database
Note
For alternative solutions, see the AWS Database Blog entry Alternatives to the Oracle
flashback database feature in Amazon RDS for Oracle.
• FTP and SFTP
• Hybrid partitioned tables
• Messaging Gateway


• Oracle Enterprise Manager Cloud Control Management Repository


• Real Application Clusters (Oracle RAC)
• Real Application Security (RAS)
• Real Application Testing
• Unified Auditing, Pure Mode
• Workspace Manager (WMSYS) schema

Note
The preceding list is not exhaustive.
Warning
In general, Amazon RDS doesn't prevent you from creating schemas for unsupported features.
However, if you create schemas for Oracle features and components that require SYSDBA
privileges, you can damage the data dictionary and affect the availability of your DB instance.
Use only supported features and schemas that are available in Adding options to Oracle DB
instances (p. 1990).

RDS for Oracle releases


Amazon RDS for Oracle supports multiple Oracle Database releases.
Note
For information about upgrading your releases, see Upgrading the RDS for Oracle DB
engine (p. 2103).

Topics
• Oracle Database 21c with Amazon RDS (p. 1789)
• Oracle Database 19c with Amazon RDS (p. 1791)
• Oracle Database 12c with Amazon RDS (p. 1792)

Oracle Database 21c with Amazon RDS


Amazon RDS supports Oracle Database 21c, which includes Oracle Enterprise Edition and Oracle
Standard Edition Two. Oracle Database 21c (21.0.0.0) includes many new features and updates from the
previous version. A key change is that Oracle Database 21c supports only the multitenant architecture:
you can no longer create a database as a traditional non-CDB. To learn more about the differences
between CDBs and non-CDBs, see Limitations of a single-tenant CDB (p. 1805).

In this section, you can find the features and changes important to using Oracle Database 21c (21.0.0.0)
on Amazon RDS. For a complete list of the changes, see the Oracle database 21c documentation. For
a complete list of features supported by each Oracle Database 21c edition, see Permitted features,
options, and management packs by Oracle database offering in the Oracle documentation.

Amazon RDS parameter changes for Oracle Database 21c (21.0.0.0)


Oracle Database 21c (21.0.0.0) includes several new parameters and parameters with new ranges and
new default values.

Topics
• New parameters (p. 1790)
• Changes for the compatible parameter (p. 1791)
• Removed parameters (p. 1791)


New parameters

The following list shows the new Amazon RDS parameters for Oracle Database 21c (21.0.0.0). Each entry
includes the range of values, the default value, and whether the parameter is modifiable.

• blockchain_table_max_no_drop – Range of values: NONE | 0. Default value: NONE. Modifiable: Y. Lets you control the maximum amount of idle time that can be specified when creating a blockchain table.
• dbnest_enable – Range of values: NONE | CDB_RESOURCE_PDB_ALL. Default value: NONE. Modifiable: N. Allows you to enable or disable dbNest. DbNest provides operating system resource isolation and management, file system isolation, and secure computing for PDBs.
• dbnest_pdb_fs_conf – Range of values: NONE | pathname. Default value: NONE. Modifiable: N. Specifies the dbNest file system configuration file for a PDB.
• diagnostics_control – Range of values: ERROR | WARNING | IGNORE. Default value: IGNORE. Modifiable: Y. Allows you to control and monitor the users who perform potentially unsafe database diagnostic operations.
• drcp_dedicated_opt – Range of values: YES | NO. Default value: YES. Modifiable: Y. Enables or disables the use of dedicated optimization with Database Resident Connection Pooling (DRCP).
• enable_per_pdb_drcp – Range of values: true | false. Default value: true. Modifiable: N. Controls whether Database Resident Connection Pooling (DRCP) configures one connection pool for the entire CDB or one isolated connection pool for each PDB.
• inmemory_deep_vectorization – Range of values: true | false. Default value: true. Modifiable: Y. Enables or disables the deep vectorization framework.
• mandatory_user_profile – Range of values: profile_name. Default value: N/A. Modifiable: N. Specifies the mandatory user profile for a CDB or PDB.
• optimizer_capture_sql_quarantine – Range of values: true | false. Default value: false. Modifiable: Y. Enables or disables the automatic creation of SQL Quarantine configurations.
• optimizer_use_sql_quarantine – Range of values: true | false. Default value: false. Modifiable: Y. Enables or disables the use of SQL Quarantine configurations.
• result_cache_execution_threshold – Range of values: 0 to 68719476736. Default value: 2. Modifiable: Y. Specifies the maximum number of times a PL/SQL function can be executed before its result is stored in the result cache.
• result_cache_max_temp_result – Range of values: 0 to 100. Default value: 5. Modifiable: Y. Specifies the percentage of RESULT_CACHE_MAX_TEMP_SIZE that any single cached query result can consume.
• result_cache_max_temp_size – Range of values: 0 to 2199023255552. Default value: RESULT_CACHE_SIZE * 10. Modifiable: Y. Specifies the maximum amount of temporary tablespace (in bytes) that can be consumed by the result cache.
• sga_min_size – Range of values: 0 to 2199023255552 (maximum value is 50% of sga_target). Default value: 0. Modifiable: Y. Indicates a possible minimum value for SGA usage of a pluggable database (PDB).
• tablespace_encryption_default_algorithm – Range of values: GOST256 | SEED128 | ARIA256 | ARIA192 | ARIA128 | 3DES168 | AES256 | AES192 | AES128. Default value: AES128. Modifiable: Y. Specifies the default algorithm the database uses when encrypting a tablespace.

Changes for the compatible parameter


The compatible parameter has a new maximum value for Oracle Database 21c (21.0.0.0) on Amazon RDS:

• compatible – maximum value: 21.0.0

Removed parameters
The following parameters were removed in Oracle Database 21c (21.0.0.0):

• remote_os_authent
• sec_case_sensitive_logon
• unified_audit_sga_queue_size

Oracle Database 19c with Amazon RDS


Amazon RDS supports Oracle Database 19c, which includes Oracle Enterprise Edition and Oracle
Standard Edition Two.

Oracle Database 19c (19.0.0.0) includes many new features and updates from the previous version. In
this section, you can find the features and changes important to using Oracle Database 19c (19.0.0.0)
on Amazon RDS. For a complete list of the changes, see the Oracle database 19c documentation. For
a complete list of features supported by each Oracle Database 19c edition, see Permitted features,
options, and management packs by Oracle database offering in the Oracle documentation.

Amazon RDS parameter changes for Oracle Database 19c (19.0.0.0)


Oracle Database 19c (19.0.0.0) includes several new parameters and parameters with new ranges and
new default values.


Topics
• New parameters (p. 1792)
• Changes to the compatible parameter (p. 1792)
• Removed parameters (p. 1792)

New parameters

The following are the new Amazon RDS parameters for Oracle Database 19c (19.0.0.0).

• lob_signature_enable – Values: TRUE, FALSE (default). Modifiable: Y. Enables or disables the LOB locator signature feature.
• max_datapump_parallel_per_job – Values: 1 to 1024, or AUTO. Modifiable: Y. Specifies the maximum number of parallel processes allowed for each Oracle Data Pump job.

Changes to the compatible parameter

The compatible parameter has a new maximum value for Oracle Database 19c (19.0.0.0) on Amazon RDS:

• compatible – maximum value: 19.0.0

Removed parameters

The following parameters were removed in Oracle Database 19c (19.0.0.0):

• exafusion_enabled
• max_connections
• o7_dictionary_access

Oracle Database 12c with Amazon RDS


Amazon RDS has deprecated support for Oracle Database 12c on both Oracle Enterprise Edition and
Oracle Standard Edition 2.

Topics
• Oracle Database 12c Release 2 (12.2.0.1) with Amazon RDS (p. 1792)
• Oracle Database 12c Release 1 (12.1.0.2) with Amazon RDS (p. 1793)

Oracle Database 12c Release 2 (12.2.0.1) with Amazon RDS


On March 31, 2022, Oracle Corporation deprecated support for Oracle Database 12c Release 2 (12.2.0.1)
for BYOL and LI. On this date, the release moved from Oracle Extended Support to Oracle Sustaining
Support, indicating the end of support for this release. For more information, see the end of support
timeline on AWS re:Post.

Date Action

April 1, 2022 Amazon RDS began automatic upgrades of your Oracle Database 12c Release 2
(12.2.0.1) instances to Oracle Database 19c.

April 1, 2022 Amazon RDS began automatic upgrades to Oracle Database 19c for any Oracle
Database 12c Release 2 (12.2.0.1) DB instances restored from snapshots.
The automatic upgrade occurs during maintenance windows. If maintenance
windows aren't available when the upgrade needs to occur, Amazon RDS
upgrades the engine immediately.

Oracle Database 12c Release 1 (12.1.0.2) with Amazon RDS


On July 31, 2022, Amazon RDS deprecated support for Oracle Database 12c Release 1 (12.1.0.2) for
BYOL and LI. The release moved from Oracle Extended Support to Oracle Sustaining Support, indicating
that Oracle Support will no longer release critical patch updates for this release. For more information,
see the end of support timeline on AWS re:Post.

Date Action

August 1, 2022 Amazon RDS began automatic upgrades of your Oracle Database 12c Release 1
(12.1.0.2) instances to the latest Release Update (RU) for Oracle Database 19c.
The automatic upgrade occurs during maintenance windows. If maintenance
windows aren't available when the upgrade needs to occur, Amazon RDS
upgrades the engine immediately.

August 1, 2022 Amazon RDS began automatic upgrades to Oracle Database 19c for any Oracle
Database 12c Release 1 (12.1.0.2) DB instances restored from snapshots.

RDS for Oracle licensing options


Amazon RDS for Oracle has two licensing options: License Included (LI) and Bring Your Own License
(BYOL). After you create an Oracle DB instance on Amazon RDS, you can change the licensing model by
modifying the DB instance. For more information, see Modifying an Amazon RDS DB instance (p. 401).

License Included
In the License Included model, you don't need to purchase Oracle Database licenses separately. AWS
holds the license for the Oracle database software. In this model, if you have an AWS Support account
with case support, contact AWS Support for both Amazon RDS and Oracle Database service requests.
The License Included model is only supported on Amazon RDS for Oracle Database Standard Edition Two
(SE2).

Bring Your Own License (BYOL)


In the BYOL model, you can use your existing Oracle Database licenses to deploy databases on Amazon
RDS. Make sure that you have the appropriate Oracle Database license (with Software Update License
and Support) for the DB instance class and Oracle Database edition you wish to run. You must also follow
Oracle's policies for licensing Oracle Database software in the cloud computing environment. For more
information on Oracle's licensing policy for Amazon EC2, see Licensing Oracle software in the cloud
computing environment.


In this model, you continue to use your active Oracle support account, and you contact Oracle directly
for Oracle Database service requests. If you have an AWS Support account with case support, you can
contact AWS Support for Amazon RDS issues. Amazon Web Services and Oracle have a multi-vendor
support process for cases that require assistance from both organizations.

Amazon RDS supports the BYOL model only for Oracle Database Enterprise Edition (EE) and Oracle
Database Standard Edition Two (SE2).

Integrating with AWS License Manager


To make it easier to monitor Oracle license usage in the BYOL model, AWS License Manager integrates
with Amazon RDS for Oracle. License Manager supports tracking of RDS for Oracle engine editions
and licensing packs based on virtual cores (vCPUs). You can also use License Manager with AWS
Organizations to manage all of your organizational accounts centrally.

The following are the product information filters for RDS for Oracle.

Engine Edition filter:
• oracle-ee – Oracle Database Enterprise Edition (EE)
• oracle-se2 – Oracle Database Standard Edition Two (SE2)

License Pack filter:
• data guard – See Working with read replicas for Amazon RDS for Oracle (p. 1973) (Oracle Active Data Guard)
• olap – See Oracle OLAP (p. 2065)
• ols – See Oracle Label Security (p. 2049)
• diagnostic pack sqlt – See Oracle SQLT (p. 2078)
• tuning pack sqlt – See Oracle SQLT (p. 2078)

To track license usage of your Oracle DB instances, you can create a license configuration. In this case,
RDS for Oracle resources that match the product information filter are automatically associated with the
license configuration. Discovery of Oracle DB instances can take up to 24 hours.

Console

To create a license configuration to track the license usage of your Oracle DB instances

1. Go to https://console.aws.amazon.com/license-manager/.
2. Create a license configuration.

For instructions, see Create a license configuration in the AWS License Manager User Guide.

Add a rule for an RDS Product Information Filter in the Product Information panel.

For more information, see ProductInformation in the AWS License Manager API Reference.

AWS CLI

To create a license configuration by using the AWS CLI, call the create-license-configuration command.
Use the --cli-input-json or --cli-input-yaml parameters to pass the parameters to the
command.


Example

The following code creates a license configuration for Oracle Enterprise Edition.

aws license-manager create-license-configuration --cli-input-json file://rds-oracle-ee.json

The following is the sample rds-oracle-ee.json file used in the example.

{
    "Name": "rds-oracle-ee",
    "Description": "RDS Oracle Enterprise Edition",
    "LicenseCountingType": "vCPU",
    "LicenseCountHardLimit": false,
    "ProductInformationList": [
        {
            "ResourceType": "RDS",
            "ProductInformationFilterList": [
                {
                    "ProductInformationFilterName": "Engine Edition",
                    "ProductInformationFilterValue": ["oracle-ee"],
                    "ProductInformationFilterComparator": "EQUALS"
                }
            ]
        }
    ]
}

For more information about product information, see Automated discovery of resource inventory in the
AWS License Manager User Guide.

For more information about the --cli-input parameter, see Generating AWS CLI skeleton and input
parameters from a JSON or YAML input file in the AWS CLI User Guide.

Migrating between Oracle editions


If you have an unused BYOL Oracle license appropriate for the edition and class of DB instance that you
plan to run, you can migrate from Standard Edition 2 (SE2) to Enterprise Edition (EE). You can't migrate
from Enterprise Edition to other editions.

To change the edition and retain your data

1. Create a snapshot of the DB instance.

For more information, see Creating a DB snapshot (p. 613).


2. Restore the snapshot to a new DB instance, and select the Oracle database edition you want to use.

For more information, see Restoring from a DB snapshot (p. 615).


3. (Optional) Delete the old DB instance, unless you want to keep it running and have the appropriate
Oracle Database licenses for it.

For more information, see Deleting a DB instance (p. 489).
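
As an illustration only, the following AWS CLI sketch outlines the same snapshot-and-restore flow. The instance identifiers, snapshot name, and instance class are placeholders, and the license model shown assumes that you hold an appropriate BYOL Enterprise Edition license.

# Step 1: Create a snapshot of the existing SE2 DB instance (placeholder names).
aws rds create-db-snapshot \
    --db-instance-identifier my-se2-instance \
    --db-snapshot-identifier my-se2-snapshot

# Step 2: Restore the snapshot to a new DB instance, selecting the target edition.
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier my-ee-instance \
    --db-snapshot-identifier my-se2-snapshot \
    --engine oracle-ee \
    --license-model bring-your-own-license \
    --db-instance-class db.r5.large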

Licensing Oracle Multi-AZ deployments


Amazon RDS supports Multi-AZ deployments for Oracle as a high-availability, failover solution. We
recommend Multi-AZ for production workloads. For more information, see Configuring and managing a
Multi-AZ deployment (p. 492).


If you use the Bring Your Own License model, you must have a license for both the primary DB instance
and the standby DB instance in a Multi-AZ deployment.

RDS for Oracle users and privileges


When you create an Amazon RDS for Oracle DB instance, the default master user gets most of the
user permissions available on the DB instance. Use the master user account for administrative tasks,
such as creating additional user accounts in your database. Because RDS is a managed service, you aren't
allowed to log in as SYS or SYSTEM, and thus don't have SYSDBA privileges.

Topics
• Limitations for Oracle DBA privileges (p. 1796)
• How to manage privileges on SYS objects (p. 1796)

Limitations for Oracle DBA privileges


In the database, a role is a collection of privileges that you can grant to or revoke from a user. An Oracle
database uses roles to provide security. For more information, see Configuring Privilege and Role
Authorization in the Oracle Database documentation.

The predefined role DBA normally allows all administrative privileges on an Oracle database. When you
create a DB instance, your master user account gets DBA privileges (with some limitations). To deliver a
managed experience, an RDS for Oracle database doesn't provide the following privileges for the DBA
role:

• ALTER DATABASE
• ALTER SYSTEM
• CREATE ANY DIRECTORY
• DROP ANY DIRECTORY
• GRANT ANY PRIVILEGE
• GRANT ANY ROLE

For more RDS for Oracle system privilege and role information, see Master user account
privileges (p. 2682).

How to manage privileges on SYS objects


You can manage privileges on SYS objects by using the rdsadmin.rdsadmin_util
package. For example, if you create the database user myuser, you could use the
rdsadmin.rdsadmin_util.grant_sys_object procedure to grant SELECT privileges on V_$SQLAREA
to myuser. For more information, see the following topics:

• Granting SELECT or EXECUTE privileges to SYS objects (p. 1859)


• Revoking SELECT or EXECUTE privileges on SYS objects (p. 1861)
• Granting privileges to non-master users (p. 1861)
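
For example, the following minimal PL/SQL sketch grants SELECT on V_$SQLAREA to a hypothetical user named MYUSER. The grantee name is a placeholder; run the block as the master user in your SQL client.

BEGIN
    -- Grant SELECT on the SYS-owned view V_$SQLAREA to the user MYUSER (placeholder).
    rdsadmin.rdsadmin_util.grant_sys_object(
        p_obj_name  => 'V_$SQLAREA',
        p_grantee   => 'MYUSER',
        p_privilege => 'SELECT');
END;
/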

RDS for Oracle instance classes


The computation and memory capacity of a DB instance is determined by its instance class. The DB
instance class you need depends on your processing power and memory requirements.


Supported RDS for Oracle instance classes


The supported RDS for Oracle instance classes are a subset of the RDS DB instance classes. For the
complete list of RDS instance classes, see DB instance classes (p. 11).

RDS for Oracle also offers instance classes that are optimized for workloads that require additional
memory, storage, and I/O per vCPU. These instance classes use the following naming convention:

db.r5b.instance_size.tpcthreads_per_core.memratio
db.r5.instance_size.tpcthreads_per_core.memratio

The following is an example of a supported instance class:

db.r5b.4xlarge.tpc2.mem2x

The components of the preceding instance class name are as follows:

• db.r5b.4xlarge – The name of the instance class.


• tpc2 – The threads per core. A value of 2 means that multithreading is turned on. If the value is 1,
multithreading is turned off.
• mem2x – The ratio of additional memory to the standard memory for the instance class. In this
example, the optimization provides twice as much memory as a standard db.r5.4xlarge instance.
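
To check which instance classes you can currently order for a given engine and edition in your AWS Region, you can query the service with the AWS CLI. The following is a minimal sketch; the engine value is an example only, and the output depends on your Region and account.

aws rds describe-orderable-db-instance-options \
    --engine oracle-ee \
    --query "OrderableDBInstanceOptions[].DBInstanceClass" \
    --output text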

The following lists show all instance classes supported for Oracle Database. Oracle Database 12c Release
1 (12.1.0.2) and Oracle Database 12c Release 2 (12.2.0.1) are listed, but support for these
releases is deprecated. For information about the memory attributes of each type, see RDS for Oracle
instance types.

Enterprise Edition (EE), Bring Your Own License (BYOL)

Oracle Database 19c and higher, Oracle Database 12c Release 2 (12.2.0.1) (deprecated):

Standard instance classes
• db.m6i.large–db.m6i.32xlarge (19c only)
• db.m5d.large–db.m5d.24xlarge
• db.m5.large–db.m5.24xlarge

Memory optimized instance classes
• db.r6i.large–db.r6i.32xlarge (19c only)
• db.r5d.large–db.r5d.24xlarge
• db.r5b.8xlarge.tpc2.mem3x
• db.r5b.6xlarge.tpc2.mem4x
• db.r5b.4xlarge.tpc2.mem4x
• db.r5b.4xlarge.tpc2.mem3x
• db.r5b.4xlarge.tpc2.mem2x
• db.r5b.2xlarge.tpc2.mem8x
• db.r5b.2xlarge.tpc2.mem4x
• db.r5b.2xlarge.tpc1.mem2x
• db.r5b.xlarge.tpc2.mem4x
• db.r5b.xlarge.tpc2.mem2x
• db.r5b.large.tpc1.mem2x
• db.r5b.large–db.r5b.24xlarge
• db.r5.12xlarge.tpc2.mem2x
• db.r5.8xlarge.tpc2.mem3x
• db.r5.6xlarge.tpc2.mem4x
• db.r5.4xlarge.tpc2.mem4x
• db.r5.4xlarge.tpc2.mem3x
• db.r5.4xlarge.tpc2.mem2x
• db.r5.2xlarge.tpc2.mem8x
• db.r5.2xlarge.tpc2.mem4x
• db.r5.2xlarge.tpc1.mem2x
• db.r5.xlarge.tpc2.mem4x
• db.r5.xlarge.tpc2.mem2x
• db.r5.large.tpc1.mem2x
• db.r5.large–db.r5.24xlarge
• db.x2iedn.xlarge–db.x2iedn.32xlarge
• db.x2iezn.2xlarge–db.x2iezn.12xlarge
• db.x2idn.16xlarge–db.x2idn.32xlarge
• db.x1e.xlarge–db.x1e.32xlarge
• db.x1.16xlarge–db.x1.32xlarge
• db.z1d.large–db.z1d.12xlarge

Burstable performance instance classes
• db.t3.small–db.t3.2xlarge

Oracle Database 12c Release 1 (12.1.0.2) (deprecated):

Standard instance classes
• db.m5.large–db.m5.24xlarge

Memory optimized instance classes
• db.r5.12xlarge.tpc2.mem2x
• db.r5b.large–db.r5b.24xlarge
• db.r5.8xlarge.tpc2.mem3x
• db.r5.6xlarge.tpc2.mem4x
• db.r5.4xlarge.tpc2.mem4x
• db.r5.4xlarge.tpc2.mem3x
• db.r5.4xlarge.tpc2.mem2x
• db.r5.2xlarge.tpc2.mem8x
• db.r5.2xlarge.tpc2.mem4x
• db.r5.2xlarge.tpc1.mem2x
• db.r5.xlarge.tpc2.mem4x
• db.r5.xlarge.tpc2.mem2x
• db.r5.large.tpc1.mem2x
• db.r5.large–db.r5.24xlarge
• db.x1e.xlarge–db.x1e.32xlarge
• db.x1.16xlarge–db.x1.32xlarge
• db.z1d.large–db.z1d.12xlarge

Burstable performance instance classes
• db.t3.micro–db.t3.2xlarge

Standard Edition 2 (SE2), Bring Your Own License (BYOL)

Oracle Database 19c and higher, Oracle Database 12c Release 2 (12.2.0.1) (deprecated):

Standard instance classes
• db.m6i.large–db.m6i.4xlarge (19c only)
• db.m5d.large–db.m5d.4xlarge
• db.m5.large–db.m5.4xlarge

Memory optimized instance classes
• db.r6i.large–db.r6i.4xlarge (19c only)
• db.r5d.large–db.r5d.4xlarge
• db.r5.4xlarge.tpc2.mem4x
• db.r5.4xlarge.tpc2.mem3x
• db.r5.4xlarge.tpc2.mem2x
• db.r5.2xlarge.tpc2.mem8x
• db.r5.2xlarge.tpc2.mem4x
• db.r5.2xlarge.tpc1.mem2x
• db.r5.xlarge.tpc2.mem4x
• db.r5.xlarge.tpc2.mem2x
• db.r5.large.tpc1.mem2x
• db.r5.large–db.r5.4xlarge
• db.r5b.large–db.r5b.4xlarge
• db.x2iedn.xlarge–db.x2iedn.4xlarge
• db.x2iezn.2xlarge–db.x2iezn.4xlarge
• db.z1d.large–db.z1d.3xlarge

Burstable performance instance classes
• db.t3.small–db.t3.2xlarge

Oracle Database 12c Release 1 (12.1.0.2) (deprecated):

Standard instance classes
• db.m5.large–db.m5.4xlarge

Memory optimized instance classes
• db.r5.4xlarge.tpc2.mem4x
• db.r5.4xlarge.tpc2.mem3x
• db.r5.4xlarge.tpc2.mem2x
• db.r5.2xlarge.tpc2.mem8x
• db.r5.2xlarge.tpc2.mem4x
• db.r5.2xlarge.tpc1.mem2x
• db.r5.xlarge.tpc2.mem4x
• db.r5.xlarge.tpc2.mem2x
• db.r5.large.tpc1.mem2x
• db.r5.large–db.r5.4xlarge
• db.r5b.large–db.r5b.4xlarge
• db.z1d.large–db.z1d.3xlarge

Burstable performance instance classes
• db.t3.micro–db.t3.2xlarge

Standard Edition 2 (SE2), License Included

Oracle Database 19c and higher, Oracle Database 12c Release 2 (12.2.0.1) (deprecated):

Standard instance classes
• db.m5.large–db.m5.4xlarge

Memory optimized instance classes
• db.r5.large–db.r5.4xlarge

Burstable performance instance classes
• db.t3.small–db.t3.2xlarge

Oracle Database 12c Release 1 (12.1.0.2) (deprecated):

Standard instance classes
• db.m5.large–db.m5.4xlarge

Memory optimized instance classes
• db.r5.large–db.r5.4xlarge

Burstable performance instance classes
• db.t3.micro–db.t3.2xlarge


Note
We encourage all BYOL customers to consult their licensing agreement to assess the impact
of Amazon RDS for Oracle deprecations. For more information on the compute capacity of DB
instance classes supported by RDS for Oracle, see DB instance classes (p. 11) and Configuring
the processor for a DB instance class in RDS for Oracle (p. 71).
Note
If you have DB snapshots of DB instances that were using deprecated DB instance classes, you
can choose a DB instance class that is not deprecated when you restore the DB snapshots. For
more information, see Restoring from a DB snapshot (p. 615).

Deprecated Oracle DB instance classes


The following DB instance classes are deprecated for RDS for Oracle:

• db.m1, db.m2, db.m3, db.m4


• db.t3.micro (supported only on 12.1.0.2, which is deprecated)
• db.t1, db.t2
• db.r1, db.r2, db.r3, db.r4

The preceding DB instance classes have been replaced by better performing DB instance classes that are
generally available at a lower cost. If you have DB instances that use deprecated DB instance classes, you
have the following options:

• Allow Amazon RDS to modify each DB instance automatically to use a comparable non-deprecated DB
instance class. For deprecation timelines, see DB instance class types (p. 11).
• Change the DB instance class yourself by modifying the DB instance. For more information, see
Modifying an Amazon RDS DB instance (p. 401).

If you have DB snapshots of DB instances that were using deprecated DB instance classes, you can choose
a DB instance class that is not deprecated when you restore the DB snapshots. For more information, see
Restoring from a DB snapshot (p. 615).

RDS for Oracle database architecture


The multitenant architecture enables an Oracle database to function as a multitenant container database
(CDB). A CDB can include customer-created pluggable databases (PDBs). A non-CDB is an Oracle database
that uses the traditional architecture, which can't contain PDBs. For more information about the
multitenant architecture, see Oracle Multitenant Administrator’s Guide.

For Oracle Database 19c and higher, RDS for Oracle supports the single-tenant configuration of the
multitenant architecture. In this case, your CDB contains only one PDB. The single-tenant configuration
of the multitenant architecture uses the same RDS APIs as the non-CDB architecture. Thus, your
experience with a PDB is mostly identical to your experience with a non-CDB.
Note
You can't access the CDB itself.

In Oracle Database 21c and higher, all databases are CDBs. In contrast, you can create an Oracle
Database 19c DB instance as either a CDB or a non-CDB. You can't upgrade a non-CDB to a CDB, but you
can convert an Oracle Database 19c non-CDB to a CDB, and then upgrade it. You can't convert a CDB to a
non-CDB.

For more information, see the following resources:

• Working with CDBs in RDS for Oracle (p. 1840)


• Limitations of a single-tenant CDB (p. 1805)


• Creating an Amazon RDS DB instance (p. 300)

RDS for Oracle parameters


In Amazon RDS, you manage parameters using parameter groups. For more information, see Working
with parameter groups (p. 347). To view the supported parameters for a specific Oracle Database edition
and version, run the AWS CLI command describe-engine-default-parameters.

For example, to view the supported parameters for the Enterprise Edition of Oracle Database 19c, run
the following command.

aws rds describe-engine-default-parameters \
    --db-parameter-group-family oracle-ee-19

RDS for Oracle character sets


RDS for Oracle supports two types of character sets: the DB character set and national character set.

DB character set
The Oracle database character set is used in the CHAR, VARCHAR2, and CLOB data types. The database
also uses this character set for metadata such as table names, column names, and SQL statements. The
Oracle database character set is typically referred to as the DB character set.

You set the character set when you create a DB instance. You can't change the DB character set after you
create the database.

Supported DB character sets


The following Oracle DB character sets are supported in Amazon RDS. You can use a value from this
list with the --character-set-name parameter of the AWS CLI create-db-instance command or with the
CharacterSetName parameter of the Amazon RDS API CreateDBInstance operation.
Note
The character set for a CDB is always AL32UTF8. You can set a different character set for the
PDB only.

• AL32UTF8 – Unicode 5.0 UTF-8 Universal character set (default)
• AR8ISO8859P6 – ISO 8859-6 Latin/Arabic
• AR8MSWIN1256 – Microsoft Windows Code Page 1256 8-bit Latin/Arabic
• BLT8ISO8859P13 – ISO 8859-13 Baltic
• BLT8MSWIN1257 – Microsoft Windows Code Page 1257 8-bit Baltic
• CL8ISO8859P5 – ISO 8859-5 Latin/Cyrillic
• CL8MSWIN1251 – Microsoft Windows Code Page 1251 8-bit Latin/Cyrillic
• EE8ISO8859P2 – ISO 8859-2 East European
• EL8ISO8859P7 – ISO 8859-7 Latin/Greek
• EE8MSWIN1250 – Microsoft Windows Code Page 1250 8-bit East European
• EL8MSWIN1253 – Microsoft Windows Code Page 1253 8-bit Latin/Greek
• IW8ISO8859P8 – ISO 8859-8 Latin/Hebrew
• IW8MSWIN1255 – Microsoft Windows Code Page 1255 8-bit Latin/Hebrew
• JA16EUC – EUC 24-bit Japanese
• JA16EUCTILDE – Same as JA16EUC except for mapping of wave dash and tilde to and from Unicode
• JA16SJIS – Shift-JIS 16-bit Japanese
• JA16SJISTILDE – Same as JA16SJIS except for mapping of wave dash and tilde to and from Unicode
• KO16MSWIN949 – Microsoft Windows Code Page 949 Korean
• NE8ISO8859P10 – ISO 8859-10 North European
• NEE8ISO8859P4 – ISO 8859-4 North and Northeast European
• TH8TISASCII – Thai Industrial Standard 620-2533-ASCII 8-bit
• TR8MSWIN1254 – Microsoft Windows Code Page 1254 8-bit Turkish
• US7ASCII – ASCII 7-bit American
• UTF8 – Unicode 3.0 UTF-8 Universal character set, CESU-8 compliant
• VN8MSWIN1258 – Microsoft Windows Code Page 1258 8-bit Vietnamese
• WE8ISO8859P1 – Western European 8-bit ISO 8859 Part 1
• WE8ISO8859P15 – ISO 8859-15 West European
• WE8ISO8859P9 – ISO 8859-9 West European and Turkish
• WE8MSWIN1252 – Microsoft Windows Code Page 1252 8-bit West European
• ZHS16GBK – GBK 16-bit Simplified Chinese
• ZHT16HKSCS – Microsoft Windows Code Page 950 with Hong Kong Supplementary Character Set HKSCS-2001. Character set conversion is based on Unicode 3.0.
• ZHT16MSWIN950 – Microsoft Windows Code Page 950 Traditional Chinese
• ZHT32EUC – EUC 32-bit Traditional Chinese

NLS_LANG environment variable


A locale is a set of information addressing linguistic and cultural requirements that corresponds to a
given language and country. Setting the NLS_LANG environment variable in your client's environment
is the simplest way to specify locale behavior for Oracle. This variable sets the language and territory
used by the client application and the database server. It also indicates the client's character set,
which corresponds to the character set for data entered or displayed by a client application. For more
information on NLS_LANG and character sets, see What is a character set or code page? in the Oracle
documentation.

NLS initialization parameters


You can also set the following National Language Support (NLS) initialization parameters at the instance
level for an Oracle DB instance in Amazon RDS:

• NLS_DATE_FORMAT
• NLS_LENGTH_SEMANTICS
• NLS_NCHAR_CONV_EXCP
• NLS_TIME_FORMAT
• NLS_TIME_TZ_FORMAT
• NLS_TIMESTAMP_FORMAT
• NLS_TIMESTAMP_TZ_FORMAT

For information about modifying instance parameters, see Working with parameter groups (p. 347).

You can set other NLS initialization parameters in your SQL client. For example, the following statement
sets the NLS_LANGUAGE initialization parameter to GERMAN in a SQL client that is connected to an
Oracle DB instance:

ALTER SESSION SET NLS_LANGUAGE=GERMAN;

For information about connecting to an Oracle DB instance with a SQL client, see Connecting to your
RDS for Oracle DB instance (p. 1806).

National character set


The national character set is used in the NCHAR, NVARCHAR2, and NCLOB data types. The national
character set is typically referred to as the NCHAR character set. Unlike the DB character set, the NCHAR
character set doesn't affect database metadata.

The NCHAR character set supports the following character sets:

• AL16UTF16 (default)
• UTF8

You can specify either value with the --nchar-character-set-name parameter of the create-
db-instance command (AWS CLI version 2 only). If you use the Amazon RDS API, specify the
NcharCharacterSetName parameter of CreateDBInstance operation. You can't change the national
character set after you create the database.
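
As an illustration, the following AWS CLI sketch (AWS CLI version 2) creates a DB instance with an explicit DB character set and national character set. The instance identifier, class, credentials, and storage values are placeholders, not recommendations, and your instance might require additional options.

aws rds create-db-instance \
    --db-instance-identifier my-oracle-instance \
    --engine oracle-ee \
    --db-instance-class db.r5.large \
    --allocated-storage 100 \
    --master-username admin \
    --master-user-password choose_a_password \
    --character-set-name WE8ISO8859P1 \
    --nchar-character-set-name AL16UTF16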


For more information about Unicode in Oracle databases, see Supporting multilingual databases with
unicode in the Oracle documentation.

RDS for Oracle limitations


In the following sections, you can find important limitations of using RDS for Oracle.
Note
This list is not exhaustive.

Topics
• Oracle file size limits in Amazon RDS (p. 1804)
• Public synonyms for Oracle-supplied schemas (p. 1804)
• Schemas for unsupported features (p. 1804)
• Limitations for Oracle DBA privileges (p. 1796)
• Limitations of a single-tenant CDB (p. 1805)
• Deprecation of TLS 1.0 and 1.1 Transport Layer Security (p. 1805)

Oracle file size limits in Amazon RDS


The maximum size of a single file on RDS for Oracle DB instances is 16 TiB (tebibytes). This limit is
imposed by the ext4 filesystem used by the instance. Thus, Oracle bigfile data files are limited to 16 TiB.
If you try to resize a data file in a bigfile tablespace to a value over the limit, you receive an error such as
the following.

ORA-01237: cannot extend datafile 6
ORA-01110: data file 6: '/rdsdbdata/db/mydir/datafile/myfile.dbf'
ORA-27059: could not reduce file size
Linux-x86_64 Error: 27: File too large
Additional information: 2

Public synonyms for Oracle-supplied schemas


Don't create or modify public synonyms for Oracle-supplied schemas, including SYS, SYSTEM, and
RDSADMIN. Such actions might result in invalidation of core database components and affect the
availability of your DB instance.

You can create public synonyms referencing objects in your own schemas.

Schemas for unsupported features


In general, Amazon RDS doesn't prevent you from creating schemas for unsupported features. However,
if you create schemas for Oracle features and components that require SYS privileges, you can damage
the data dictionary and affect your instance availability. Use only supported features and schemas that
are available in Adding options to Oracle DB instances (p. 1990).

Limitations for Oracle DBA privileges


In the database, a role is a collection of privileges that you can grant to or revoke from a user. An Oracle
database uses roles to provide security.

The predefined role DBA normally allows all administrative privileges on an Oracle database. When you
create a DB instance, your master user account gets DBA privileges (with some limitations). To deliver a
managed experience, an RDS for Oracle database doesn't provide the following privileges for the DBA
role:

• ALTER DATABASE
• ALTER SYSTEM
• CREATE ANY DIRECTORY
• DROP ANY DIRECTORY
• GRANT ANY PRIVILEGE
• GRANT ANY ROLE

Use the master user account for administrative tasks such as creating additional user accounts in the
database. You can't use SYS, SYSTEM, and other Oracle-supplied administrative accounts.

Limitations of a single-tenant CDB


The following options aren't supported for the single-tenant configuration of the multitenant
architecture:

• Database Activity Streams


• Oracle Enterprise Manager
• Oracle Enterprise Manager Agent
• Oracle Label Security

The following operations work in a single-tenant CDB, but no customer-visible mechanism can detect
the current status of the operations:

• Enabling and disabling block change tracking (p. 1903)


• Enabling auditing for the SYS.AUD$ table (p. 1880)

Note
Auditing information isn't available from within the PDB.

Deprecation of TLS 1.0 and 1.1 Transport Layer Security


Transport Layer Security protocol versions 1.0 and 1.1 (TLS 1.0 and TLS 1.1) are deprecated. In
accordance with security best practices, Oracle has deprecated the use of TLS 1.0 and TLS 1.1. To meet
your security requirements, RDS for Oracle strongly recommends that you use TLS 1.2 instead.


Connecting to your RDS for Oracle DB instance


After Amazon RDS provisions your Oracle DB instance, you can use any standard SQL client application
to log in to your DB instance. Because RDS is a managed service, you can't log in as SYS or SYSTEM. For
more information, see RDS for Oracle users and privileges (p. 1796).

In this topic, you learn how to use Oracle SQL Developer or SQL*Plus to connect to an RDS for Oracle DB
instance. For an example that walks you through the process of creating and connecting to a sample DB
instance, see Creating and connecting to an Oracle DB instance (p. 222).

Topics
• Finding the endpoint of your RDS for Oracle DB instance (p. 1806)
• Connecting to your DB instance using Oracle SQL developer (p. 1808)
• Connecting to your DB instance using SQL*Plus (p. 1810)
• Considerations for security groups (p. 1811)
• Considerations for process architecture (p. 1811)
• Troubleshooting connections to your Oracle DB instance (p. 1811)
• Modifying connection properties using sqlnet.ora parameters (p. 1812)

Finding the endpoint of your RDS for Oracle DB instance

Each Amazon RDS DB instance has an endpoint, and each endpoint has the DNS name and port number
for the DB instance. To connect to your DB instance using a SQL client application, you need the DNS
name and port number for your DB instance.

You can find the endpoint for a DB instance using the Amazon RDS console or the AWS CLI.
Note
If you are using Kerberos authentication, see Connecting to Oracle with Kerberos
authentication (p. 1831).

Console
To find the endpoint using the console

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the console, choose the AWS Region of your DB instance.
3. Find the DNS name and port number for your DB instance.

a. Choose Databases to display a list of your DB instances.


b. Choose the Oracle DB instance name to display the instance details.
c. On the Connectivity & security tab, copy the endpoint. Also, note the port number. You need
both the endpoint and the port number to connect to the DB instance.


AWS CLI
To find the endpoint of an Oracle DB instance by using the AWS CLI, call the describe-db-instances
command.

Example To find the endpoint using the AWS CLI

aws rds describe-db-instances

Search for Endpoint in the output to find the DNS name and port number for your DB instance. The
Address line in the output contains the DNS name. The following is an example of the JSON endpoint
output.

"Endpoint": {
"HostedZoneId": "Z1PVIF0B656C1W",
"Port": 3306,
"Address": "myinstance.123456789012.us-west-2.rds.amazonaws.com"

1807
Amazon Relational Database Service User Guide
SQL developer

},

Note
The output might contain information for multiple DB instances.
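
If you know the DB instance identifier, you can narrow the output to just the endpoint. The following sketch assumes a DB instance named mydbinstance; replace the identifier with your own.

aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query "DBInstances[0].Endpoint.[Address,Port]" \
    --output text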

Connecting to your DB instance using Oracle SQL developer

In this procedure, you connect to your DB instance by using Oracle SQL Developer. To download a
standalone version of this utility, see the Oracle SQL developer downloads page.

To connect to your DB instance, you need its DNS name and port number. For information about finding
the DNS name and port number for a DB instance, see Finding the endpoint of your RDS for Oracle DB
instance (p. 1806).

To connect to a DB instance using SQL developer

1. Start Oracle SQL Developer.


2. On the Connections tab, choose the add (+) icon.

3. In the New/Select Database Connection dialog box, provide the information for your DB instance:

• For Connection Name, enter a name that describes the connection, such as Oracle-RDS.
• For Username, enter the name of the database administrator for the DB instance.
• For Password, enter the password for the database administrator.
• For Hostname, enter the DNS name of the DB instance.
• For Port, enter the port number.


• For SID, enter the DB name. You can find the DB name on the Configuration tab of your database
details page.

The completed dialog box should look similar to the following.

4. Choose Connect.
5. You can now start creating your own databases and running queries against your DB instance and
databases as usual. To run a test query against your DB instance, do the following:

a. In the Worksheet tab for your connection, enter the following SQL query.

SELECT NAME FROM V$DATABASE;

b. Choose the execute icon to run the query.

SQL Developer returns the database name.


Connecting to your DB instance using SQL*Plus


You can use a utility like SQL*Plus to connect to an Amazon RDS DB instance running Oracle. To
download Oracle Instant Client, which includes a standalone version of SQL*Plus, see Oracle Instant
Client Downloads.

To connect to your DB instance, you need its DNS name and port number. For information about finding
the DNS name and port number for a DB instance, see Finding the endpoint of your RDS for Oracle DB
instance (p. 1806).

Example To connect to an Oracle DB instance using SQL*Plus


In the following examples, substitute the user name of your DB instance administrator. Also, substitute
the DNS name for your DB instance, and then include the port number and the Oracle SID. The SID value
is the name of the DB instance's database that you specified when you created the DB instance, and not
the name of the DB instance.

For Linux, macOS, or Unix:

sqlplus 'user_name@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dns_name)(PORT=port))
(CONNECT_DATA=(SID=database_name)))'

For Windows:

sqlplus user_name@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dns_name)(PORT=port))
(CONNECT_DATA=(SID=database_name)))

You should see output similar to the following.

SQL*Plus: Release 12.1.0.2.0 Production on Mon Aug 21 09:42:20 2017

After you enter the password for the user, the SQL prompt appears.


SQL>

Note
The shorter format connection string (EZ Connect), such as sqlplus USER/
PASSWORD@longer-than-63-chars-rds-endpoint-here:1521/database-identifier,
might exceed the maximum character limit, so we recommend that you don't use it to
connect.

Considerations for security groups


For you to connect to your DB instance, it must be associated with a security group that contains the
necessary IP addresses and network configuration. Your DB instance might use the default security
group. If you assigned a default, nonconfigured security group when you created the DB instance, the
firewall prevents connections. For information about creating a new security group, see Controlling
access with security groups (p. 2680).

After you create the new security group, you modify your DB instance to associate it with the security
group. For more information, see Modifying an Amazon RDS DB instance (p. 401).

You can enhance security by using SSL to encrypt connections to your DB instance. For more information,
see Oracle Secure Sockets Layer (p. 2068).

Considerations for process architecture


Server processes handle user connections to an Oracle DB instance. By default, the Oracle DB instance
uses dedicated server processes. With dedicated server processes, each server process services only one
user process. You can optionally configure shared server processes. With shared server processes, each
server process can service multiple user processes.

You might consider using shared server processes when a high number of user sessions are using too
much memory on the server. You might also consider shared server processes when sessions connect
and disconnect very often, resulting in performance issues. There are also disadvantages to using
shared server processes. For example, they can strain CPU resources, and they are more complicated to
configure and administer.

For more information about dedicated and shared server processes, see About dedicated and shared
server processes in the Oracle documentation. For more information about configuring shared server
processes on an RDS for Oracle DB instance, see How do I configure Amazon RDS for Oracle database to
work with shared servers? in the Knowledge Center.

Troubleshooting connections to your Oracle DB instance

The following are issues you might encounter when you try to connect to your Oracle DB instance.

Issue: Unable to connect to your DB instance.

For a newly created DB instance, the DB instance has a status of creating until it is ready to use. When
the state changes to available, you can connect to the DB instance. Depending on the DB instance class
and the amount of storage, it can take up to 20 minutes before the new DB instance is available.

Issue: Unable to connect to your DB instance.

If you can't send or receive communications over the port that you specified when you created the DB
instance, you can't connect to the DB instance. Check with your network administrator to verify that the
port you specified for your DB instance allows inbound and outbound communication.

Issue: Unable to connect to your DB instance.

The access rules enforced by your local firewall and the IP addresses you authorized to access your DB
instance in the security group for the DB instance might not match. The problem is most likely the
inbound or outbound rules on your firewall.

You can add or edit an inbound rule in the security group. For Source, choose My IP. This allows access
to the DB instance from the IP address detected in your browser. For more information, see Amazon VPC
VPCs and Amazon RDS (p. 2688).

For more information about security groups, see Controlling access with security groups (p. 2680).

To walk through the process of setting up rules for your security group, see Tutorial: Create a VPC for
use with a DB instance (IPv4 only) (p. 2706).

Issue: Connect failed because target host or object does not exist – Oracle, Error: ORA-12545

Make sure that you specified the server name and port number correctly. For Server name, enter the
DNS name from the console. For information about finding the DNS name and port number for a DB
instance, see Finding the endpoint of your RDS for Oracle DB instance (p. 1806).

Issue: Invalid username/password; logon denied – Oracle, Error: ORA-01017

You were able to reach the DB instance, but the connection was refused. This is usually caused by
providing an incorrect user name or password. Verify the user name and password, and then retry.

Issue: TNS:listener does not currently know of SID given in connect descriptor – Oracle, Error: ORA-12505

Ensure the correct SID is entered. The SID is the same as your DB name. Find the DB name on the
Configuration tab of the Databases page for your instance. You can also find the DB name using the
AWS CLI:

aws rds describe-db-instances --query 'DBInstances[*].[DBInstanceIdentifier,DBName]' --output text

For more information on connection issues, see Can't connect to Amazon RDS DB instance (p. 2727).

Modifying connection properties using sqlnet.ora parameters

The sqlnet.ora file includes parameters that configure Oracle Net features on Oracle database servers
and clients. Using the parameters in the sqlnet.ora file, you can modify properties for connections in and
out of the database.

For more information about why you might set sqlnet.ora parameters, see Configuring profile
parameters in the Oracle documentation.

Setting sqlnet.ora parameters


Amazon RDS for Oracle parameter groups include a subset of sqlnet.ora parameters. You set them
in the same way that you set other Oracle parameters. The sqlnetora. prefix identifies which
parameters are sqlnet.ora parameters. For example, in an Oracle parameter group in Amazon RDS, the
default_sdu_size sqlnet.ora parameter is sqlnetora.default_sdu_size.

For information about managing parameter groups and setting parameter values, see Working with
parameter groups (p. 347).
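
For example, the following AWS CLI sketch sets the dynamic sqlnetora.sqlnet.expire_time parameter in a custom parameter group. The parameter group name is a placeholder, and the group must already be associated with your DB instance.

aws rds modify-db-parameter-group \
    --db-parameter-group-name my-oracle-parameter-group \
    --parameters "ParameterName=sqlnetora.sqlnet.expire_time,ParameterValue=10,ApplyMethod=immediate"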

Supported sqlnet.ora parameters


Amazon RDS supports the following sqlnet.ora parameters. Changes to dynamic sqlnet.ora parameters
take effect immediately.

• sqlnetora.default_sdu_size – Valid values (Oracle 12c): 512 to 2097152. Dynamic. The session data unit (SDU) size, in bytes. The SDU is the amount of data that is put in a buffer and sent across the network at one time.
• sqlnetora.diag_adr_enabled – Valid values: ON, OFF. Dynamic. A value that enables or disables Automatic Diagnostic Repository (ADR) tracing. ON specifies that ADR file tracing is used. OFF specifies that non-ADR file tracing is used.
• sqlnetora.recv_buf_size – Valid values: 8192 to 268435456. Dynamic. The buffer space limit for receive operations of sessions, supported by the TCP/IP, TCP/IP with SSL, and SDP protocols.
• sqlnetora.send_buf_size – Valid values: 8192 to 268435456. Dynamic. The buffer space limit for send operations of sessions, supported by the TCP/IP, TCP/IP with SSL, and SDP protocols.
• sqlnetora.sqlnet.allowed_logon_version_client – Valid values: 8, 10, 11, 12. Dynamic. Minimum authentication protocol version allowed for clients, and servers acting as clients, to establish a connection to Oracle DB instances.
• sqlnetora.sqlnet.allowed_logon_version_server – Valid values: 8, 9, 10, 11, 12, 12a. Dynamic. Minimum authentication protocol version allowed to establish a connection to Oracle DB instances.
• sqlnetora.sqlnet.expire_time – Valid values: 0 to 1440. Dynamic. Time interval, in minutes, to send a check to verify that client-server connections are active.
• sqlnetora.sqlnet.inbound_connect_timeout – Valid values: 0 or 10 to 7200. Dynamic. Time, in seconds, for a client to connect with the database server and provide the necessary authentication information.
• sqlnetora.sqlnet.outbound_connect_timeout – Valid values: 0 or 10 to 7200. Dynamic. Time, in seconds, for a client to establish an Oracle Net connection to the DB instance.
• sqlnetora.sqlnet.recv_timeout – Valid values: 0 or 10 to 7200. Dynamic. Time, in seconds, for a database server to wait for client data after establishing a connection.
• sqlnetora.sqlnet.send_timeout – Valid values: 0 or 10 to 7200. Dynamic. Time, in seconds, for a database server to complete a send operation to clients after establishing a connection.
• sqlnetora.tcp.connect_timeout – Valid values: 0 or 10 to 7200. Dynamic. Time, in seconds, for a client to establish a TCP connection to the database server.
• sqlnetora.trace_level_server – Valid values: 0, 4, 10, 16, OFF, USER, ADMIN, SUPPORT. Dynamic. For non-ADR tracing, turns server tracing on at a specified level or turns it off.

The default value for each supported sqlnet.ora parameter is the Oracle default for the release. For
information about default values for Oracle Database 12c, see Parameters for the sqlnet.ora file in the
Oracle Database 12c documentation.

Viewing sqlnet.ora parameters


You can view sqlnet.ora parameters and their settings using the AWS Management Console, the AWS CLI,
or a SQL client.

Viewing sqlnet.ora parameters using the console


For information about viewing parameters in a parameter group, see Working with parameter
groups (p. 347).

In Oracle parameter groups, the sqlnetora. prefix identifies which parameters are sqlnet.ora
parameters.

Viewing sqlnet.ora parameters using the AWS CLI


To view the sqlnet.ora parameters that were configured in an Oracle parameter group, use the AWS CLI
describe-db-parameters command.
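
For example, the following sketch lists only the sqlnet.ora parameters in a parameter group named my-oracle-parameter-group (a placeholder). The JMESPath filter simply matches the sqlnetora. prefix.

aws rds describe-db-parameters \
    --db-parameter-group-name my-oracle-parameter-group \
    --query "Parameters[?starts_with(ParameterName, 'sqlnetora.')].[ParameterName,ParameterValue]" \
    --output table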

To view all of the sqlnet.ora parameters for an Oracle DB instance, call the AWS CLI download-db-
log-file-portion command. Specify the DB instance identifier, the log file name, and the type of output.

Example

The following code lists all of the sqlnet.ora parameters for mydbinstance.

For Linux, macOS, or Unix:

aws rds download-db-log-file-portion \
    --db-instance-identifier mydbinstance \
    --log-file-name trace/sqlnet-parameters \
    --output text

For Windows:

aws rds download-db-log-file-portion ^
    --db-instance-identifier mydbinstance ^
    --log-file-name trace/sqlnet-parameters ^
    --output text

Viewing sqlnet.ora parameters using a SQL client


After you connect to the Oracle DB instance in a SQL client, the following query lists the sqlnet.ora
parameters.

SELECT * FROM TABLE(
    rdsadmin.rds_file_util.read_text_file(
        p_directory => 'BDUMP',
        p_filename  => 'sqlnet-parameters'));

For information about connecting to an Oracle DB instance in a SQL client, see Connecting to your RDS
for Oracle DB instance (p. 1806).


Securing Oracle DB instance connections


Amazon RDS for Oracle supports SSL/TLS encrypted connections and also the Oracle Native Network
Encryption (NNE) option to encrypt connections between your application and your Oracle DB instance.
For more information about the Oracle Native Network Encryption option, see Oracle native network
encryption (p. 2057).

Topics
• Using SSL with an RDS for Oracle DB instance (p. 1816)
• Updating applications to connect to Oracle DB instances using new SSL/TLS certificates (p. 1816)
• Using native network encryption with an RDS for Oracle DB instance (p. 1819)
• Configuring Kerberos authentication for Amazon RDS for Oracle (p. 1819)
• Configuring UTL_HTTP access using certificates and an Oracle wallet (p. 1832)

Using SSL with an RDS for Oracle DB instance


Secure Sockets Layer (SSL) is an industry-standard protocol for securing network connections between
client and server. After SSL version 3.0, the name was changed to Transport Layer Security (TLS), but we
still often refer to the protocol as SSL. Amazon RDS supports SSL encryption for Oracle DB instances.
Using SSL, you can encrypt a connection between your application client and your Oracle DB instance.
SSL support is available in all AWS Regions for Oracle.

To enable SSL encryption for an Oracle DB instance, add the Oracle SSL option to the option group
associated with the DB instance. Amazon RDS uses a second port, as required by Oracle, for SSL
connections. Doing this allows both clear text and SSL-encrypted communication to occur at the same
time between a DB instance and an Oracle client. For example, you can use the port with clear text
communication to communicate with other resources inside a VPC while using the port with SSL-
encrypted communication to communicate with resources outside the VPC.

For more information, see Oracle Secure Sockets Layer (p. 2068).
Note
You can't use both SSL and Oracle native network encryption (NNE) on the same DB instance.
Before you can use SSL encryption, you must disable any other connection encryption.
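
As an illustration of adding the option with the AWS CLI, the following sketch adds the SSL option to an existing option group. The option group name, port, and security group ID are placeholders; see Oracle Secure Sockets Layer (p. 2068) for the authoritative list of option settings for your configuration.

aws rds add-option-to-option-group \
    --option-group-name my-oracle-option-group \
    --options "OptionName=SSL,Port=2484,VpcSecurityGroupMemberships=sg-0123456789abcdef0" \
    --apply-immediately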

Updating applications to connect to Oracle DB instances using new SSL/TLS certificates

As of January 13, 2023, Amazon RDS has published new Certificate Authority (CA) certificates for
connecting to your RDS DB instances using Secure Socket Layer or Transport Layer Security (SSL/TLS).
Following, you can find information about updating your applications to use the new certificates.

This topic can help you to determine whether any client applications use SSL/TLS to connect to your DB
instances.
Important
When you change the certificate for an Amazon RDS for Oracle DB instance, only the database
listener is restarted. The DB instance isn't restarted. Existing database connections are
unaffected, but new connections will encounter errors for a brief period while the listener is
restarted.
Note
For client applications that use SSL/TLS to connect to your DB instances, you must update your
client application trust stores to include the new CA certificates.


After you update your CA certificates in the client application trust stores, you can rotate the certificates
on your DB instances. We strongly recommend testing these procedures in a development or staging
environment before implementing them in your production environments.

For more information about certificate rotation, see Rotating your SSL/TLS certificate (p. 2596). For
more information about downloading certificates, see Using SSL/TLS to encrypt a connection to a DB
instance (p. 2591). For information about using SSL/TLS with Oracle DB instances, see Oracle Secure
Sockets Layer (p. 2068).

Topics
• Finding out whether applications connect using SSL (p. 1817)
• Updating your application trust store (p. 1817)
• Example Java code for establishing SSL connections (p. 1818)

Finding out whether applications connect using SSL


If your Oracle DB instance uses an option group with the SSL option added, you might be using
SSL. Check this by following the instructions in Listing the options and option settings for an option
group (p. 339). For information about the SSL option, see Oracle Secure Sockets Layer (p. 2068).

Check the listener log to determine whether there are SSL connections. The following is sample output
in a listener log.

date time * (CONNECT_DATA=(CID=(PROGRAM=program)(HOST=host)(USER=user))(SID=sid)) *
(ADDRESS=(PROTOCOL=tcps)(HOST=host)(PORT=port)) * establish * ORCL * 0

When PROTOCOL has the value tcps for an entry, it shows an SSL connection. However, when HOST
is 127.0.0.1, you can ignore the entry. Connections from 127.0.0.1 are a local management
agent on the DB instance. These connections aren't external SSL connections. Therefore, you have
applications connecting using SSL if you see listener log entries where PROTOCOL is tcps and HOST is
not 127.0.0.1.

To check the listener log, you can publish the log to Amazon CloudWatch Logs. For more information,
see Publishing Oracle logs to Amazon CloudWatch Logs (p. 927).
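
A minimal AWS CLI sketch of turning on listener log publishing follows. The instance identifier is a placeholder, and the log type shown is the one this section relies on; other Oracle log types can be added to the same list.

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["listener"]}' \
    --apply-immediately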

Updating your application trust store


You can update the trust store for applications that use SQL*Plus or JDBC for SSL/TLS connections.

Updating your application trust store for SQL*Plus


You can update the trust store for applications that use SQL*Plus for SSL/TLS connections.
Note
When you update the trust store, you can retain older certificates in addition to adding the new
certificates.

To update the trust store for SQL*Plus applications

1. Download the new root certificate that works for all AWS Regions and put the file in the
ssl_wallet directory.

For information about downloading the root certificate, see Using SSL/TLS to encrypt a connection
to a DB instance (p. 2591).
2. Run the following command to update the Oracle wallet.

prompt>orapki wallet add -wallet $ORACLE_HOME/ssl_wallet -trusted_cert -cert $ORACLE_HOME/ssl_wallet/ssl-cert.pem -auto_login_only

Replace the file name with the one that you downloaded.
3. Run the following command to confirm that the wallet was updated successfully.

prompt>orapki wallet display -wallet $ORACLE_HOME/ssl_wallet

Your output should contain the following.

Trusted Certificates:
Subject: CN=Amazon RDS Root 2019 CA,OU=Amazon RDS,O=Amazon Web Services\,
Inc.,L=Seattle,ST=Washington,C=US

Updating your application trust store for JDBC


You can update the trust store for applications that use JDBC for SSL/TLS connections.

For information about downloading the root certificate, see Using SSL/TLS to encrypt a connection to a
DB instance (p. 2591).

For sample scripts that import certificates, see Sample script for importing certificates into your trust
store (p. 2603).
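
A minimal sketch of importing a downloaded root certificate into a JKS trust store with the keytool
utility follows. The file name rds-root-certificate.pem, the keystore name clientkeystore.jks, and the
password changeit are placeholder values, not names defined elsewhere in this guide.

# Import the downloaded root certificate into the JKS trust store used by the JDBC client.
keytool -importcert \
    -alias rds-root \
    -file rds-root-certificate.pem \
    -keystore clientkeystore.jks \
    -storepass changeit \
    -noprompt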

Example Java code for establishing SSL connections


The following code example shows how to set up the SSL connection using JDBC.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class OracleSslConnectionTest {

    private static final String DB_SERVER_NAME = "<dns-name-provided-by-amazon-rds>";
    private static final int SSL_PORT = <ssl-option-port-configured-in-option-group>;
    private static final String DB_SID = "<oracle-sid>";
    private static final String DB_USER = "<user name>";
    private static final String DB_PASSWORD = "<password>";
    // This key store contains only the prod root CA.
    private static final String KEY_STORE_FILE_PATH = "<file-path-to-keystore>";
    private static final String KEY_STORE_PASS = "<keystore-password>";

    public static void main(String[] args) throws SQLException {
        final Properties properties = new Properties();
        final String connectionString = String.format(
            "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=%s)(PORT=%d))(CONNECT_DATA=(SID=%s)))",
            DB_SERVER_NAME, SSL_PORT, DB_SID);
        properties.put("user", DB_USER);
        properties.put("password", DB_PASSWORD);
        properties.put("oracle.jdbc.J2EE13Compliant", "true");
        properties.put("javax.net.ssl.trustStore", KEY_STORE_FILE_PATH);
        properties.put("javax.net.ssl.trustStoreType", "JKS");
        properties.put("javax.net.ssl.trustStorePassword", KEY_STORE_PASS);
        final Connection connection = DriverManager.getConnection(connectionString, properties);
        // If no exception is thrown, the TLS handshake passed and an SSL connection is open.
        connection.close();
    }
}

Important
After you have determined that your database connections use SSL/TLS and have updated
your application trust store, you can update your database to use the rds-ca-rsa2048-g1
certificates. For instructions, see step 3 in Updating your CA certificate by modifying your DB
instance (p. 2597).

Using native network encryption with an RDS for Oracle DB instance

Oracle Database offers two ways to encrypt data over the network: native network encryption (NNE) and
Transport Layer Security (TLS). NNE is a proprietary Oracle security feature, whereas TLS is an industry
standard. RDS for Oracle supports NNE for all editions of Oracle Database.

NNE has the following advantages over TLS:

• You can control NNE on the client and server using settings in the NNE option:
• SQLNET.ALLOW_WEAK_CRYPTO_CLIENTS and SQLNET.ALLOW_WEAK_CRYPTO
• SQLNET.CRYPTO_CHECKSUM_CLIENT and SQLNET.CRYPTO_CHECKSUM_SERVER
• SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT and SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER
• SQLNET.ENCRYPTION_CLIENT and SQLNET.ENCRYPTION_SERVER
• SQLNET.ENCRYPTION_TYPES_CLIENT and SQLNET.ENCRYPTION_TYPES_SERVER
• In most cases, you don't need to configure your client or server. In contrast, TLS requires you to
configure both client and server.
• No certificates are required. In TLS, the server requires a certificate (which eventually expires), and the
client requires a trusted root certificate for the certificate authority that issued the server’s certificate.

To enable NNE encryption for an Oracle DB instance, add the Oracle NNE option to the option group
associated with the DB instance. For more information, see Oracle native network encryption (p. 2057).
Note
You can't use both NNE and TLS on the same DB instance.
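
For example, the following AWS CLI sketch adds the NNE option to an existing option group. The option
group name my-oracle-og is a placeholder, and NATIVE_NETWORK_ENCRYPTION is the option name used
for Oracle NNE; verify both against Oracle native network encryption (p. 2057) before running the command.

# Add the Oracle NNE option to a custom option group (placeholder group name).
aws rds add-option-to-option-group \
    --option-group-name my-oracle-og \
    --options "OptionName=NATIVE_NETWORK_ENCRYPTION" \
    --apply-immediately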

Configuring Kerberos authentication for Amazon RDS for Oracle

You can use Kerberos authentication to authenticate users when they connect to your Amazon RDS
for Oracle DB instance. In this configuration, your DB instance works with AWS Directory Service for
Microsoft Active Directory, also called AWS Managed Microsoft AD. When users authenticate with an
RDS for Oracle DB instance joined to the trusting domain, authentication requests are forwarded to the
directory that you create with AWS Directory Service.

Keeping all of your credentials in the same directory can save you time and effort. You have a centralized
place for storing and managing credentials for multiple database instances. A directory can also improve
your overall security profile.


Region and version availability


Feature availability and support varies across specific versions of each database engine, and across
AWS Regions. For more information on version and Region availability of RDS for Oracle with Kerberos
authentication, see Kerberos authentication (p. 141).
Note
Kerberos authentication isn't supported for DB instance classes that are deprecated for RDS for
Oracle DB instances. For more information, see RDS for Oracle instance classes (p. 1796).

Topics
• Setting up Kerberos authentication for Oracle DB instances (p. 1820)
• Managing a DB instance in a domain (p. 1829)
• Connecting to Oracle with Kerberos authentication (p. 1831)

Setting up Kerberos authentication for Oracle DB instances


Use AWS Directory Service for Microsoft Active Directory, also called AWS Managed Microsoft AD, to set
up Kerberos authentication for an Oracle DB instance. To set up Kerberos authentication, complete the
following steps:

• Step 1: Create a directory using the AWS Managed Microsoft AD (p. 1820)
• Step 2: Create a trust (p. 1824)
• Step 3: Configure IAM permissions for Amazon RDS (p. 1824)
• Step 4: Create and configure users (p. 1826)
• Step 5: Enable cross-VPC traffic between the directory and the DB instance (p. 1826)
• Step 6: Create or modify an Oracle DB instance (p. 1826)
• Step 7: Create Kerberos authentication Oracle logins (p. 1828)
• Step 8: Configure an Oracle client (p. 1829)

Note
During the setup, RDS creates an Oracle database user named managed_service_user@example.com
with the CREATE SESSION privilege, where example.com is your domain name. This user corresponds
to the user that Directory Service
creates inside your Managed Active Directory. Periodically, RDS uses the credentials provided by
the Directory Service to log in to your Oracle database. Afterwards, RDS immediately destroys
the ticket cache.

Step 1: Create a directory using the AWS Managed Microsoft AD


AWS Directory Service creates a fully managed Active Directory in the AWS Cloud. When you create an
AWS Managed Microsoft AD directory, AWS Directory Service creates two domain controllers and Domain
Name System (DNS) servers on your behalf. The directory servers are created in different subnets in a
VPC. This redundancy helps make sure that your directory remains accessible even if a failure occurs.

When you create an AWS Managed Microsoft AD directory, AWS Directory Service performs the following
tasks on your behalf:

• Sets up an Active Directory within the VPC.


• Creates a directory administrator account with the user name Admin and the specified password. You
use this account to manage your directory.


Note
Be sure to save this password. AWS Directory Service doesn't store it. You can reset it, but you
can't retrieve it.
• Creates a security group for the directory controllers.

When you launch an AWS Managed Microsoft AD, AWS creates an Organizational Unit (OU) that contains
all of your directory's objects. This OU has the NetBIOS name that you typed when you created your
directory and is located in the domain root. The domain root is owned and managed by AWS.

The Admin account that was created with your AWS Managed Microsoft AD directory has permissions for
the most common administrative activities for your OU:

• Create, update, or delete users


• Add resources to your domain such as file or print servers, and then assign permissions for those
resources to users in your OU
• Create additional OUs and containers
• Delegate authority
• Restore deleted objects from the Active Directory Recycle Bin
• Run AD and DNS Windows PowerShell modules on the Active Directory Web Service

The Admin account also has rights to perform the following domain-wide activities:

• Manage DNS configurations (add, remove, or update records, zones, and forwarders)
• View DNS event logs
• View security event logs

To create the directory, use the AWS Management Console, the AWS CLI, or the AWS Directory Service
API. Make sure to open the relevant outbound ports on the directory security group so that the directory
can communicate with the Oracle DB instance.

To create a directory with AWS Managed Microsoft AD

1. Sign in to the AWS Management Console and open the AWS Directory Service console at https://
console.aws.amazon.com/directoryservicev2/.
2. In the navigation pane, choose Directories and choose Set up Directory.
3. Choose AWS Managed Microsoft AD. AWS Managed Microsoft AD is the only option that you can
currently use with Amazon RDS.
4. Enter the following information:

Directory DNS name

The fully qualified name for the directory, such as corp.example.com.


Directory NetBIOS name

The short name for the directory, such as CORP.


Directory description

(Optional) A description for the directory.


Admin password

The password for the directory administrator. The directory creation process creates an
administrator account with the user name Admin and this password.


The directory administrator password can't include the word "admin." The password is case-
sensitive and must be 8–64 characters in length. It must also contain at least one character from
three of the following four categories:
• Lowercase letters (a–z)
• Uppercase letters (A–Z)
• Numbers (0–9)
• Non-alphanumeric characters (~!@#$%^&*_-+=`|\(){}[]:;"'<>,.?/)
Confirm password

The administrator password retyped.


5. Choose Next.
6. Enter the following information in the Networking section and then choose Next:

VPC

The VPC for the directory. Create the Oracle DB instance in this same VPC.
Subnets

Subnets for the directory servers. The two subnets must be in different Availability Zones.
7. Review the directory information and make any necessary changes. When the information is correct,
choose Create directory.
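
If you prefer the AWS CLI, the following sketch creates an equivalent directory with the
create-microsoft-ad command. The VPC ID, subnet IDs, and password shown are placeholders; supply
your own values.

# Create an AWS Managed Microsoft AD directory (all values are placeholders).
aws ds create-microsoft-ad \
    --name corp.example.com \
    --short-name CORP \
    --password 'ReplaceWithStrongPassword1' \
    --description "Directory for RDS for Oracle Kerberos authentication" \
    --vpc-settings "VpcId=vpc-1234567a,SubnetIds=subnet-1111aaaa,subnet-2222bbbb"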


It takes several minutes for the directory to be created. When it has been successfully created, the Status
value changes to Active.

To see information about your directory, choose the directory name in the directory listing. Note the
Directory ID value because you need this value when you create or modify your Oracle DB instance.


Step 2: Create a trust


If you plan to use AWS Managed Microsoft AD only, move on to Step 3: Configure IAM permissions for
Amazon RDS (p. 1824).

To get Kerberos authentication using an on-premises or self-hosted Microsoft Active Directory, create a
forest trust or external trust. The trust can be one-way or two-way. For more information about setting
up forest trusts using AWS Directory Service, see When to create a trust relationship in the AWS Directory
Service Administration Guide.

Step 3: Configure IAM permissions for Amazon RDS


To call AWS Directory Service for you, Amazon RDS requires an IAM role that uses the managed IAM
policy AmazonRDSDirectoryServiceAccess. This role allows Amazon RDS to make calls to the AWS
Directory Service.
Note
For the role to allow access, the AWS Security Token Service (AWS STS) endpoint must be
activated in the correct AWS Region for your AWS account. AWS STS endpoints are active
by default in all AWS Regions, and you can use them without any further actions. For more
information, see Activating and deactivating AWS STS in an AWS Region in the IAM User Guide.

Creating an IAM role


When you create a DB instance using the AWS Management Console, and the console user has
the iam:CreateRole permission, the console creates rds-directoryservice-kerberos-access-role
automatically. Otherwise, you must create the IAM role manually. When you create an IAM role
manually, choose Directory Service, and attach the AWS managed policy
AmazonRDSDirectoryServiceAccess to it.

For more information about creating IAM roles for a service, see Creating a role to delegate permissions
to an AWS service in the IAM User Guide.
Note
The IAM role used for Windows Authentication for RDS for Microsoft SQL Server can't be used
for RDS for Oracle.
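
The following AWS CLI sketch shows one way to create the role manually and attach the managed
policy. The trust policy file name is a placeholder, and the policy ARN shown uses the service-role path
that the managed policy currently has; confirm it in the IAM console before relying on it.

# Create the role with a trust policy that allows the required service principals (placeholder file).
aws iam create-role \
    --role-name rds-directoryservice-kerberos-access-role \
    --assume-role-policy-document file://rds-ds-trust-policy.json

# Attach the AWS managed policy to the role.
aws iam attach-role-policy \
    --role-name rds-directoryservice-kerberos-access-role \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonRDSDirectoryServiceAccess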

Creating an IAM trust policy manually


Optionally, you can create resource policies with the required permissions instead of
using the managed IAM policy AmazonRDSDirectoryServiceAccess. Specify both
directoryservice.rds.amazonaws.com and rds.amazonaws.com as principals.

To limit the permissions that Amazon RDS gives another service for a specific resource, we recommend
using the aws:SourceArn and aws:SourceAccount global condition context keys in resource policies.
The most effective way to protect against the confused deputy problem is to use the aws:SourceArn
global condition context key with the full ARN of an Amazon RDS resource. For more information, see
Preventing cross-service confused deputy problems (p. 2640).

The following example shows how you can use the aws:SourceArn and aws:SourceAccount global
condition context keys in Amazon RDS to prevent the confused deputy problem.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "directoryservice.rds.amazonaws.com",
          "rds.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "ArnLike": {
          "aws:SourceArn": "arn:aws:rds:us-east-1:123456789012:db:mydbinstance"
        },
        "StringEquals": {
          "aws:SourceAccount": "123456789012"
        }
      }
    }
  ]
}

The role must also have the following IAM policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ds:DescribeDirectories",
        "ds:AuthorizeApplication",
        "ds:UnauthorizeApplication",
        "ds:GetAuthorizedApplicationDetails"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

Step 4: Create and configure users


You can create users with the Active Directory Users and Computers tool, which is one of the Active
Directory Domain Services and Active Directory Lightweight Directory Services tools. In this case, users
are individual people or entities that have access to your directory.

To create users in an AWS Directory Service directory, you must be connected to a Windows-based
Amazon EC2 instance that is a member of the AWS Directory Service directory. At the same time, you
must be logged in as a user that has privileges to create users. For more information about creating users
in your Microsoft Active Directory, see Manage users and groups in AWS Managed Microsoft AD in the
AWS Directory Service Administration Guide.

Step 5: Enable cross-VPC traffic between the directory and the DB instance
If you plan to locate the directory and the DB instance in the same VPC, skip this step and move on to
Step 6: Create or modify an Oracle DB instance (p. 1826).

If you plan to locate the directory and the DB instance in different AWS accounts or VPCs, configure
cross-VPC traffic using VPC peering or AWS Transit Gateway. The following procedure enables traffic
between VPCs using VPC peering. Follow the instructions in What is VPC peering? in the Amazon Virtual
Private Cloud Peering Guide.

To enable cross-VPC traffic using VPC peering

1. Set up appropriate VPC routing rules to ensure that network traffic can flow both ways.
2. Ensure that the DB instance's security group can receive inbound traffic from the directory's security
group. For more information, see Best practices for AWS Managed Microsoft AD in the AWS
Directory Service Administration Guide.
3. Ensure that there is no network access control list (ACL) rule to block traffic.

If a different AWS account owns the directory, you must share the directory.

To share the directory between AWS accounts

1. Start sharing the directory with the AWS account that the DB instance will be created in by following
the instructions in Tutorial: Sharing your AWS Managed Microsoft AD directory for seamless EC2
Domain-join in the AWS Directory Service Administration Guide.
2. Sign in to the AWS Directory Service console using the account for the DB instance, and ensure that
the domain has the SHARED status before proceeding.
3. While signed into the AWS Directory Service console using the account for the DB instance, note the
Directory ID value. You use this directory ID to join the DB instance to the domain.
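
As a rough sketch, you can also initiate and accept the share with the AWS CLI. The directory ID and
account number are placeholders, and the accept command takes the shared directory ID that the
share-directory call returns.

# In the directory owner account: share the directory with the DB instance account.
aws ds share-directory \
    --directory-id d-1234567890 \
    --share-method HANDSHAKE \
    --share-target "Id=111122223333,Type=ACCOUNT"

# In the DB instance account: accept the share (use the shared directory ID returned above).
aws ds accept-shared-directory \
    --shared-directory-id d-1234567890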

Step 6: Create or modify an Oracle DB instance


Create or modify an Oracle DB instance for use with your directory. You can use the console, CLI, or RDS
API to associate a DB instance with a directory. You can do this in one of the following ways:

• Create a new Oracle DB instance using the console, the create-db-instance CLI command, or the
CreateDBInstance RDS API operation.

For instructions, see Creating an Amazon RDS DB instance (p. 300).


• Modify an existing Oracle DB instance using the console, the modify-db-instance CLI command, or the
ModifyDBInstance RDS API operation.

For instructions, see Modifying an Amazon RDS DB instance (p. 401).


• Restore an Oracle DB instance from a DB snapshot using the console, the restore-db-instance-from-
db-snapshot CLI command, or the RestoreDBInstanceFromDBSnapshot RDS API operation.

For instructions, see Restoring from a DB snapshot (p. 615).


• Restore an Oracle DB instance to a point-in-time using the console, the restore-db-instance-to-point-
in-time CLI command, or the RestoreDBInstanceToPointInTime RDS API operation.

For instructions, see Restoring a DB instance to a specified time (p. 660).

Kerberos authentication is only supported for Oracle DB instances in a VPC. The DB instance can be in
the same VPC as the directory, or in a different VPC. When you create or modify the DB instance, do the
following:

• Provide the domain identifier (d-* identifier) that was generated when you created your directory.
• Provide the name of the IAM role that you created.
• Ensure that the DB instance security group can receive inbound traffic from the directory security
group and send outbound traffic to the directory.

When you use the console to create a DB instance, choose Password and Kerberos authentication in
the Database authentication section. Choose Browse Directory and then select the directory, or choose
Create a new directory.

When you use the console to modify or restore a DB instance, choose the directory in the Kerberos
authentication section, or choose Create a new directory.


When you use the AWS CLI, the following parameters are required for the DB instance to be able to use
the directory that you created:

• For the --domain parameter, use the domain identifier ("d-*" identifier) generated when you created
the directory.
• For the --domain-iam-role-name parameter, use the role you created that uses the managed IAM
policy AmazonRDSDirectoryServiceAccess.

For example, the following CLI command modifies a DB instance to use a directory.

For Linux, macOS, or Unix:

aws rds modify-db-instance \


--db-instance-identifier mydbinstance \
--domain d-ID \
--domain-iam-role-name role-name

For Windows:

aws rds modify-db-instance ^


--db-instance-identifier mydbinstance ^
--domain d-ID ^
--domain-iam-role-name role-name

Important
If you modify a DB instance to enable Kerberos authentication, reboot the DB instance after
making the change.
Note
MANAGED_SERVICE_USER is a service account whose name is randomly generated by Directory
Service for RDS. During the Kerberos authentication setup, RDS for Oracle creates a user with
the same name and assigns it the CREATE SESSION privilege. The Oracle DB user is identified
externally as MANAGED_SERVICE_USER@EXAMPLE.COM, where EXAMPLE.COM is the name of
your domain. Periodically, RDS uses the credentials provided by the Directory Service to log in to
your Oracle database. Afterward, RDS immediately destroys the ticket cache.

Step 7: Create Kerberos authentication Oracle logins


Use the Amazon RDS master user credentials to connect to the Oracle DB instance as you do any
other DB instance. The DB instance is joined to the AWS Managed Microsoft AD domain. Thus, you can
provision Oracle logins and users from the Microsoft Active Directory users and groups in your domain.
To manage database permissions, you grant and revoke standard Oracle permissions to these logins.

To allow a Microsoft Active Directory user to authenticate with Oracle

1. Connect to the Oracle DB instance using your Amazon RDS master user credentials.
2. Create an externally authenticated user in Oracle database.

In the following example, replace KRBUSER@CORP.EXAMPLE.COM with the user name and domain
name.

CREATE USER "KRBUSER@CORP.EXAMPLE.COM" IDENTIFIED EXTERNALLY;
GRANT CREATE SESSION TO "KRBUSER@CORP.EXAMPLE.COM";

Users (both humans and applications) from your domain can now connect to the Oracle DB instance
from a domain joined client machine using Kerberos authentication.


Step 8: Configure an Oracle client


To configure an Oracle client, meet the following requirements:

• Create a configuration file named krb5.conf (Linux) or krb5.ini (Windows) to point to the domain.
Configure the Oracle client to use this configuration file.
• Verify that traffic can flow between the client host and AWS Directory Service over DNS port 53 over
TCP/UDP, Kerberos ports (88 and 464 for managed AWS Directory Service) over TCP, and LDAP port
389 over TCP.
• Verify that traffic can flow between the client host and the DB instance over the database port.

Following is sample content for AWS Managed Microsoft AD.

[libdefaults]
default_realm = EXAMPLE.COM
[realms]
EXAMPLE.COM = {
kdc = example.com
admin_server = example.com
}
[domain_realm]
.example.com = CORP.EXAMPLE.COM
example.com = CORP.EXAMPLE.COM

Following is sample content for on-premises Microsoft AD. In your krb5.conf or krb5.ini file, replace
on-prem-ad-server-name with the name of your on-premises AD server.

[libdefaults]
default_realm = ONPREM.COM
[realms]
AWSAD.COM = {
kdc = awsad.com
admin_server = awsad.com
}
ONPREM.COM = {
kdc = on-prem-ad-server-name
admin_server = on-prem-ad-server-name
}
[domain_realm]
.awsad.com = AWSAD.COM
awsad.com = AWSAD.COM
.onprem.com = ONPREM.COM
onprem.com = ONPREM.COM

Note
After you configure your krb5.ini or krb5.conf file, we recommend that you reboot the server.

The following is sample sqlnet.ora content for a SQL*Plus configuration:

SQLNET.AUTHENTICATION_SERVICES=(KERBEROS5PRE,KERBEROS5)
SQLNET.KERBEROS5_CONF=path_to_krb5.conf_file

For an example of a SQL Developer configuration, see Document 1609359.1 from Oracle Support.

Managing a DB instance in a domain


You can use the console, the CLI, or the RDS API to manage your DB instance and its relationship with
your Microsoft Active Directory. For example, you can associate a Microsoft Active Directory to enable
Kerberos authentication. You can also disassociate a Microsoft Active Directory to disable Kerberos
authentication. You can also move a DB instance from being externally authenticated by one Microsoft
Active Directory to another.

For example, using the CLI, you can do the following:

• To reattempt enabling Kerberos authentication for a failed membership, use the modify-db-instance
CLI command and specify the current membership's directory ID for the --domain option.
• To disable Kerberos authentication on a DB instance, use the modify-db-instance CLI command and
specify none for the --domain option (see the example after this list).
• To move a DB instance from one domain to another, use the modify-db-instance CLI command and
specify the domain identifier of the new domain for the --domain option.
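
For example, a minimal sketch of the second case, disabling Kerberos authentication on a hypothetical
DB instance named mydbinstance, follows.

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --domain none \
    --apply-immediately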

Viewing the status of domain membership


After you create or modify your DB instance, the DB instance becomes a member of the domain. You can
view the status of the domain membership for the DB instance in the console or by running the describe-
db-instances CLI command. The status of the DB instance can be one of the following:

• kerberos-enabled – The DB instance has Kerberos authentication enabled.


• enabling-kerberos – AWS is in the process of enabling Kerberos authentication on this DB instance.
• pending-enable-kerberos – Enabling Kerberos authentication is pending on this DB instance.
• pending-maintenance-enable-kerberos – AWS will attempt to enable Kerberos authentication
on the DB instance during the next scheduled maintenance window.
• pending-disable-kerberos – Disabling Kerberos authentication is pending on this DB instance.
• pending-maintenance-disable-kerberos – AWS will attempt to disable Kerberos authentication
on the DB instance during the next scheduled maintenance window.
• enable-kerberos-failed – A configuration problem has prevented AWS from enabling Kerberos
authentication on the DB instance. Correct the configuration problem before reissuing the command
to modify the DB instance.
• disabling-kerberos – AWS is in the process of disabling Kerberos authentication on this DB
instance.
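
For example, the following AWS CLI sketch returns the domain membership details, including the status
values listed above, for a hypothetical DB instance named mydbinstance.

aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query 'DBInstances[*].DomainMemberships'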

A request to enable Kerberos authentication can fail because of a network connectivity issue or an
incorrect IAM role. If the attempt to enable Kerberos authentication fails when you create or modify a
DB instance, make sure that you're using the correct IAM role. Then modify the DB instance to join the
domain.
Note
Only Kerberos authentication with Amazon RDS for Oracle sends traffic to the domain's DNS
servers. All other DNS requests are treated as outbound network access on your DB instances
running Oracle. For more information about outbound network access with Amazon RDS for
Oracle, see Setting up a custom DNS server (p. 1865).

Force-rotating Kerberos keys


A secret key is shared between AWS Managed Microsoft AD and your Amazon RDS for Oracle DB
instance. This key is rotated automatically every 45 days. You can use the following Amazon RDS
procedure to force the rotation of this key.

SELECT rdsadmin.rdsadmin_kerberos_auth_tasks.rotate_kerberos_keytab AS TASK_ID FROM DUAL;

Note
In a read replica configuration, this procedure is available only on the source DB instance and
not on the read replica.


The SELECT statement returns the ID of the task in a VARCHAR2 data type. You can view the status of
an ongoing task in a bdump file. The bdump files are located in the /rdsdbdata/log/trace directory.
Each bdump file name is in the following format.

dbtask-task-id.log

You can view the result by displaying the task's output file.

SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-task-id.log'));

Replace task-id with the task ID returned by the procedure.


Note
Tasks are executed asynchronously.

Connecting to Oracle with Kerberos authentication


This section assumes that you have set up your Oracle client as described in Step 8: Configure an
Oracle client (p. 1829). To connect to the Oracle DB instance with Kerberos authentication, log in using
the Kerberos authentication type. For example, after launching Oracle SQL Developer, choose Kerberos
Authentication as the authentication type.

To connect to Oracle with Kerberos authentication with SQL*Plus:

1. At a command prompt, run the following command:

kinit username

Replace username with the user name and, at the prompt, enter the password stored in the Microsoft
Active Directory for the user.
2. Open SQL*Plus and connect using the DNS name and port number for the Oracle DB instance.

For more information about connecting to an Oracle DB instance in SQL*Plus, see Connecting to your
DB instance using SQL*Plus (p. 1810).
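
Putting the preceding steps together, the following sketch shows a typical session from a domain-joined
client. The user name, endpoint, port, and SID are placeholders; the slash login works because the
database user is identified externally.

# Obtain a Kerberos ticket for the directory user (placeholder name).
kinit krb_user

# Connect as the externally identified user; no database password is supplied.
sqlplus '/@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=endpoint)(PORT=port))(CONNECT_DATA=(SID=oracle_sid)))'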


Configuring UTL_HTTP access using certificates and an Oracle wallet

Amazon RDS supports outbound network access on your Oracle DB instances. To connect your DB
instance to the network, you can use the following PL/SQL packages:

UTL_HTTP

This package makes HTTP calls from SQL and PL/SQL. You can use it to access data on the Internet
over HTTP. For more information, see UTL_HTTP in the Oracle documentation.
UTL_TCP

This package provides TCP/IP client-side access functionality in PL/SQL. This package is useful to
PL/SQL applications that use Internet protocols and email. For more information, see UTL_TCP in
the Oracle documentation.
UTL_SMTP

This package provides interfaces to the SMTP commands that enable a client to dispatch emails to
an SMTP server. For more information, see UTL_SMTP in the Oracle documentation.

Before configuring your instance for network access, review the following requirements and
considerations:

• To use SMTP with the UTL_MAIL option, see Oracle UTL_MAIL (p. 2099).
• The Domain Name Server (DNS) name of the remote host can be any of the following:
• Publicly resolvable.
• The endpoint of an Amazon RDS DB instance.
• Resolvable through a custom DNS server. For more information, see Setting up a custom DNS
server (p. 1865).
• The private DNS name of an Amazon EC2 instance in the same VPC or a peered VPC. In this case,
make sure that the name is resolvable through a custom DNS server. Alternatively, to use the DNS
provided by Amazon, you can enable the enableDnsSupport attribute in the VPC settings and
enable DNS resolution support for the VPC peering connection. For more information, see DNS
support in your VPC and Modifying your VPC peering connection.
• To connect securely to remote SSL/TLS resources, we recommend that you create and upload
customized Oracle wallets. By using the Amazon S3 integration with Amazon RDS for Oracle feature,
you can download a wallet from Amazon S3 into Oracle DB instances. For information about
Amazon S3 integration for Oracle, see Amazon S3 integration (p. 1992).
• You can establish database links between Oracle DB instances over an SSL/TLS endpoint if the Oracle
SSL option is configured for each instance. No further configuration is required. For more information,
see Oracle Secure Sockets Layer (p. 2068).

By completing the following tasks, you can configure UTL_HTTP.REQUEST to work with websites that
require client authentication certificates during the SSL handshake. You can also configure password
authentication for UTL_HTTP access to websites by modifying the Oracle wallet generation commands
and the DBMS_NETWORK_ACL_ADMIN.APPEND_WALLET_ACE procedure. For more information, see
DBMS_NETWORK_ACL_ADMIN in the Oracle Database documentation.
Note
You can adapt the following tasks for UTL_SMTP, which enables you to send emails over SSL/
TLS (including Amazon Simple Email Service).

Topics


• Step 1: Get the root certificate for a website (p. 1833)


• Step 2: Create an Oracle wallet (p. 1833)
• Step 3: Download your Oracle wallet to your RDS for Oracle instance (p. 1834)
• Step 4: Grant user permissions for the Oracle wallet (p. 1835)
• Step 5: Configure access to a website from your DB instance (p. 1836)
• Step 6: Test connections from your DB instance to a website (p. 1838)

Step 1: Get the root certificate for a website


For the RDS for Oracle instance to make secure connections to a website, add the root CA certificate
that signed the website's certificate to the Oracle wallet. The DB instance uses this root certificate to
validate the website certificate during the SSL/TLS handshake.

You can get the root certificate in various ways. For example, you can do the following:

1. Use a web browser to visit the website secured by the certificate.


2. Download the root certificate that was used for signing.

For AWS services, root certificates typically reside in the Amazon trust services repository.
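
For example, the following openssl sketch prints the certificate chain that a website presents, which
helps you identify the issuing CA. The host name is the placeholder used elsewhere in this section, and
the root certificate itself is usually downloaded from the CA's own repository rather than from the web
server.

# Print the certificate chain presented by the website (placeholder host name).
openssl s_client -connect secret.encrypted-website.com:443 -showcerts </dev/null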

Step 2: Create an Oracle wallet


Create an Oracle wallet that contains both the web server certificates and the client authentication
certificates. The RDS Oracle instance uses the web server certificate to establish a secure connection to
the website. The website needs the client certificate to authenticate the Oracle database user.

You might want to configure secure connections without using client certificates for authentication. In
this case, you can skip the Java keystore steps in the following procedure.

To create an Oracle wallet

1. Place the root and client certificates in a single directory, and then change into this directory.
2. Convert the .p12 client certificate to the Java keystore.
Note
If you're not using client certificates for authentication, you can skip this step.

The following example converts the client certificate named client_certificate.p12 to the
Java keystore named client_keystore.jks. The keystore is then included in the Oracle wallet.
The keystore password is P12PASSWORD.

orapki wallet pkcs12_to_jks -wallet ./client_certificate.p12 -jksKeyStoreLoc ./client_keystore.jks -jksKeyStorepwd P12PASSWORD

3. Create a directory for your Oracle wallet that is different from the certificate directory.

The following example creates the directory /tmp/wallet.

mkdir -p /tmp/wallet

4. Create an Oracle wallet in your wallet directory.

The following example sets the Oracle wallet password to P12PASSWORD, which is the same
password used by the Java keystore in a previous step. Using the same password is convenient, but
not necessary. The -auto_login parameter turns on the automatic login feature, so that you don’t
need to specify a password every time you want to access it.


Note
Specify a password other than the prompt shown here as a security best practice.

orapki wallet create -wallet /tmp/wallet -pwd P12PASSWORD -auto_login

5. Add the Java keystore to your Oracle wallet.


Note
If you're not using client certificates for authentication, you can skip this step.

The following example adds the keystore client_keystore.jks to the Oracle wallet named /
tmp/wallet. In this example, you specify the same password for the Java keystore and the Oracle
wallet.

orapki wallet jks_to_pkcs12 -wallet /tmp/wallet -pwd P12PASSWORD -keystore ./client_keystore.jks -jkspwd P12PASSWORD

6. Add the root certificate for your target website to the Oracle wallet.

The following example adds a certificate named Root_CA.cer.

orapki wallet add -wallet /tmp/wallet -trusted_cert -cert ./Root_CA.cer -pwd P12PASSWORD

7. Add any intermediate certificates.

The following example adds a certificate named Intermediate.cer. Repeat this step as many
times as needed to load all intermediate certificates.

orapki wallet add -wallet /tmp/wallet -trusted_cert -cert ./Intermediate.cer -pwd P12PASSWORD

8. Confirm that your newly created Oracle wallet has the required certificates.

orapki wallet display -wallet /tmp/wallet -pwd P12PASSWORD

Step 3: Download your Oracle wallet to your RDS for Oracle instance

In this step, you upload your Oracle wallet to Amazon S3, and then download the wallet from Amazon
S3 to your RDS for Oracle instance.

To download your Oracle wallet to your RDS for Oracle DB instance

1. Complete the prerequisites for Amazon S3 integration with Oracle, and add the S3_INTEGRATION
option to your Oracle DB instance. Ensure that the IAM role for the option has access to the Amazon
S3 bucket you are using.

For more information, see Amazon S3 integration (p. 1992).


2. Log in to your DB instance as the master user, and then create an Oracle directory to hold the Oracle
wallet.

The following example creates an Oracle directory named WALLET_DIR.

EXEC rdsadmin.rdsadmin_util.create_directory('WALLET_DIR');


For more information, see Creating and dropping directories in the main data storage
space (p. 1926).
3. Upload the Oracle wallet to your Amazon S3 bucket.

You can use any supported upload technique, such as the AWS CLI command shown after this procedure.


4. If you're re-uploading an Oracle wallet, delete the existing wallet. Otherwise, skip to the next step.

The following example removes the existing wallet, which is named cwallet.sso.

EXEC UTL_FILE.FREMOVE ('WALLET_DIR','cwallet.sso');

5. Download the Oracle wallet from your Amazon S3 bucket to the Oracle DB instance.

The following example downloads the wallet named cwallet.sso from the Amazon S3 bucket
named my_s3_bucket to the DB instance directory named WALLET_DIR.

SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
p_bucket_name => 'my_s3_bucket',
p_s3_prefix => 'cwallet.sso',
p_directory_name => 'WALLET_DIR')
AS TASK_ID FROM DUAL;

6. (Optional) Download a password-protected Oracle wallet.

Download this wallet only if you want to require a password for every use of the wallet. The
following example downloads password-protected wallet ewallet.p12.

SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
p_bucket_name => 'my_s3_bucket',
p_s3_prefix => 'ewallet.p12',
p_directory_name => 'WALLET_DIR')
AS TASK_ID FROM DUAL;

7. Check the status of your DB task.

Substitute the task ID returned from the preceding steps for dbtask-1234567890123-4567.log
in the following example.

SELECT TEXT FROM
  TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-1234567890123-4567.log'));

8. Check the contents of the directory that you're using to store the Oracle wallet.

SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => 'WALLET_DIR'));

For more information, see Listing files in a DB instance directory (p. 1927).
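
For step 3 of the preceding procedure, a minimal AWS CLI sketch of the upload follows. The local wallet
path and the bucket name match the placeholders used in this section.

# Upload the auto-login wallet file to the S3 bucket used for Amazon S3 integration.
aws s3 cp /tmp/wallet/cwallet.sso s3://my_s3_bucket/cwallet.sso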

Step 4: Grant user permissions for the Oracle wallet


You can either create a new database user or configure an existing user. In either case, you must
configure the user to access the Oracle wallet for secure connections and client authentication using
certificates.

To grant user permissions for the Oracle wallet

1. Log in to your RDS for Oracle DB instance as the master user.

2. If you don't want to configure an existing database user, create a new user. Otherwise, skip to the
next step.

The following example creates a database user named my-user.

CREATE USER my-user IDENTIFIED BY my-user-pwd;


GRANT CONNECT TO my-user;

3. Grant permission to your database user on the directory containing your Oracle wallet.

The following example grants read access to user my-user on directory WALLET_DIR.

GRANT READ ON DIRECTORY WALLET_DIR TO my-user;

4. Grant permission to your database user to use the UTL_HTTP package.

The following PL/SQL program grants UTL_HTTP access to user my-user.

BEGIN
rdsadmin.rdsadmin_util.grant_sys_object('UTL_HTTP', UPPER('my-user'));
END;
/

5. Grant permission to your database user to use the UTL_FILE package.

The following PL/SQL program grants UTL_FILE access to user my-user.

BEGIN
rdsadmin.rdsadmin_util.grant_sys_object('UTL_FILE', UPPER('my-user'));
END;
/

Step 5: Configure access to a website from your DB instance


In this step, you configure your Oracle database user so that it can connect to your target website
using UTL_HTTP, your uploaded Oracle Wallet, and the client certificate. For more information, see
Configuring Access Control to an Oracle Wallet in the Oracle Database documentation.

To configure access to a website from your RDS for Oracle DB instance

1. Log in to your RDS for Oracle DB instance as the master user.


2. Create a Host Access Control Entry (ACE) for your user and the target website on a secure port.

The following example configures my-user to access secret.encrypted-website.com on secure port 443.

BEGIN
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
host => 'secret.encrypted-website.com',
lower_port => 443,
upper_port => 443,
ace => xs$ace_type(privilege_list => xs$name_list('http'),
principal_name => 'my-user',
principal_type => xs_acl.ptype_db));
END;
/


For more information, see Configuring Access Control for External Network Services in the Oracle
Database documentation.
3. (Optional) Create an ACE for your user and target website on the standard port.

You might need to use the standard port if some web pages are served from the standard web
server port (80) instead of the secure port (443).

BEGIN
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
host => 'secret.encrypted-website.com',
lower_port => 80,
upper_port => 80,
ace => xs$ace_type(privilege_list => xs$name_list('http'),
principal_name => 'my-user',
principal_type => xs_acl.ptype_db));
END;
/

4. Confirm that the access control entries exist.

SET LINESIZE 150
COLUMN HOST FORMAT A40
COLUMN ACL FORMAT A50

SELECT HOST, LOWER_PORT, UPPER_PORT, ACL
FROM DBA_NETWORK_ACLS
ORDER BY HOST;

5. Grant permission to your database user to use the UTL_HTTP package.

The following PL/SQL program grants UTL_HTTP access to user my-user.

BEGIN
rdsadmin.rdsadmin_util.grant_sys_object('UTL_HTTP', UPPER('my-user'));
END;
/

6. Confirm that related access control lists exist.

SET LINESIZE 150
COLUMN ACL FORMAT A50
COLUMN PRINCIPAL FORMAT A20
COLUMN PRIVILEGE FORMAT A10

SELECT ACL, PRINCIPAL, PRIVILEGE, IS_GRANT,
       TO_CHAR(START_DATE, 'DD-MON-YYYY') AS START_DATE,
       TO_CHAR(END_DATE, 'DD-MON-YYYY') AS END_DATE
FROM DBA_NETWORK_ACL_PRIVILEGES
ORDER BY ACL, PRINCIPAL, PRIVILEGE;

7. Grant permission to your database user to use certificates for client authentication and your Oracle
wallet for connections.
Note
If you're not using client certificates for authentication, you can skip this step.

DECLARE
  l_wallet_path all_directories.directory_path%type;
BEGIN
  SELECT DIRECTORY_PATH
    INTO l_wallet_path
    FROM ALL_DIRECTORIES
   WHERE UPPER(DIRECTORY_NAME)='WALLET_DIR';
  DBMS_NETWORK_ACL_ADMIN.APPEND_WALLET_ACE(
    wallet_path => 'file:/' || l_wallet_path,
    ace         => xs$ace_type(privilege_list => xs$name_list('use_client_certificates'),
                               principal_name => 'my-user',
                               principal_type => xs_acl.ptype_db));
END;
/

Step 6: Test connections from your DB instance to a website


In this step, you test whether your database user can connect to the website using UTL_HTTP, your
uploaded Oracle wallet, and the client certificate.

To test connections from your RDS for Oracle DB instance to a website

1. Log in to your RDS for Oracle DB instance as a database user with UTL_HTTP permissions.
2. Confirm that a connection to your target website can resolve the host address.

The following example gets the host address from secret.encrypted-website.com.

SELECT UTL_INADDR.GET_HOST_ADDRESS(host => 'secret.encrypted-website.com') FROM DUAL;

3. Test a failed connection.

The following query fails because UTL_HTTP requires the location of the Oracle wallet with the
certificates.

SELECT UTL_HTTP.REQUEST('secret.encrypted-website.com') FROM DUAL;

4. Test website access by using UTL_HTTP.SET_WALLET and selecting from DUAL.

DECLARE
l_wallet_path all_directories.directory_path%type;
BEGIN
SELECT DIRECTORY_PATH
INTO l_wallet_path
FROM ALL_DIRECTORIES
WHERE UPPER(DIRECTORY_NAME)='WALLET_DIR';
UTL_HTTP.SET_WALLET('file:/' || l_wallet_path);
END;
/

SELECT UTL_HTTP.REQUEST('secret.encrypted-website.com') FROM DUAL;

5. (Optional) Test website access by storing your query in a variable and using EXECUTE IMMEDIATE.

DECLARE
l_wallet_path all_directories.directory_path%type;
v_webpage_sql VARCHAR2(1000);
v_results VARCHAR2(32767);
BEGIN
SELECT DIRECTORY_PATH


INTO l_wallet_path
FROM ALL_DIRECTORIES
WHERE UPPER(DIRECTORY_NAME)='WALLET_DIR';
v_webpage_sql := 'SELECT UTL_HTTP.REQUEST(''secret.encrypted-website.com'', '''',
''file:/' ||l_wallet_path||''') FROM DUAL';
DBMS_OUTPUT.PUT_LINE(v_webpage_sql);
EXECUTE IMMEDIATE v_webpage_sql INTO v_results;
DBMS_OUTPUT.PUT_LINE(v_results);
END;
/

6. (Optional) Find the file system location of your Oracle wallet directory.

SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => 'WALLET_DIR'));

Use the output from the previous command to make an HTTP request. For example, if the directory
is rdsdbdata/userdirs/01, run the following query.

SELECT UTL_HTTP.REQUEST('https://secret.encrypted-website.com/', '', 'file://rdsdbdata/userdirs/01')
FROM DUAL;


Working with CDBs in RDS for Oracle


In the Oracle multitenant architecture, a container database (CDB) can include customer-created
pluggable databases (PDBs). For more information about CDBs, see Introduction to the Multitenant
Architecture in the Oracle Database documentation.

Topics
• Overview of RDS for Oracle CDBs (p. 1840)
• Configuring an RDS for Oracle CDB (p. 1841)
• Backing up and restoring a CDB (p. 1844)
• Converting an RDS for Oracle non-CDB to a CDB (p. 1844)
• Upgrading your CDB (p. 1846)

Overview of RDS for Oracle CDBs


You can create an RDS for Oracle DB instance as a container database (CDB) when you run Oracle
Database 19c or higher. A CDB differs from a non-CDB because it can contain pluggable databases
(PDBs). A PDB is a portable collection of schemas and objects that appears to an application as a
separate database.

Starting with Oracle Database 21c, all databases are CDBs. If your DB instance runs Oracle Database 19c,
you can create either a CDB or a non-CDB. A non-CDB uses the traditional Oracle database architecture
and can't contain PDBs. You can convert an Oracle Database 19c non-CDB to a CDB, but you can't
convert a CDB to a non-CDB. You can only upgrade a CDB to a CDB.

Topics
• Single-tenant configuration (p. 1840)
• Creation and conversion options in a CDB (p. 1840)
• User accounts and privileges in a CDB (p. 1841)
• Parameter group families in a CDB (p. 1841)
• PDB portability in a CDB (p. 1841)

Single-tenant configuration
RDS for Oracle supports the single-tenant configuration of the Oracle multitenant architecture. This
means that an RDS for Oracle DB instance can contain only one PDB. You name the PDB when you create
your DB instance. The CDB name defaults to RDSCDB and can't be changed.

In RDS for Oracle, your client application interacts with the PDB rather than the CDB. Your experience
with a PDB is mostly identical to your experience with a non-CDB. You use the same Amazon RDS APIs in
the single-tenant configuration as you do in the non-CDB architecture. You can't access the CDB itself.

Creation and conversion options in a CDB


Although Oracle Database 21c supports only CDBs, Oracle Database 19c supports both CDBs and non-
CDBs. The following table shows the different options for creating and converting CDBs and non-CDBs.

Oracle Database 21c
  Database creation options: CDB only
  Architecture conversion options: N/A
  Major version upgrade targets: N/A

Oracle Database 19c
  Database creation options: CDB or non-CDB
  Architecture conversion options: Non-CDB to CDB (April 2021 RU or higher)
  Major version upgrade targets: 21c CDB (from 19c CDB only)

Oracle Database 12c (desupported)
  Database creation options: Non-CDB only
  Architecture conversion options: N/A
  Major version upgrade targets: 19c non-CDB

As shown in the preceding table, you can't directly upgrade a non-CDB to a CDB in a new major version.
But you can convert an Oracle Database 19c non-CDB to an Oracle Database 19c CDB, and then upgrade
the Oracle Database 19c CDB to an Oracle Database 21c CDB. For more information, see Converting an
RDS for Oracle non-CDB to a CDB (p. 1844).

User accounts and privileges in a CDB


In the Oracle multitenant architecture, all user accounts are either common users or local users. A CDB
common user is a database user whose single identity and password are known in the CDB root and in
every existing and future PDB. In contrast, a local user exists only in a single PDB.

The RDS master user is a local user account in the PDB, which you name when you create your DB
instance. If you create new user accounts, these users will also be local users residing in the PDB. You
can't use any user accounts to create new PDBs or modify the state of the existing PDB.

The rdsadmin user is a common user account. You can run RDS for Oracle packages that exist in this
account, but you can't log in as rdsadmin. For more information, see About Common Users and Local
Users in the Oracle documentation.

Parameter group families in a CDB


CDBs have their own parameter group families and default parameter values. The CDB parameter group
families are as follows:

• oracle-ee-cdb-21
• oracle-se2-cdb-21
• oracle-ee-cdb-19
• oracle-se2-cdb-19

You specify parameters at the CDB level rather than the PDB level. The PDB inherits parameter
settings from the CDB. For more information about setting parameters, see Working with parameter
groups (p. 347). For best practices, see Working with DB parameter groups (p. 297).

PDB portability in a CDB


Your CDB can contain only a single PDB. You can't unplug this PDB or plug in a different PDB. To move
data into or out of your CDB, use the same techniques as for a non-CDB. For more information about
migrating data, see Importing data into Oracle on Amazon RDS (p. 1947).

Configuring an RDS for Oracle CDB


Configuring a CDB is similar to configuring a non-CDB.

Topics
• Creating an RDS for Oracle CDB instance (p. 1842)
• Connecting to a PDB in your RDS for Oracle CDB (p. 1841)


Creating an RDS for Oracle CDB instance


In RDS for Oracle, creating a CDB is almost identical to creating a non-CDB. The difference is that you
choose the Multitenant architecture when creating your DB instance. To create a CDB, use the AWS
Management Console, the AWS CLI, or the RDS API.

Console

To create a CDB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region in which you want to
create the CDB instance.
3. In the navigation pane, choose Databases.
4. Choose Create database.
5. In Choose a database creation method, select Standard Create.
6. In Engine options, choose Oracle.
7. For Database management type, choose Amazon RDS.
8. For Architecture settings, choose Multitenant architecture.
9. Choose the settings that you want based on the options listed in Settings for DB instances (p. 308).
Note the following:

• For Master username, enter the name for a local user in your PDB. You can't use the master
username to log in to the CDB root.
• For Initial database name, enter the name of your PDB. You can't name the CDB, which has the
default name RDSCDB.
10. Choose Create database.

AWS CLI

To create a DB instance by using the AWS CLI, call the create-db-instance command with the following
parameters:

• --db-instance-identifier
• --db-instance-class
• --engine { oracle-ee-cdb | oracle-se2-cdb }
• --master-username
• --master-user-password
• --allocated-storage
• --backup-retention-period

For information about each setting, see Settings for DB instances (p. 308).

The following example creates an RDS for Oracle DB instance named my-cdb-inst. The PDB is named
mypdb.

Example

For Linux, macOS, or Unix:

aws rds create-db-instance \


--engine oracle-ee-cdb \
--db-instance-identifier my-cdb-inst \
--db-name mypdb \
--allocated-storage 250 \
--db-instance-class db.t3.large \
--master-username pdb_admin \
--master-user-password masteruserpassword \
--backup-retention-period 3

For Windows:

aws rds create-db-instance ^


--engine oracle-ee-cdb ^
--db-instance-identifier my-cdb-inst ^
--db-name mypdb ^
--allocated-storage 250 ^
--db-instance-class db.t3.large ^
--master-username pdb_admin ^
--master-user-password masteruserpassword ^
--backup-retention-period 3

Note
Specify a password other than the prompt shown here as a security best practice.

This command produces output similar to the following.

{
    "DBInstance": {
        "DBInstanceIdentifier": "my-cdb-inst",
        "DBInstanceClass": "db.t3.large",
        "Engine": "oracle-ee-cdb",
        "DBInstanceStatus": "creating",
        "MasterUsername": "pdb_admin",
        "DBName": "MYPDB",
        "AllocatedStorage": 250,
        "PreferredBackupWindow": "04:59-05:29",
        "BackupRetentionPeriod": 3,
        "DBSecurityGroups": [],
        "VpcSecurityGroups": [
            {
                "VpcSecurityGroupId": "sg-0a2bcd3e",
                "Status": "active"
            }
        ],
        "DBParameterGroups": [
            {
                "DBParameterGroupName": "default.oracle-ee-cdb-19",
                "ParameterApplyStatus": "in-sync"
            }
        ],
        "DBSubnetGroup": {
            "DBSubnetGroupName": "default",
            "DBSubnetGroupDescription": "default",
            "VpcId": "vpc-1234567a",
            "SubnetGroupStatus": "Complete",
            ...

RDS API

To create a DB instance by using the Amazon RDS API, call the CreateDBInstance operation.

For information about each setting, see Settings for DB instances (p. 308).


Connecting to a PDB in your RDS for Oracle CDB


You can use a utility like SQL*Plus to connect to a PDB. To download Oracle Instant Client, which includes
a standalone version of SQL*Plus, see Oracle Instant Client Downloads.

To connect SQL*Plus to your PDB, you need the following information:

• Database user name and password


• Endpoint for your DB instance
• Port number

For information about finding the preceding information, see Finding the endpoint of your RDS for
Oracle DB instance (p. 1806).

Example To connect to your PDB using SQL*Plus


In the following examples, substitute your master user for master_user_name. Also, substitute the
endpoint for your DB instance, and then include the port number and the Oracle SID. The SID value is
the name of the PDB that you specified when you created your DB instance, and not the DB instance
identifier.

For Linux, macOS, or Unix:

sqlplus 'master_user_name@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=endpoint)(PORT=port))
(CONNECT_DATA=(SID=pdb_name)))'

For Windows:

sqlplus master_user_name@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=endpoint)(PORT=port))
(CONNECT_DATA=(SID=pdb_name)))

You should see output similar to the following.

SQL*Plus: Release 19.0.0.0.0 Production on Mon Aug 21 09:42:20 2021

After you enter the password for the user, the SQL prompt appears.

SQL>

Note
The shorter format connection string (Easy connect or EZCONNECT), such as sqlplus
username/password@LONGER-THAN-63-CHARS-RDS-ENDPOINT-HERE:1521/database-
identifier, might encounter a maximum character limit and should not be used to connect.

Backing up and restoring a CDB


DB snapshots work the same way in the multitenant and non-multitenant architecture. The only
difference is that when you restore a DB snapshot, you can only rename the PDB, not the CDB. The CDB
is always named RDSCDB. For more information, see Oracle Database considerations (p. 616).
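
For example, the following AWS CLI sketch restores a snapshot of a CDB and names the PDB by using
the --db-name parameter. The identifiers are placeholders, and the parameter applies only to engines,
such as RDS for Oracle, that accept a database name on restore.

aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier my-cdb-restored \
    --db-snapshot-identifier my-cdb-snapshot \
    --db-name NEWPDB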

Converting an RDS for Oracle non-CDB to a CDB


You can change the architecture of an Oracle database from the traditional non-CDB architecture to
the multitenant architecture. When you upgrade your database engine version, you can't change the
database architecture in the same operation. Therefore, to upgrade an Oracle Database 19c non-CDB to
an Oracle Database 21c CDB, you first need to convert the non-CDB to a CDB, and then upgrade the 19c
CDB to a 21c CDB.

The non-CDB conversion operation has the following requirements:

• Make sure that you specify oracle-ee-cdb or oracle-se2-cdb for the engine type. These are the
only supported values.
• Make sure that your DB engine runs Oracle Database 19c with an April 2021 or later RU.

The operation has the following limitations:

• You can't convert a CDB to a non-CDB. You can only convert a non-CDB to a CDB.
• You can't convert a primary or replica database that has Oracle Data Guard enabled.
• You can't upgrade the DB engine version and convert a non-CDB to a CDB in the same operation.
• The considerations for option and parameter groups are the same as for upgrading the DB engine. For
more information, see Considerations for Oracle DB upgrades (p. 2108).

Console
To convert a non-CDB to a CDB

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the upper-right corner of the Amazon RDS console, choose the AWS Region where your DB
instance resides.
3. In the navigation pane, choose Databases, and then choose the non-CDB instance that you want to
convert to a CDB instance.
4. Choose Modify.
5. For Architecture settings, select Multitenant architecture.
6. (Optional) For DB parameter group, choose a new parameter group for your CDB instance. The
same parameter group considerations apply when converting a DB instance as when upgrading a DB
instance. For more information, see Parameter group considerations (p. 2109).
7. (Optional) For Option group, choose a new option group for your CDB instance. The same option
group considerations apply when converting a DB instance as when upgrading a DB instance. For
more information, see Option group considerations (p. 2109).
8. When all the changes are as you want them, choose Continue and check the summary of
modifications.
9. (Optional) Choose Apply immediately to apply the changes immediately. Choosing this option
can cause downtime in some cases. For more information, see Using the Apply Immediately
setting (p. 402).
10. On the confirmation page, review your changes. If they are correct, choose Modify DB instance.

Or choose Back to edit your changes or Cancel to cancel your changes.

AWS CLI
To convert the non-CDB on your DB instance to a CDB, set --engine to oracle-ee-cdb or oracle-se2-cdb in the AWS CLI command modify-db-instance.

The following example converts the DB instance named my-non-cdb and specifies a custom option
group and parameter group.


Example

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier my-non-cdb \
    --engine oracle-ee-cdb \
    --option-group-name custom-option-group \
    --db-parameter-group-name custom-parameter-group

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier my-non-cdb ^
    --engine oracle-ee-cdb ^
    --option-group-name custom-option-group ^
    --db-parameter-group-name custom-parameter-group
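After the conversion completes, you can confirm that the engine value changed. The following AWS CLI query is one way to check; the instance identifier is a placeholder. The Engine field should report oracle-ee-cdb or oracle-se2-cdb.

aws rds describe-db-instances \
    --db-instance-identifier my-non-cdb \
    --query 'DBInstances[0].[Engine,EngineVersion,DBInstanceStatus]'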

RDS API
To convert a non-CDB to a CDB, specify Engine in the RDS API operation ModifyDBInstance.

Upgrading your CDB


You can upgrade a CDB to a different Oracle Database release. For example, you can upgrade an Oracle
Database 19c CDB to an Oracle Database 21c CDB. You can't change the database architecture during an
upgrade. Thus, you can't upgrade a non-CDB to a CDB or upgrade a CDB to a non-CDB.

The procedure for upgrading a CDB to a CDB is the same as for upgrading a non-CDB to a non-CDB. For
more information, see Upgrading the RDS for Oracle DB engine (p. 2103).
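For example, the following AWS CLI sketch upgrades a CDB instance to a later release. The instance identifier and the target engine version are placeholders; run describe-db-engine-versions first to find the versions that are actually available for your engine.

aws rds describe-db-engine-versions --engine oracle-ee-cdb \
    --query 'DBEngineVersions[].EngineVersion'

aws rds modify-db-instance \
    --db-instance-identifier my-cdb \
    --engine-version replace-with-target-version \
    --allow-major-version-upgrade \
    --apply-immediately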


Administering your Oracle DB instance


Following are the common management tasks that you perform with an Amazon RDS DB instance. Some
tasks are the same for all RDS DB instances. Other tasks are specific to RDS for Oracle.

The following tasks are common to all RDS databases, but Oracle has special considerations. For
example, you connect to an Oracle database by using Oracle clients such as SQL*Plus and SQL Developer.

Task area: Instance classes, storage, and PIOPS
If you are creating a production instance, learn how instance classes, storage types, and Provisioned IOPS work in Amazon RDS.
Relevant documentation: RDS for Oracle instance classes (p. 1796); Amazon RDS storage types (p. 101)

Task area: Multi-AZ deployments
A production DB instance should use Multi-AZ deployments. Multi-AZ deployments provide increased availability, data durability, and fault tolerance for DB instances.
Relevant documentation: Configuring and managing a Multi-AZ deployment (p. 492)

Task area: Amazon VPC
If your AWS account has a default virtual private cloud (VPC), then your DB instance is automatically created inside the default VPC. If your account doesn't have a default VPC, and you want the DB instance in a VPC, create the VPC and subnet groups before you create the instance.
Relevant documentation: Working with a DB instance in a VPC (p. 2688)

Task area: Security groups
By default, DB instances use a firewall that prevents access. Make sure that you create a security group with the correct IP addresses and network configuration to access the DB instance.
Relevant documentation: Controlling access with security groups (p. 2680)

Task area: Parameter groups
If your DB instance is going to require specific database parameters, create a parameter group before you create the DB instance.
Relevant documentation: Working with parameter groups (p. 347)

Task area: Option groups
If your DB instance requires specific database options, create an option group before you create the DB instance.
Relevant documentation: Adding options to Oracle DB instances (p. 1990)

Task area: Connecting to your DB instance
After creating a security group and associating it to a DB instance, you can connect to the DB instance using any standard SQL client application such as Oracle SQL*Plus.
Relevant documentation: Connecting to your RDS for Oracle DB instance (p. 1806)

Task area: Backup and restore
You can configure your DB instance to take automated backups, or take manual snapshots, and then restore instances from the backups or snapshots.
Relevant documentation: Backing up and restoring (p. 590)

Task area: Monitoring
You can monitor an Oracle DB instance by using CloudWatch Amazon RDS metrics, events, and enhanced monitoring.
Relevant documentation: Viewing metrics in the Amazon RDS console (p. 696); Viewing Amazon RDS events (p. 852)

Task area: Log files
You can access the log files for your Oracle DB instance.
Relevant documentation: Monitoring Amazon RDS log files (p. 895)

Following, you can find descriptions of Amazon RDS–specific implementations of common DBA tasks
for RDS for Oracle. To deliver a managed service experience, Amazon RDS doesn't provide shell access to
DB instances. Also, RDS restricts access to certain system procedures and tables that require advanced
privileges. In many of the tasks, you run the rdsadmin package, which is an Amazon RDS–specific tool
that enables you to administer your database.
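If you want to see which rdsadmin packages are visible to your master user before you start, you can query the data dictionary. The following query is a general-purpose sketch rather than an RDS-specific interface; the exact list that it returns depends on your engine version.

SELECT object_name
FROM   all_objects
WHERE  owner = 'RDSADMIN'
AND    object_type = 'PACKAGE'
ORDER  BY object_name;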

The following are common DBA tasks for DB instances running Oracle:

• System tasks (p. 1855)

  Disconnecting a session (p. 1855)
  Amazon RDS method: rdsadmin.rdsadmin_util.disconnect
  Oracle method: alter system disconnect session

  Terminating a session (p. 1856)
  Amazon RDS method: rdsadmin.rdsadmin_util.kill
  Oracle method: alter system kill session

  Canceling a SQL statement in a session (p. 1857)
  Amazon RDS method: rdsadmin.rdsadmin_util.cancel
  Oracle method: alter system cancel sql

  Enabling and disabling restricted sessions (p. 1858)
  Amazon RDS method: rdsadmin.rdsadmin_util.restricted_session
  Oracle method: alter system enable restricted session

  Flushing the shared pool (p. 1858)
  Amazon RDS method: rdsadmin.rdsadmin_util.flush_shared_pool
  Oracle method: alter system flush shared_pool

  Flushing the buffer cache (p. 1859)
  Amazon RDS method: rdsadmin.rdsadmin_util.flush_buffer_cache
  Oracle method: alter system flush buffer_cache

  Granting SELECT or EXECUTE privileges to SYS objects (p. 1859)
  Amazon RDS method: rdsadmin.rdsadmin_util.grant_sys_object
  Oracle method: grant

  Revoking SELECT or EXECUTE privileges on SYS objects (p. 1861)
  Amazon RDS method: rdsadmin.rdsadmin_util.revoke_sys_object
  Oracle method: revoke

  Granting privileges to non-master users (p. 1861)
  Amazon RDS method: grant
  Oracle method: grant

  Creating custom functions to verify passwords (p. 1862)
  Amazon RDS method: rdsadmin.rdsadmin_password_verify.create_verify_function
  Amazon RDS method: rdsadmin.rdsadmin_password_verify.create_passthrough_verify_fcn

  Setting up a custom DNS server (p. 1865)
  —

  Listing allowed system diagnostic events (p. 1867)
  Amazon RDS method: rdsadmin.rdsadmin_util.list_allowed_system_events
  Oracle method: —

  Setting system diagnostic events (p. 1867)
  Amazon RDS method: rdsadmin.rdsadmin_util.set_system_event
  Oracle method: ALTER SYSTEM SET EVENTS 'set_event_clause'

  Listing system diagnostic events that are set (p. 1868)
  Amazon RDS method: rdsadmin.rdsadmin_util.list_set_system_events
  Oracle method: ALTER SESSION SET EVENTS 'IMMEDIATE EVENTDUMP(SYSTEM)'

  Unsetting system diagnostic events (p. 1868)
  Amazon RDS method: rdsadmin.rdsadmin_util.unset_system_event
  Oracle method: ALTER SYSTEM SET EVENTS 'unset_event_clause'

• Database tasks (p. 1869)

  Changing the global name of a database (p. 1870)
  Amazon RDS method: rdsadmin.rdsadmin_util.rename_global_name
  Oracle method: alter database rename

  Creating and sizing tablespaces (p. 1870)
  Amazon RDS method: create tablespace
  Oracle method: alter database

  Setting the default tablespace (p. 1871)
  Amazon RDS method: rdsadmin.rdsadmin_util.alter_default_tablespace
  Oracle method: alter database default tablespace

  Setting the default temporary tablespace (p. 1871)
  Amazon RDS method: rdsadmin.rdsadmin_util.alter_default_temp_tablespace
  Oracle method: alter database default temporary tablespace

  Creating a temporary tablespace on the instance store (p. 1871)
  Amazon RDS method: rdsadmin.rdsadmin_util.create_inst_store_tmp_tblspace
  Oracle method: create temporary tablespace

  Checkpointing a database (p. 1873)
  Amazon RDS method: rdsadmin.rdsadmin_util.checkpoint
  Oracle method: alter system checkpoint

  Setting distributed recovery (p. 1873)
  Amazon RDS method: rdsadmin.rdsadmin_util.enable_distr_recovery
  Oracle method: alter system enable distributed recovery

  Setting the database time zone (p. 1873)
  Amazon RDS method: rdsadmin.rdsadmin_util.alter_db_time_zone
  Oracle method: alter database set time_zone

  Working with Oracle external tables (p. 1874)
  —

  Generating performance reports with Automatic Workload Repository (AWR) (p. 1875)
  Amazon RDS method: rdsadmin.rdsadmin_diagnostic_util procedures
  Oracle method: dbms_workload_repository package

  Adjusting database links for use with DB instances in a VPC (p. 1879)
  —

  Setting the default edition for a DB instance (p. 1879)
  Amazon RDS method: rdsadmin.rdsadmin_util.alter_default_edition
  Oracle method: alter database default edition

  Enabling auditing for the SYS.AUD$ table (p. 1880)
  Amazon RDS method: rdsadmin.rdsadmin_master_util.audit_all_sys_aud_table
  Oracle method: audit

  Disabling auditing for the SYS.AUD$ table (p. 1880)
  Amazon RDS method: rdsadmin.rdsadmin_master_util.noaudit_all_sys_aud_table
  Oracle method: noaudit

  Cleaning up interrupted online index builds (p. 1881)
  Amazon RDS method: rdsadmin.rdsadmin_dbms_repair.online_index_clean
  Oracle method: dbms_repair.online_index_clean

  Skipping corrupt blocks (p. 1881)
  Amazon RDS method: Several rdsadmin.rdsadmin_dbms_repair procedures
  Oracle method: dbms_repair package

  Resizing tablespaces, data files, and temp files (p. 1883)
  Amazon RDS method: rdsadmin.rdsadmin_util.resize_temp_tablespace, rdsadmin.rdsadmin_util.resize_tempfile, or rdsadmin.rdsadmin_util.autoextend_tempfile procedures; rdsadmin.rdsadmin_util.resize_datafile or rdsadmin.rdsadmin_util.autoextend_datafile procedure
  Oracle method: —

  Purging the recycle bin (p. 1886)
  Amazon RDS method: EXEC rdsadmin.rdsadmin_util.purge_dba_recyclebin
  Oracle method: purge dba_recyclebin

  Setting the default displayed values for full redaction (p. 1887)
  Amazon RDS method: EXEC rdsadmin.rdsadmin_util.dbms_redact_upd_full_rdct_val
  Oracle method: exec dbms_redact.UPDATE_FULL_REDACTION_VALUES

• Log tasks (p. 1888)

  Setting force logging (p. 1889)
  Amazon RDS method: rdsadmin.rdsadmin_util.force_logging
  Oracle method: alter database force logging

  Setting supplemental logging (p. 1889)
  Amazon RDS method: rdsadmin.rdsadmin_util.alter_supplemental_logging
  Oracle method: alter database add supplemental log

  Switching online log files (p. 1890)
  Amazon RDS method: rdsadmin.rdsadmin_util.switch_logfile
  Oracle method: alter system switch logfile

  Adding online redo logs (p. 1890)
  Amazon RDS method: rdsadmin.rdsadmin_util.add_logfile

  Dropping online redo logs (p. 1891)
  Amazon RDS method: rdsadmin.rdsadmin_util.drop_logfile

  Resizing online redo logs (p. 1891)
  —

  Retaining archived redo logs (p. 1893)
  Amazon RDS method: rdsadmin.rdsadmin_util.set_configuration

  Downloading archived redo logs from Amazon S3 (p. 1895)
  Amazon RDS method: rdsadmin.rdsadmin_archive_log_download procedures

  Accessing online and archived redo logs (p. 1894)
  Amazon RDS method: rdsadmin.rdsadmin_master_util.creat

• RMAN tasks (p. 1897)

  Validating DB instance files (p. 1900)
  Amazon RDS method: rdsadmin_rman_util.procedure
  Oracle method: RMAN VALIDATE

  Enabling and disabling block change tracking (p. 1903)
  Amazon RDS method: rdsadmin_rman_util.procedure
  Oracle method: ALTER DATABASE

  Crosschecking archived redo logs (p. 1904)
  Amazon RDS method: rdsadmin_rman_util.crosscheck_archi
  Oracle method: RMAN BACKUP

  Backing up archived redo logs (p. 1905)
  Amazon RDS method: rdsadmin_rman_util.procedure
  Oracle method: RMAN BACKUP

  Performing a full database backup (p. 1910)
  Amazon RDS method: rdsadmin_rman_util.backup_database_
  Oracle method: RMAN BACKUP

  Performing an incremental database backup (p. 1911)
  Amazon RDS method: rdsadmin_rman_util.backup_database_
  Oracle method: RMAN BACKUP

  Backing up a tablespace (p. 1912)
  Amazon RDS method: rdsadmin_rman_util.backup_database_
  Oracle method: RMAN BACKUP

• Oracle Scheduler tasks (p. 1914)

  Modifying DBMS_SCHEDULER jobs (p. 1915)
  Amazon RDS method: dbms_scheduler.set_attribute
  Oracle method: dbms_scheduler.set_attribute

  Modifying AutoTask maintenance windows (p. 1915)
  Amazon RDS method: dbms_scheduler.set_attribute
  Oracle method: dbms_scheduler.set_attribute

  Setting the time zone for Oracle Scheduler jobs (p. 1916)
  Amazon RDS method: dbms_scheduler.set_scheduler_attribute
  Oracle method: dbms_scheduler.set_scheduler_attribute

  Turning off Oracle Scheduler jobs owned by SYS (p. 1917)
  Amazon RDS method: rdsadmin.rdsadmin_dbms_scheduler.disable
  Oracle method: dbms_scheduler.disable

  Turning on Oracle Scheduler jobs owned by SYS (p. 1917)
  Amazon RDS method: rdsadmin.rdsadmin_dbms_scheduler.enable
  Oracle method: dbms_scheduler.enable

  Modifying the Oracle Scheduler repeat interval for jobs of CALENDAR type (p. 1918)
  Amazon RDS method: rdsadmin.rdsadmin_dbms_scheduler.se
  Oracle method: dbms_scheduler.set_attribute

  Modifying the Oracle Scheduler repeat interval for jobs of NAMED type (p. 1918)
  Amazon RDS method: rdsadmin.rdsadmin_dbms_scheduler.se
  Oracle method: dbms_scheduler.set_attribute

  Turning off autocommit for Oracle Scheduler job creation (p. 1919)
  Amazon RDS method: rdsadmin.rdsadmin_dbms_scheduler.se
  Oracle method: dbms_isched.set_no_commit_flag

• Diagnostic tasks (p. 1919)

  Listing incidents (p. 1920)
  Amazon RDS method: rdsadmin.rdsadmin_adrci_util.list_a
  Oracle method: ADRCI command show incident

  Listing problems (p. 1922)
  Amazon RDS method: rdsadmin.rdsadmin_adrci_util.list_a
  Oracle method: ADRCI command show problem

  Creating incident packages (p. 1923)
  Amazon RDS method: rdsadmin.rdsadmin_adrci_util.create
  Oracle method: ADRCI command ips create package

  Showing trace files (p. 1925)
  Amazon RDS method: rdsadmin.rdsadmin_adrci_util.show_a
  Oracle method: ADRCI command show tracefile

• Other tasks (p. 1926)

  Creating and dropping directories in the main data storage space (p. 1926)
  Amazon RDS method: rdsadmin.rdsadmin_util.create_directory
  Oracle method: CREATE DIRECTORY
  Amazon RDS method: rdsadmin.rdsadmin_util.drop_directory
  Oracle method: DROP DIRECTORY

  Listing files in a DB instance directory (p. 1927)
  Amazon RDS method: rdsadmin.rds_file_util.listdir
  Oracle method: —

  Reading files in a DB instance directory (p. 1927)
  Amazon RDS method: rdsadmin.rds_file_util.read_text_file
  Oracle method: —

  Accessing Opatch files (p. 1928)
  Amazon RDS method: rdsadmin.rds_file_util.read_text_file or rdsadmin.tracefile_listing
  Oracle method: opatch

  Setting parameters for advisor tasks (p. 1930)
  Amazon RDS method: rdsadmin.rdsadmin_util.advisor_task
  Oracle method: Various stored package procedures

  Disabling AUTO_STATS_ADVISOR_TASK (p. 1931)
  Amazon RDS method: rdsadmin.rdsadmin_util.advisor_task
  Oracle method: —

  Re-enabling AUTO_STATS_ADVISOR_TASK (p. 1932)
  Amazon RDS method: rdsadmin.rdsadmin_util.dbms_stats_i
  Oracle method: —

You can also use Amazon RDS procedures for Amazon S3 integration with Oracle and for running OEM
Management Agent database tasks. For more information, see Amazon S3 integration (p. 1992) and
Performing database tasks with the Management Agent (p. 2045).

Performing common system tasks for Oracle DB instances

Following, you can find how to perform certain common DBA tasks related to the system on your
Amazon RDS DB instances running Oracle. To deliver a managed service experience, Amazon RDS doesn't
provide shell access to DB instances, and restricts access to certain system procedures and tables that
require advanced privileges.

Topics
• Disconnecting a session (p. 1855)
• Terminating a session (p. 1856)
• Canceling a SQL statement in a session (p. 1857)
• Enabling and disabling restricted sessions (p. 1858)
• Flushing the shared pool (p. 1858)
• Flushing the buffer cache (p. 1859)
• Flushing the database smart flash cache (p. 1859)
• Granting SELECT or EXECUTE privileges to SYS objects (p. 1859)
• Revoking SELECT or EXECUTE privileges on SYS objects (p. 1861)
• Granting privileges to non-master users (p. 1861)
• Creating custom functions to verify passwords (p. 1862)
• Setting up a custom DNS server (p. 1865)
• Setting and unsetting system diagnostic events (p. 1866)

Disconnecting a session
To disconnect the current session by ending the dedicated server process, use the Amazon RDS
procedure rdsadmin.rdsadmin_util.disconnect. The disconnect procedure has the following
parameters.

Parameter name Data type Default Required Description

sid number — Yes The session identifier.

serial number — Yes The serial number of the


session.


method varchar 'IMMEDIATE' No Valid values are


'IMMEDIATE' or
'POST_TRANSACTION'.

The following example disconnects a session.

begin
rdsadmin.rdsadmin_util.disconnect(
sid => sid,
serial => serial_number);
end;
/

To get the session identifier and the session serial number, query the V$SESSION view. The following
example gets all sessions for the user AWSUSER.

SELECT SID, SERIAL#, STATUS FROM V$SESSION WHERE USERNAME = 'AWSUSER';

The database must be open to use this method. For more information about disconnecting a session, see
ALTER SYSTEM in the Oracle documentation.

Terminating a session
To terminate a session, use the Amazon RDS procedure rdsadmin.rdsadmin_util.kill. The kill
procedure has the following parameters.

Parameter name Data type Default Required Description

sid number — Yes The session identifier.

serial number — Yes The serial number of the


session.

method varchar null No Valid values are


'IMMEDIATE' or
'PROCESS'. If you specify
IMMEDIATE, it has the
same effect as running the
following statement:

ALTER SYSTEM KILL


SESSION 'sid,serial#'
IMMEDIATE

If you specify PROCESS, you


terminate the processes
associated with a session.
Only specify PROCESS if
terminating the session
using IMMEDIATE was
unsuccessful.


To get the session identifier and the session serial number, query the V$SESSION view. The following
example gets all sessions for the user AWSUSER.

SELECT SID, SERIAL#, STATUS FROM V$SESSION WHERE USERNAME = 'AWSUSER';

The following example terminates a session.

BEGIN
rdsadmin.rdsadmin_util.kill(
sid => sid,
serial => serial_number,
method => 'IMMEDIATE');
END;
/

The following example terminates the processes associated with a session.

BEGIN
rdsadmin.rdsadmin_util.kill(
sid => sid,
serial => serial_number,
method => 'PROCESS');
END;
/

Canceling a SQL statement in a session


To cancel a SQL statement in a session, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.cancel.
Note
This procedure is supported for Oracle Database 19c (19.0.0) and all higher major and minor
versions of RDS for Oracle.

The cancel procedure has the following parameters.

Parameter name Data type Default Required Description

sid number — Yes The session identifier.

serial number — Yes The serial number of the


session.

sql_id varchar2 null No The SQL identifier of the


SQL statement.

The following example cancels a SQL statement in a session.

begin
rdsadmin.rdsadmin_util.cancel(
sid => sid,
serial => serial_number,
sql_id => sql_id);
end;
/

To get the session identifier, the session serial number, and the SQL identifier of a SQL statement, query
the V$SESSION view. The following example gets all sessions and SQL identifiers for the user AWSUSER.


select SID, SERIAL#, SQL_ID, STATUS from V$SESSION where USERNAME = 'AWSUSER';

Enabling and disabling restricted sessions


To enable and disable restricted sessions, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.restricted_session. The restricted_session procedure has the
following parameters.

Parameter name Data type Default Required Description

p_enable boolean true No Set to true to enable restricted sessions, false to disable restricted sessions.

The following example shows how to enable and disable restricted sessions.

/* Verify that the database is currently unrestricted. */

SELECT LOGINS FROM V$INSTANCE;

LOGINS
-------
ALLOWED

/* Enable restricted sessions */

EXEC rdsadmin.rdsadmin_util.restricted_session(p_enable => true);

/* Verify that the database is now restricted. */

SELECT LOGINS FROM V$INSTANCE;

LOGINS
----------
RESTRICTED

/* Disable restricted sessions */

EXEC rdsadmin.rdsadmin_util.restricted_session(p_enable => false);

/* Verify that the database is now unrestricted again. */

SELECT LOGINS FROM V$INSTANCE;

LOGINS
-------
ALLOWED

Flushing the shared pool


To flush the shared pool, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.flush_shared_pool. The flush_shared_pool procedure has no
parameters.


The following example flushes the shared pool.

EXEC rdsadmin.rdsadmin_util.flush_shared_pool;

Flushing the buffer cache


To flush the buffer cache, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.flush_buffer_cache. The flush_buffer_cache procedure has no
parameters.

The following example flushes the buffer cache.

EXEC rdsadmin.rdsadmin_util.flush_buffer_cache;

Flushing the database smart flash cache


To flush the database smart flash cache, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.flush_flash_cache. The flush_flash_cache procedure has no
parameters. The following example flushes the database smart flash cache.

EXEC rdsadmin.rdsadmin_util.flush_flash_cache;

For more information about using the database smart flash cache with RDS for Oracle, see Storing
temporary data in an RDS for Oracle instance store (p. 1936).

Granting SELECT or EXECUTE privileges to SYS objects


Usually you transfer privileges by using roles, which can contain many objects. To grant privileges to a
single object, use the Amazon RDS procedure rdsadmin.rdsadmin_util.grant_sys_object. The
procedure grants only privileges that the master user has already been granted through a role or direct
grant.

The grant_sys_object procedure has the following parameters.


Important
For all parameter values, use uppercase unless you created the user with a case-sensitive
identifier. For example, if you run CREATE USER myuser or CREATE USER MYUSER, the data
dictionary stores MYUSER. However, if you use double quotes in CREATE USER "MyUser", the
data dictionary stores MyUser.

Parameter name Data type Default Required Description

p_obj_name varchar2 — Yes The name of the object to grant privileges for. The object can be a directory, function, package, procedure, sequence, table, or view. Object names must be spelled exactly as they appear in DBA_OBJECTS. Most system objects are defined in uppercase, so we recommend that you try that first.

p_grantee varchar2 — Yes The name of the object to grant privileges to. The object can be a schema or a role.

p_privilege varchar2 null Yes —

p_grant_option boolean false No Set to true to use the with grant option. The p_grant_option parameter is supported for 12.1.0.2.v4 and later, all 12.2.0.1 versions, and all 19.0.0 versions.

The following example grants select privileges on an object named V_$SESSION to a user named USER1.

begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'V_$SESSION',
p_grantee => 'USER1',
p_privilege => 'SELECT');
end;
/

The following example grants select privileges on an object named V_$SESSION to a user named USER1
with the grant option.

begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'V_$SESSION',
p_grantee => 'USER1',
p_privilege => 'SELECT',
p_grant_option => true);
end;
/

To be able to grant privileges on an object, your account must have those privileges granted to it directly
with the grant option, or via a role granted using with admin option. In the most common case, you
may want to grant SELECT on a DBA view that has been granted to the SELECT_CATALOG_ROLE role. If
that role isn't already directly granted to your user using with admin option, then you can't transfer
the privilege. If you have the DBA privilege, then you can grant the role directly to another user.

The following example grants the SELECT_CATALOG_ROLE and EXECUTE_CATALOG_ROLE to USER1.


Since the with admin option is used, USER1 can now grant access to SYS objects that have been
granted to SELECT_CATALOG_ROLE.

GRANT SELECT_CATALOG_ROLE TO USER1 WITH ADMIN OPTION;


GRANT EXECUTE_CATALOG_ROLE to USER1 WITH ADMIN OPTION;

Objects already granted to PUBLIC do not need to be re-granted. If you use the grant_sys_object
procedure to re-grant access, the procedure call succeeds.
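To confirm that a grant succeeded, you can query DBA_TAB_PRIVS as the master user. The following query is a general-purpose check that uses the object and grantee from the preceding examples.

SELECT GRANTEE, PRIVILEGE, GRANTABLE
FROM   DBA_TAB_PRIVS
WHERE  TABLE_NAME = 'V_$SESSION'
AND    GRANTEE = 'USER1';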


Revoking SELECT or EXECUTE privileges on SYS objects


To revoke privileges on a single object, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.revoke_sys_object. The procedure only revokes privileges that the
master account has already been granted through a role or direct grant.

The revoke_sys_object procedure has the following parameters.

Parameter name Data type Default Required Description

p_obj_name varchar2 — Yes The name of the object to


revoke privileges for. The
object can be a directory,
function, package,
procedure, sequence, table,
or view. Object names must
be spelled exactly as they
appear in DBA_OBJECTS.
Most system objects are
defined in upper case, so
we recommend you try that
first.

p_revokee varchar2 — Yes The name of the object to


revoke privileges for. The
object can be a schema or a
role.

p_privilege varchar2 null Yes —

The following example revokes select privileges on an object named V_$SESSION from a user named
USER1.

begin
rdsadmin.rdsadmin_util.revoke_sys_object(
p_obj_name => 'V_$SESSION',
p_revokee => 'USER1',
p_privilege => 'SELECT');
end;
/

Granting privileges to non-master users


You can grant select privileges for many objects in the SYS schema by using the
SELECT_CATALOG_ROLE role. The SELECT_CATALOG_ROLE role gives users SELECT privileges on
data dictionary views. The following example grants the role SELECT_CATALOG_ROLE to a user named
user1.

GRANT SELECT_CATALOG_ROLE TO user1;

You can grant EXECUTE privileges for many objects in the SYS schema by using the
EXECUTE_CATALOG_ROLE role. The EXECUTE_CATALOG_ROLE role gives users EXECUTE privileges
for packages and procedures in the data dictionary. The following example grants the role
EXECUTE_CATALOG_ROLE to a user named user1.


GRANT EXECUTE_CATALOG_ROLE TO user1;

The following example gets the permissions that the roles SELECT_CATALOG_ROLE and
EXECUTE_CATALOG_ROLE allow.

SELECT *
FROM ROLE_TAB_PRIVS
WHERE ROLE IN ('SELECT_CATALOG_ROLE','EXECUTE_CATALOG_ROLE')
ORDER BY ROLE, TABLE_NAME ASC;

The following example creates a non-master user named user1, grants the CREATE SESSION privilege,
and grants the SELECT privilege on a database named sh.sales.

CREATE USER user1 IDENTIFIED BY PASSWORD;


GRANT CREATE SESSION TO user1;
GRANT SELECT ON sh.sales TO user1;

Creating custom functions to verify passwords


You can create a custom password verification function in the following ways:

• To use standard verification logic, and to store your function in the SYS schema, use the
create_verify_function procedure.
• To use custom verification logic, or to avoid storing your function in the SYS schema, use the
create_passthrough_verify_fcn procedure.

The create_verify_function procedure


You can create a custom function to verify passwords by using the Amazon RDS
procedure rdsadmin.rdsadmin_password_verify.create_verify_function. The
create_verify_function procedure is supported for version 12.1.0.2.v5 and all higher major and
minor versions of RDS for Oracle.

The create_verify_function procedure has the following parameters.

Parameter name Data type Default Required Description

p_verify_function_name varchar2 — Yes The name for your custom


function. This function is
created for you in the SYS
schema. You assign this
function to user profiles.

p_min_length number 8 No The minimum number of


characters required.

p_max_length number 256 No The maximum number of


characters allowed.

p_min_letters number 1 No The minimum number of


letters required.

p_min_uppercase number 0 No The minimum number of


uppercase letters required.


p_min_lowercase number 0 No The minimum number of


lowercase letters required.

p_min_digits number 1 No The minimum number of


digits required.

p_min_special number 0 No The minimum number of


special characters required.

p_min_different_chars number 3 No The minimum number


of different characters
required between the old
and new password.

p_disallow_username boolean true No Set to true to disallow the


user name in the password.

p_disallow_reverse boolean true No Set to true to disallow the


reverse of the user name in
the password.

p_disallow_db_name boolean true No Set to true to disallow the


database or server name in
the password.

p_disallow_simple_strings boolean true No Set to true to disallow simple strings as the password.

p_disallow_whitespace boolean false No Set to true to disallow


white space characters in
the password.

p_disallow_at_sign boolean false No Set to true to disallow


the @ character in the
password.

You can create multiple password verification functions.

There are restrictions on the name of your custom function. Your custom function can't have the same
name as an existing system object. The name can be no more than 30 characters long. Also, the name
must include one of the following strings: PASSWORD, VERIFY, COMPLEXITY, ENFORCE, or STRENGTH.

The following example creates a function named CUSTOM_PASSWORD_FUNCTION. The function requires
that a password has at least 12 characters, 2 uppercase characters, 1 digit, and 1 special character, and
that the password disallows the @ character.

begin
rdsadmin.rdsadmin_password_verify.create_verify_function(
p_verify_function_name => 'CUSTOM_PASSWORD_FUNCTION',
p_min_length => 12,
p_min_uppercase => 2,
p_min_digits => 1,
p_min_special => 1,
p_disallow_at_sign => true);
end;
/


To see the text of your verification function, query DBA_SOURCE. The following example gets the text of
a custom password function named CUSTOM_PASSWORD_FUNCTION.

COL TEXT FORMAT a150

SELECT TEXT
FROM DBA_SOURCE
WHERE OWNER = 'SYS'
AND NAME = 'CUSTOM_PASSWORD_FUNCTION'
ORDER BY LINE;

To associate your verification function with a user profile, use alter profile. The following example
associates a verification function with the DEFAULT user profile.

ALTER PROFILE DEFAULT LIMIT PASSWORD_VERIFY_FUNCTION CUSTOM_PASSWORD_FUNCTION;

To see what user profiles are associated with what verification functions, query DBA_PROFILES. The
following example gets the profiles that are associated with the custom verification function named
CUSTOM_PASSWORD_FUNCTION.

SELECT * FROM DBA_PROFILES WHERE RESOURCE_NAME = 'PASSWORD_VERIFY_FUNCTION' AND LIMIT =
'CUSTOM_PASSWORD_FUNCTION';

PROFILE                   RESOURCE_NAME                    RESOURCE LIMIT
------------------------- -------------------------------- -------- ------------------------
DEFAULT                   PASSWORD_VERIFY_FUNCTION         PASSWORD CUSTOM_PASSWORD_FUNCTION

The following example gets all profiles and the password verification functions that they are associated
with.

SELECT * FROM DBA_PROFILES WHERE RESOURCE_NAME = 'PASSWORD_VERIFY_FUNCTION';

PROFILE                   RESOURCE_NAME                    RESOURCE LIMIT
------------------------- -------------------------------- -------- ------------------------
DEFAULT                   PASSWORD_VERIFY_FUNCTION         PASSWORD CUSTOM_PASSWORD_FUNCTION
RDSADMIN                  PASSWORD_VERIFY_FUNCTION         PASSWORD NULL
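After you associate a verification function with a profile, Oracle applies it the next time a password is set. For example, with the CUSTOM_PASSWORD_FUNCTION shown earlier in effect, a password shorter than the 12-character minimum is rejected. The user name in the following statement is hypothetical.

ALTER USER user1 IDENTIFIED BY short1;

This statement fails with an error similar to ORA-28003: password verification for the specified password failed.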

The create_passthrough_verify_fcn procedure


The create_passthrough_verify_fcn procedure is supported for version 12.1.0.2.v7 and all higher
major and minor versions of RDS for Oracle.

You can create a custom function to verify passwords by using the Amazon RDS procedure
rdsadmin.rdsadmin_password_verify.create_passthrough_verify_fcn. The
create_passthrough_verify_fcn procedure has the following parameters.

Parameter name Data type Default Required Description

p_verify_function_name varchar2 — Yes The name for your custom


verification function. This
is a wrapper function that
is created for you in the



SYS schema, and it doesn't
contain any verification
logic. You assign this
function to user profiles.

p_target_owner varchar2 — Yes The schema owner for


your custom verification
function.

p_target_function_name varchar2 — Yes The name of your existing


custom function that
contains the verification
logic. Your custom function
must return a boolean.
Your function should return
true if the password is
valid and false if the
password is invalid.

The following example creates a password verification function that uses the logic from the function
named PASSWORD_LOGIC_EXTRA_STRONG.

begin
rdsadmin.rdsadmin_password_verify.create_passthrough_verify_fcn(
p_verify_function_name => 'CUSTOM_PASSWORD_FUNCTION',
p_target_owner => 'TEST_USER',
p_target_function_name => 'PASSWORD_LOGIC_EXTRA_STRONG');
end;
/
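The target function itself is ordinary PL/SQL that you create in your own schema. The following sketch shows what a function like PASSWORD_LOGIC_EXTRA_STRONG might look like. It assumes the standard Oracle verification-function signature (user name, new password, old password), and its rules are made up purely for illustration.

CREATE OR REPLACE FUNCTION TEST_USER.PASSWORD_LOGIC_EXTRA_STRONG (
  username     IN VARCHAR2,
  password     IN VARCHAR2,
  old_password IN VARCHAR2
) RETURN BOOLEAN
IS
BEGIN
  -- Hypothetical rule: require at least 16 characters.
  IF LENGTH(password) < 16 THEN
    RETURN FALSE;
  END IF;
  -- Hypothetical rule: reject passwords that contain the user name.
  IF INSTR(UPPER(password), UPPER(username)) > 0 THEN
    RETURN FALSE;
  END IF;
  RETURN TRUE;
END;
/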

To associate the verification function with a user profile, use alter profile. The following example
associates the verification function with the DEFAULT user profile.

ALTER PROFILE DEFAULT LIMIT PASSWORD_VERIFY_FUNCTION CUSTOM_PASSWORD_FUNCTION;

Setting up a custom DNS server


Amazon RDS supports outbound network access on your DB instances running Oracle. For more
information about outbound network access, including prerequisites, see Configuring UTL_HTTP access
using certificates and an Oracle wallet (p. 1832).

Amazon RDS Oracle allows Domain Name Service (DNS) resolution from a custom DNS server owned by
the customer. You can resolve only fully qualified domain names from your Amazon RDS DB instance
through your custom DNS server.

After you set up your custom DNS name server, it takes up to 30 minutes to propagate the changes to
your DB instance. After the changes are propagated to your DB instance, all outbound network traffic
requiring a DNS lookup queries your DNS server over port 53.

To set up a custom DNS server for your Amazon RDS for Oracle DB instance, do the following:

• From the DHCP options set attached to your virtual private cloud (VPC), set the domain-name-servers option to the IP address of your DNS name server, as shown in the example CLI commands after this list. For more information, see DHCP options sets.


Note
The domain-name-servers option accepts up to four values, but your Amazon RDS DB
instance uses only the first value.
• Ensure that your DNS server can resolve all lookup queries, including public DNS names, Amazon EC2
private DNS names, and customer-specific DNS names. If the outbound network traffic contains any
DNS lookups that your DNS server can't handle, your DNS server must have appropriate upstream DNS
providers configured.
• Configure your DNS server to produce User Datagram Protocol (UDP) responses of 512 bytes or less.
• Configure your DNS server to produce Transmission Control Protocol (TCP) responses of 1024 bytes or
less.
• Configure your DNS server to allow inbound traffic from your Amazon RDS DB instances over port 53.
If your DNS server is in an Amazon VPC, the VPC must have a security group that contains inbound
rules that permit UDP and TCP traffic on port 53. If your DNS server is not in an Amazon VPC, it must
have appropriate firewall allow-listing to permit UDP and TCP inbound traffic on port 53.

For more information, see Security groups for your VPC and Adding and removing rules.
• Configure the VPC of your Amazon RDS DB instance to allow outbound traffic over port 53. Your VPC
must have a security group that contains outbound rules that allow UDP and TCP traffic on port 53.

For more information, see Security groups for your VPC and Adding and removing rules.
• The routing path between the Amazon RDS DB instance and the DNS server has to be configured
correctly to allow DNS traffic.
• If the Amazon RDS DB instance and the DNS server are not in the same VPC, a peering connection
has to be set up between them. For more information, see What is VPC peering?
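The following AWS CLI commands sketch the DHCP options step described in the first item of this list. The DNS server IP address, DHCP options set ID, and VPC ID are placeholders; because an existing DHCP options set can't be modified, you create a new one and then associate it with your VPC.

aws ec2 create-dhcp-options \
    --dhcp-configurations "Key=domain-name-servers,Values=10.0.0.10"

aws ec2 associate-dhcp-options \
    --dhcp-options-id dopt-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0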

Setting and unsetting system diagnostic events


To set and unset diagnostic events at the session level, you can use the Oracle SQL statement ALTER
SESSION SET EVENTS. However, to set events at the system level you can't use Oracle SQL. Instead,
use the system event procedures in the rdsadmin.rdsadmin_util package. The system event
procedures are available in the following engine versions:

• All Oracle Database 21c versions


• 19.0.0.0.ru-2020-10.rur-2020-10.r1 and higher Oracle Database 19c versions

For more information, see Version 19.0.0.0.ru-2020-10.rur-2020-10.r1 in the Amazon RDS for Oracle
Release Notes.
• 12.2.0.1.ru-2020-10.rur-2020-10.r1 and higher Oracle Database 12c Release 2 (12.2.0.1) versions

For more information, see Version 12.2.0.1.ru-2020-10.rur-2020-10.r1 in the Amazon RDS for Oracle
Release Notes.
• 12.1.0.2.V22 and higher Oracle Database 12c Release 1 (12.1.0.2) versions

For more information, see Version 12.1.0.2.v22 in the Amazon RDS for Oracle Release Notes.

Important
Internally, the rdsadmin.rdsadmin_util package sets events by using the ALTER SYSTEM
SET EVENTS statement. This ALTER SYSTEM statement isn't documented in the Oracle
Database documentation. Some system diagnostic events can generate large amounts of
tracing information, cause contention, or affect database availability. We recommend that you
test specific diagnostic events in your nonproduction database, and only set events in your
production database under guidance of Oracle Support.


Listing allowed system diagnostic events


To list the system events that you can set, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.list_allowed_system_events. This procedure accepts no parameters.

The following example lists all system events that you can set.

SET SERVEROUTPUT ON
EXEC rdsadmin.rdsadmin_util.list_allowed_system_events;

The following sample output lists event numbers and their descriptions. Use the Amazon RDS procedures
set_system_event to set these events and unset_system_event to unset them.

604 - error occurred at recursive SQL level


942 - table or view does not exist
1401 - inserted value too large for column
1403 - no data found
1410 - invalid ROWID
1422 - exact fetch returns more than requested number of rows
1426 - numeric overflow
1427 - single-row subquery returns more than one row
1476 - divisor is equal to zero
1483 - invalid length for DATE or NUMBER bind variable
1489 - result of string concatenation is too long
1652 - unable to extend temp segment by in tablespace
1858 - a non-numeric character was found where a numeric was expected
4031 - unable to allocate bytes of shared memory ("","","","")
6502 - PL/SQL: numeric or value error
10027 - Specify Deadlock Trace Information to be Dumped
10046 - enable SQL statement timing
10053 - CBO Enable optimizer trace
10173 - Dynamic Sampling time-out error
10442 - enable trace of kst for ORA-01555 diagnostics
12008 - error in materialized view refresh path
12012 - error on auto execute of job
12504 - TNS:listener was not given the SERVICE_NAME in CONNECT_DATA
14400 - inserted partition key does not map to any partition
31693 - Table data object failed to load/unload and is being skipped due to error:

Note
The list of the allowed system events can change over time. To
make sure that you have the most recent list of eligible events, use
rdsadmin.rdsadmin_util.list_allowed_system_events.

Setting system diagnostic events


To set a system event, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.set_system_event. You can only set events listed in the output of
rdsadmin.rdsadmin_util.list_allowed_system_events. The set_system_event procedure
accepts the following parameters.

Parameter name Data type Default Required Description

p_event number — Yes The system event


number. The value must
be one of the event
numbers reported by
list_allowed_system_events.


p_level number — Yes The event level. See


the Oracle Database
documentation or Oracle
Support for descriptions of
different level values.

The procedure set_system_event constructs and runs the required ALTER SYSTEM SET EVENTS
statements according to the following principles:

• The event type (context or errorstack) is determined automatically.


• A statement in the form ALTER SYSTEM SET EVENTS 'event LEVEL event_level' sets the
context events. This notation is equivalent to ALTER SYSTEM SET EVENTS 'event TRACE NAME
CONTEXT FOREVER, LEVEL event_level'.
• A statement in the form ALTER SYSTEM SET EVENTS 'event ERRORSTACK (event_level)'
sets the error stack events. This notation is equivalent to ALTER SYSTEM SET EVENTS 'event
TRACE NAME ERRORSTACK LEVEL event_level'.

The following example sets event 942 at level 3, and event 10442 at level 10. Sample output is included.

SQL> SET SERVEROUTPUT ON


SQL> EXEC rdsadmin.rdsadmin_util.set_system_event(942,3);
Setting system event 942 with: alter system set events '942 errorstack (3)'

PL/SQL procedure successfully completed.

SQL> EXEC rdsadmin.rdsadmin_util.set_system_event(10442,10);


Setting system event 10442 with: alter system set events '10442 level 10'

PL/SQL procedure successfully completed.

Listing system diagnostic events that are set


To list the system events that are currently set, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.list_set_system_events. This procedure reports only events set at
system level by set_system_event.

The following example lists the active system events.

SET SERVEROUTPUT ON
EXEC rdsadmin.rdsadmin_util.list_set_system_events;

The following sample output shows the list of events, the event type, the level at which the events are
currently set, and the time when the event was set.

942 errorstack (3) - set at 2020-11-03 11:42:27


10442 level 10 - set at 2020-11-03 11:42:41

PL/SQL procedure successfully completed.

Unsetting system diagnostic events


To unset a system event, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.unset_system_event. You can only unset events listed in the output of


rdsadmin.rdsadmin_util.list_allowed_system_events. The unset_system_event procedure


accepts the following parameter.

Parameter name Data type Default Required Description

p_event number — Yes The system event


number. The value must
be one of the event
numbers reported by
list_allowed_system_events.

The following example unsets events 942 and 10442. Sample output is included.

SQL> SET SERVEROUTPUT ON


SQL> EXEC rdsadmin.rdsadmin_util.unset_system_event(942);
Unsetting system event 942 with: alter system set events '942 off'

PL/SQL procedure successfully completed.

SQL> EXEC rdsadmin.rdsadmin_util.unset_system_event(10442);


Unsetting system event 10442 with: alter system set events '10442 off'

PL/SQL procedure successfully completed.

Performing common database tasks for Oracle DB instances

Following, you can find how to perform certain common DBA tasks related to databases on your Amazon
RDS DB instances running Oracle. To deliver a managed service experience, Amazon RDS doesn't provide
shell access to DB instances. Amazon RDS also restricts access to some system procedures and tables that
require advanced privileges.

Topics
• Changing the global name of a database (p. 1870)
• Creating and sizing tablespaces (p. 1870)
• Setting the default tablespace (p. 1871)
• Setting the default temporary tablespace (p. 1871)
• Creating a temporary tablespace on the instance store (p. 1871)
• Adding a tempfile to the instance store on a read replica (p. 1872)
• Dropping tempfiles on a read replica (p. 1872)
• Checkpointing a database (p. 1873)
• Setting distributed recovery (p. 1873)
• Setting the database time zone (p. 1873)
• Working with Oracle external tables (p. 1874)
• Generating performance reports with Automatic Workload Repository (AWR) (p. 1875)
• Adjusting database links for use with DB instances in a VPC (p. 1879)
• Setting the default edition for a DB instance (p. 1879)
• Enabling auditing for the SYS.AUD$ table (p. 1880)
• Disabling auditing for the SYS.AUD$ table (p. 1880)
• Cleaning up interrupted online index builds (p. 1881)


• Skipping corrupt blocks (p. 1881)


• Resizing tablespaces, data files, and temp files (p. 1883)
• Purging the recycle bin (p. 1886)
• Setting the default displayed values for full redaction (p. 1887)

Changing the global name of a database


To change the global name of a database, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.rename_global_name. The rename_global_name procedure has the
following parameters.

Parameter name Data type Default Required Description

p_new_global_name varchar2 — Yes The new global name for


the database.

The database must be open for the name change to occur. For more information about changing the
global name of a database, see ALTER DATABASE in the Oracle documentation.

The following example changes the global name of a database to new_global_name.

EXEC rdsadmin.rdsadmin_util.rename_global_name(p_new_global_name => 'new_global_name');

Creating and sizing tablespaces


Amazon RDS only supports Oracle Managed Files (OMF) for data files, log files, and control files. When
you create data files and log files, you can't specify the physical file names.

By default, if you don't specify a data file size, tablespaces are created with the default of AUTOEXTEND
ON, and no maximum size. In the following example, the tablespace users1 is autoextensible.

CREATE TABLESPACE users1;

Because of these default settings, tablespaces can grow to consume all allocated storage. We
recommend that you specify an appropriate maximum size on permanent and temporary tablespaces,
and that you carefully monitor space usage.

The following example creates a tablespace named users2 with a starting size of 1 gigabyte. Because a
data file size is specified, but AUTOEXTEND ON isn't specified, the tablespace isn't autoextensible.

CREATE TABLESPACE users2 DATAFILE SIZE 1G;

The following example creates a tablespace named users3 with a starting size of 1 gigabyte,
autoextend turned on, and a maximum size of 10 gigabytes.

CREATE TABLESPACE users3 DATAFILE SIZE 1G AUTOEXTEND ON MAXSIZE 10G;

The following example creates a temporary tablespace named temp01.

CREATE TEMPORARY TABLESPACE temp01;


We recommend that you don't use smallfile tablespaces because you can't resize smallfile tablespaces
with RDS for Oracle. However, you can add a data file to a smallfile tablespace. To determine whether a
tablespace is bigfile or smallfile, query DBA_TABLESPACES as follows.

SELECT TABLESPACE_NAME, BIGFILE FROM DBA_TABLESPACES;

You can resize a bigfile tablespace by using ALTER TABLESPACE. You can specify the size in kilobytes
(K), megabytes (M), gigabytes (G), or terabytes (T). The following example resizes a bigfile tablespace
named users_bf to 200 MB.

ALTER TABLESPACE users_bf RESIZE 200M;

The following example adds an additional data file to a smallfile tablespace named users_sf.

ALTER TABLESPACE users_sf ADD DATAFILE SIZE 100000M AUTOEXTEND ON NEXT 250m
MAXSIZE UNLIMITED;

Setting the default tablespace


To set the default tablespace, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.alter_default_tablespace. The alter_default_tablespace
procedure has the following parameters.

Parameter name Data type Default Required Description

tablespace_name varchar — Yes The name of the default


tablespace.

The following example sets the default tablespace to users2:

EXEC rdsadmin.rdsadmin_util.alter_default_tablespace(tablespace_name => 'users2');

Setting the default temporary tablespace


To set the default temporary tablespace, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.alter_default_temp_tablespace. The
alter_default_temp_tablespace procedure has the following parameters.

Parameter name Data type Default Required Description

tablespace_name varchar — Yes The name of the default


temporary tablespace.

The following example sets the default temporary tablespace to temp01.

EXEC rdsadmin.rdsadmin_util.alter_default_temp_tablespace(tablespace_name => 'temp01');

Creating a temporary tablespace on the instance store


To create a temporary tablespace on the instance store, use the Amazon RDS
procedure rdsadmin.rdsadmin_util.create_inst_store_tmp_tblspace. The
create_inst_store_tmp_tblspace procedure has the following parameters.


Parameter name Data type Default Required Description

p_tablespace_name varchar — Yes The name of the temporary


tablespace.

The following example creates the temporary tablespace temp01 in the instance store.

EXEC rdsadmin.rdsadmin_util.create_inst_store_tmp_tblspace(p_tablespace_name => 'temp01');

Important
When you run rdsadmin_util.create_inst_store_tmp_tblspace, the newly created
temporary tablespace is not automatically set as the default temporary tablespace. To set it as
the default, see Setting the default temporary tablespace (p. 1871).

For more information, see Storing temporary data in an RDS for Oracle instance store (p. 1936).

Adding a tempfile to the instance store on a read replica


When you create a temporary tablespace on a primary DB instance, the read replica doesn't create
tempfiles. Assume that an empty temporary tablespace exists on your read replica for either of the
following reasons:

• You dropped a tempfile from the tablespace on your read replica. For more information, see Dropping
tempfiles on a read replica (p. 1872).
• You created a new temporary tablespace on the primary DB instance. In this case, RDS for Oracle
synchronizes the metadata to the read replica.

You can add a tempfile to the empty temporary tablespace, and store the tempfile in the
instance store. To create a tempfile in the instance store, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.add_inst_store_tempfile. You can use this procedure only on a read
replica. The procedure has the following parameters.

Parameter name Data type Default Required Description

p_tablespace_name varchar — Yes The name of the temporary


tablespace on your read
replica.

In the following example, the empty temporary tablespace temp01 exists on your read replica. Run the
following command to create a tempfile for this tablespace, and store it in the instance store.

EXEC rdsadmin.rdsadmin_util.add_inst_store_tempfile(p_tablespace_name => 'temp01');

For more information, see Storing temporary data in an RDS for Oracle instance store (p. 1936).

Dropping tempfiles on a read replica


You can't drop an existing temporary tablespace on a read replica. You can change the tempfile storage
on a read replica from Amazon EBS to the instance store, or from the instance store to Amazon EBS. To
achieve these goals, do the following:

1. Drop the current tempfiles in the temporary tablespace on the read replica.
2. Create new tempfiles on different storage.


To drop the tempfiles, use the Amazon RDS procedure rdsadmin.rdsadmin_util.drop_replica_tempfiles.
You can use this procedure only on read replicas. The drop_replica_tempfiles procedure has the
following parameters.

Parameter name Data type Default Required Description

p_tablespace_name varchar — Yes The name of the temporary


tablespace on your read
replica.

Assume that a temporary tablespace named temp01 resides in the instance store on your read replica.
Drop all tempfiles in this tablespace by running the following command.

EXEC rdsadmin.rdsadmin_util.drop_replica_tempfiles(p_tablespace_name => 'temp01');

For more information, see Storing temporary data in an RDS for Oracle instance store (p. 1936).

Checkpointing a database
To checkpoint the database, use the Amazon RDS procedure rdsadmin.rdsadmin_util.checkpoint.
The checkpoint procedure has no parameters.

The following example checkpoints the database.

EXEC rdsadmin.rdsadmin_util.checkpoint;

Setting distributed recovery


To set distributed recovery, use the Amazon RDS procedures
rdsadmin.rdsadmin_util.enable_distr_recovery and disable_distr_recovery. The
procedures have no parameters.

The following example enables distributed recovery.

EXEC rdsadmin.rdsadmin_util.enable_distr_recovery;

The following example disables distributed recovery.

EXEC rdsadmin.rdsadmin_util.disable_distr_recovery;

Setting the database time zone


You can set the time zone of your Amazon RDS Oracle database in the following ways:

• The Timezone option

The Timezone option changes the time zone at the host level and affects all date columns and values
such as SYSDATE. For more information, see Oracle time zone (p. 2087).
• The Amazon RDS procedure rdsadmin.rdsadmin_util.alter_db_time_zone

The alter_db_time_zone procedure changes the time zone for only certain data types, and doesn't
change SYSDATE. There are additional restrictions on setting the time zone listed in the Oracle
documentation.


Note
You can also set the default time zone for Oracle Scheduler. For more information, see Setting
the time zone for Oracle Scheduler jobs (p. 1916).

The alter_db_time_zone procedure has the following parameters.

Parameter name Data type Default Required Description

p_new_tz varchar2 — Yes The new time zone as


a named region or an
absolute offset from
Coordinated Universal Time
(UTC). Valid offsets range
from -12:00 to +14:00.

The following example changes the time zone to UTC plus three hours.

EXEC rdsadmin.rdsadmin_util.alter_db_time_zone(p_new_tz => '+3:00');

The following example changes the time zone to the Africa/Algiers time zone.

EXEC rdsadmin.rdsadmin_util.alter_db_time_zone(p_new_tz => 'Africa/Algiers');

After you alter the time zone by using the alter_db_time_zone procedure, reboot your DB instance
for the change to take effect. For more information, see Rebooting a DB instance (p. 436). For
information about upgrading time zones, see Time zone considerations (p. 2110).
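For example, you can reboot the instance from the AWS CLI. The instance identifier in the following command is a placeholder.

aws rds reboot-db-instance --db-instance-identifier mydbinstance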

Working with Oracle external tables


Oracle external tables are tables with data that is not in the database. Instead, the data is in external
files that the database can access. By using external tables, you can access data without loading it into
the database. For more information about external tables, see Managing external tables in the Oracle
documentation.

With Amazon RDS, you can store external table files in directory objects. You can create a directory
object, or you can use one that is predefined in the Oracle database, such as the DATA_PUMP_DIR
directory. For information about creating directory objects, see Creating and dropping directories in the
main data storage space (p. 1926). You can query the ALL_DIRECTORIES view to list the directory objects
for your Amazon RDS Oracle DB instance.
Note
Directory objects point to the main data storage space (Amazon EBS volume) used by your
instance. The space used—along with data files, redo logs, audit, trace, and other files—counts
against allocated storage.
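
For example, a query like the following lists the directory objects and their paths. It uses only the standard
Oracle data dictionary, so no RDS-specific objects are assumed.

SELECT DIRECTORY_NAME, DIRECTORY_PATH
  FROM ALL_DIRECTORIES
 ORDER BY DIRECTORY_NAME;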

You can move an external data file from one Oracle database to another by using the
DBMS_FILE_TRANSFER package or the UTL_FILE package. The external data file is moved from a
directory on the source database to the specified directory on the destination database. For information
about using DBMS_FILE_TRANSFER, see Importing using Oracle Data Pump (p. 1948).

After you move the external data file, you can create an external table with it. The following example
creates an external table that uses the emp_xt_file1.txt file in the USER_DIR1 directory.

CREATE TABLE emp_xt (
  emp_id      NUMBER,
  first_name  VARCHAR2(50),
  last_name   VARCHAR2(50),
  user_name   VARCHAR2(20)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY USER_DIR1
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
    (emp_id, first_name, last_name, user_name)
  )
  LOCATION ('emp_xt_file1.txt')
)
PARALLEL
REJECT LIMIT UNLIMITED;

Suppose that you want to move data that is in an Amazon RDS Oracle DB instance into an external data
file. In this case, you can populate the external data file by creating an external table and selecting the
data from the table in the database. For example, the following SQL statement creates the orders_xt
external table by querying the orders table in the database.

CREATE TABLE orders_xt
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY DATA_PUMP_DIR
  LOCATION ('orders_xt.dmp')
)
AS SELECT * FROM orders;

In this example, the data is populated in the orders_xt.dmp file in the DATA_PUMP_DIR directory.

Generating performance reports with Automatic Workload Repository (AWR)
To gather performance data and generate reports, Oracle recommends Automatic Workload Repository
(AWR). AWR requires Oracle Database Enterprise Edition and a license for the Diagnostics and Tuning
packs. To enable AWR, set the CONTROL_MANAGEMENT_PACK_ACCESS initialization parameter to either
DIAGNOSTIC or DIAGNOSTIC+TUNING.
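
To confirm the current setting before generating reports, you can query V$PARAMETER, as in the following example.
On RDS for Oracle, you typically change this parameter through the DB parameter group associated with your
instance rather than with ALTER SYSTEM.

SELECT NAME, VALUE
  FROM V$PARAMETER
 WHERE NAME = 'control_management_pack_access';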

Working with AWR reports in RDS


To generate AWR reports, you can run scripts such as awrrpt.sql. These scripts are installed on the
database host server. In Amazon RDS, you don't have direct access to the host. However, you can get
copies of SQL scripts from another installation of Oracle Database.

You can also use AWR by running procedures in the SYS.DBMS_WORKLOAD_REPOSITORY PL/SQL package. You can use this
package to manage baselines and snapshots, and also to display ASH and AWR reports. For example, to generate an
AWR report in text format, run the DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT procedure. However, you can't reach
these AWR reports from the AWS Management Console.
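
The rdsadmin procedures described next identify AWR snapshots by snapshot ID. To see which snapshot IDs are
available on your instance, you can query the standard AWR dictionary view, for example:

SELECT SNAP_ID, BEGIN_INTERVAL_TIME, END_INTERVAL_TIME
  FROM DBA_HIST_SNAPSHOT
 ORDER BY SNAP_ID;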

When working with AWR, we recommend using the rdsadmin.rdsadmin_diagnostic_util procedures. You can use these
procedures to generate the following:

• AWR reports
• Active Session History (ASH) reports
• Automatic Database Diagnostic Monitor (ADDM) reports
• Oracle Data Pump Export dump files of AWR data


The rdsadmin_diagnostic_util procedures save the reports to the DB instance file system. You can access these
reports from the console. You can also access reports using the rdsadmin.rds_file_util procedures, and you can
access reports that are copied to Amazon S3 using the S3 Integration option. For more information, see Reading
files in a DB instance directory (p. 1927) and Amazon S3 integration (p. 1992).

You can use the rdsadmin_diagnostic_util procedures in the following Amazon RDS for Oracle DB
engine versions:

• All Oracle Database 21c versions
• 19.0.0.0.ru-2020-04.rur-2020-04.r1 and higher Oracle Database 19c versions
• 12.2.0.1.ru-2020-04.rur-2020-04.r1 and higher Oracle Database 12c Release 2 (12.2) versions
• 12.1.0.2.v20 and higher Oracle Database 12c Release 1 (12.1) versions

For a blog that explains how to work with diagnostic reports in a replication scenario, see Generate AWR
reports for Amazon RDS for Oracle read replicas.

Common parameters for the diagnostic utility package


You typically use the following parameters when managing AWR and ADDM with the
rdsadmin_diagnostic_util package.

Parameter        Data type   Default   Required   Description

begin_snap_id    NUMBER      —         Yes        The ID of the beginning snapshot.

end_snap_id      NUMBER      —         Yes        The ID of the ending snapshot.

dump_directory   VARCHAR2    BDUMP     No         The directory to write the report or export file to. If you
                                                  specify a nondefault directory, the user that runs the
                                                  rdsadmin_diagnostic_util procedures must have write permissions
                                                  for the directory.

p_tag            VARCHAR2    —         No         A string that can be used to distinguish between backups to
                                                  indicate the purpose or usage of backups, such as incremental or
                                                  daily.

                                                  You can specify up to 30 characters. Valid characters are a-z,
                                                  A-Z, 0-9, an underscore (_), a dash (-), and a period (.). The
                                                  tag is not case-sensitive. RMAN always stores tags in uppercase,
                                                  regardless of the case used when entering them.

                                                  Tags don't need to be unique, so multiple backups can have the
                                                  same tag. If you don't specify a tag, RMAN assigns a default tag
                                                  automatically using the format TAGYYYYMMDDTHHMMSS, where YYYY is
                                                  the year, MM is the month, DD is the day, HH is the hour (in
                                                  24-hour format), MM is the minutes, and SS is the seconds. The
                                                  date and time indicate when RMAN started the backup. For example,
                                                  a backup with the default tag TAG20190927T214517 indicates a
                                                  backup that started on 2019-09-27 at 21:45:17.

                                                  The p_tag parameter is supported for the following RDS for Oracle
                                                  DB engine versions:

                                                  • Oracle Database 21c (21.0.0)
                                                  • Oracle Database 19c (19.0.0), using
                                                    19.0.0.0.ru-2021-10.rur-2021-10.r1 and higher
                                                  • Oracle Database 12c Release 2 (12.2), using
                                                    12.2.0.1.ru-2021-10.rur-2021-10.r1 and higher
                                                  • Oracle Database 12c Release 1 (12.1), using 12.1.0.2.V26 and
                                                    higher

report_type      VARCHAR2    HTML      No         The format of the report. Valid values are TEXT and HTML.

dbid             NUMBER      —         No         A valid database identifier (DBID) shown in the
                                                  DBA_HIST_DATABASE_INSTANCE view for Oracle. If this parameter is
                                                  not specified, RDS uses the current DBID, which is shown in the
                                                  V$DATABASE.DBID view.

You typically use the following parameters when managing ASH with the rdsadmin_diagnostic_util
package.

Parameter      Data type   Default   Required   Description

begin_time     DATE        —         Yes        The beginning time of the ASH analysis.

end_time       DATE        —         Yes        The ending time of the ASH analysis.

slot_width     NUMBER      0         No         The duration of the slots (in seconds) used in the "Top Activity"
                                                section of the ASH report. If this parameter isn't specified, the
                                                time interval between begin_time and end_time uses no more than 10
                                                slots.

sid            NUMBER      Null      No         The session ID.

sql_id         VARCHAR2    Null      No         The SQL ID.

wait_class     VARCHAR2    Null      No         The wait class name.

service_hash   NUMBER      Null      No         The service name hash.

module_name    VARCHAR2    Null      No         The module name.

action_name    VARCHAR2    Null      No         The action name.

client_id      VARCHAR2    Null      No         The application-specific ID of the database session.

plsql_entry    VARCHAR2    Null      No         The PL/SQL entry point.

Generating an AWR report


To generate an AWR report, use the rdsadmin.rdsadmin_diagnostic_util.awr_report
procedure.

The following example generates an AWR report for the snapshot range 101–106. The output text file is
named awrrpt_101_106.txt. You can access this report from the AWS Management Console.

EXEC rdsadmin.rdsadmin_diagnostic_util.awr_report(101,106,'TEXT');


The following example generates an HTML report for the snapshot range 63–65. The output HTML file
is named awrrpt_63_65.html. The procedure writes the report to the nondefault database directory
named AWR_RPT_DUMP.

EXEC rdsadmin.rdsadmin_diagnostic_util.awr_report(63,65,'HTML','AWR_RPT_DUMP');

Extracting AWR data into a dump file


To extract AWR data into a dump file, use the
rdsadmin.rdsadmin_diagnostic_util.awr_extract procedure.

The following example extracts the snapshot range 101–106. The output dump file is named
awrextract_101_106.dmp. You can access this file through the console.

EXEC rdsadmin.rdsadmin_diagnostic_util.awr_extract(101,106);

The following example extracts the snapshot range 63–65. The output dump file is named
awrextract_63_65.dmp. The file is stored in the nondefault database directory named
AWR_RPT_DUMP.

EXEC rdsadmin.rdsadmin_diagnostic_util.awr_extract(63,65,'AWR_RPT_DUMP');

Generating an ADDM report


To generate an ADDM report, use the rdsadmin.rdsadmin_diagnostic_util.addm_report
procedure.

The following example generates an ADDM report for the snapshot range 101–106. The output text file
is named addmrpt_101_106.txt. You can access the report through the console.

EXEC rdsadmin.rdsadmin_diagnostic_util.addm_report(101,106);

The following example generates an ADDM report for the snapshot range 63–65. The output text
file is named addmrpt_63_65.txt. The file is stored in the nondefault database directory named
ADDM_RPT_DUMP.

EXEC rdsadmin.rdsadmin_diagnostic_util.addm_report(63,65,'ADDM_RPT_DUMP');

Generating an ASH report


To generate an ASH report, use the rdsadmin.rdsadmin_diagnostic_util.ash_report
procedure.

The following example generates an ASH report that includes the data from 14 minutes ago until the current time.
The name of the output file uses the format ashrpt_begin_time_end_time.txt, where begin_time and end_time use the
format YYYYMMDDHH24MISS. You can access the file through the console.

BEGIN
  rdsadmin.rdsadmin_diagnostic_util.ash_report(
    begin_time  => SYSDATE-14/1440,
    end_time    => SYSDATE,
    report_type => 'TEXT');
END;
/

The following example generates an ASH report that includes the data from September 18, 2019, at 6:07 PM through
September 18, 2019, at 6:15 PM. The name of the output HTML report is
ashrpt_20190918180700_20190918181500.html. The report is stored in the nondefault database directory named
AWR_RPT_DUMP.

BEGIN
  rdsadmin.rdsadmin_diagnostic_util.ash_report(
    begin_time     => TO_DATE('2019-09-18 18:07:00', 'YYYY-MM-DD HH24:MI:SS'),
    end_time       => TO_DATE('2019-09-18 18:15:00', 'YYYY-MM-DD HH24:MI:SS'),
    report_type    => 'html',
    dump_directory => 'AWR_RPT_DUMP');
END;
/

Accessing AWR reports from the console or CLI


To access AWR reports or export dump files, you can use the AWS Management Console or AWS CLI. For
more information, see Downloading a database log file (p. 896).

Adjusting database links for use with DB instances in a VPC


To use Oracle database links with Amazon RDS DB instances inside the same virtual private cloud (VPC)
or peered VPCs, the two DB instances should have a valid route between them. Verify the valid route
between the DB instances by using your VPC routing tables and network access control list (ACL).

The security group of each DB instance must allow ingress to and egress from the other DB instance. The
inbound and outbound rules can refer to security groups from the same VPC or a peered VPC. For more
information, see Updating your security groups to reference peered VPC security groups.

If you have configured a custom DNS server using the DHCP Option Sets in your VPC, your custom DNS
server must be able to resolve the name of the database link target. For more information, see Setting
up a custom DNS server (p. 1865).

For more information about using database links with Oracle Data Pump, see Importing using Oracle
Data Pump (p. 1948).
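
After the networking and DNS requirements are met, you create the database link itself with standard Oracle SQL.
The following sketch uses a hypothetical endpoint, user, password, and service name; substitute the values for
your target DB instance.

CREATE DATABASE LINK remote_rds_link
  CONNECT TO remote_user IDENTIFIED BY remote_password
  USING '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=target-instance.example.us-east-1.rds.amazonaws.com)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ORCL)))';

-- Verify that the link resolves and connects.
SELECT * FROM DUAL@remote_rds_link;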

Setting the default edition for a DB instance


You can redefine database objects in a private environment called an edition. You can use edition-based
redefinition to upgrade an application's database objects with minimal downtime.

You can set the default edition of an Amazon RDS Oracle DB instance using the Amazon RDS procedure
rdsadmin.rdsadmin_util.alter_default_edition.

The following example sets the default edition for the Amazon RDS Oracle DB instance to RELEASE_V1.

EXEC rdsadmin.rdsadmin_util.alter_default_edition('RELEASE_V1');

The following example sets the default edition for the Amazon RDS Oracle DB instance back to the
Oracle default.

EXEC rdsadmin.rdsadmin_util.alter_default_edition('ORA$BASE');


For more information about Oracle edition-based redefinition, see About editions and edition-based
redefinition in the Oracle documentation.

Enabling auditing for the SYS.AUD$ table


To enable auditing on the database audit trail table SYS.AUD$, use the Amazon RDS procedure
rdsadmin.rdsadmin_master_util.audit_all_sys_aud_table. The only supported audit
property is ALL. You can't audit or not audit individual statements or operations.

Enabling auditing is supported for Oracle DB instances running the following versions:

• Oracle Database 21c (21.0.0)
• Oracle Database 19c (19.0.0)
• Oracle Database 12c Release 2 (12.2)
• Oracle Database 12c Release 1 (12.1.0.2.v14) and later

The audit_all_sys_aud_table procedure has the following parameters.

Parameter name   Data type   Default   Required   Description

p_by_access      boolean     true      No         Set to true to audit BY ACCESS. Set to false to audit BY SESSION.

Note
In a single-tenant CDB, the following operations work, but no customer-visible mechanism can
detect the current status of the operations. Auditing information isn't available from within the
PDB. For more information, see Limitations of a single-tenant CDB (p. 1805).

The following query returns the current audit configuration for SYS.AUD$ for a database.

SELECT * FROM DBA_OBJ_AUDIT_OPTS WHERE OWNER='SYS' AND OBJECT_NAME='AUD$';

The following commands enable audit of ALL on SYS.AUD$ BY ACCESS.

EXEC rdsadmin.rdsadmin_master_util.audit_all_sys_aud_table;

EXEC rdsadmin.rdsadmin_master_util.audit_all_sys_aud_table(p_by_access => true);

The following command enables audit of ALL on SYS.AUD$ BY SESSION.

EXEC rdsadmin.rdsadmin_master_util.audit_all_sys_aud_table(p_by_access => false);

For more information, see AUDIT (traditional auditing) in the Oracle documentation.

Disabling auditing for the SYS.AUD$ table


To disable auditing on the database audit trail table SYS.AUD$, use the Amazon RDS procedure
rdsadmin.rdsadmin_master_util.noaudit_all_sys_aud_table. This procedure takes no
parameters.

The following query returns the current audit configuration for SYS.AUD$ for a database:


SELECT * FROM DBA_OBJ_AUDIT_OPTS WHERE OWNER='SYS' AND OBJECT_NAME='AUD$';

The following command disables audit of ALL on SYS.AUD$.

EXEC rdsadmin.rdsadmin_master_util.noaudit_all_sys_aud_table;

For more information, see NOAUDIT (traditional auditing) in the Oracle documentation.

Cleaning up interrupted online index builds


To clean up failed online index builds, use the Amazon RDS procedure
rdsadmin.rdsadmin_dbms_repair.online_index_clean.

The online_index_clean procedure has the following parameters.

Parameter name   Data type        Default                                    Required   Description

object_id        binary_integer   ALL_INDEX_ID                               No         The object ID of the index.
                                                                                        Typically, you can use the
                                                                                        object ID from the ORA-08104
                                                                                        error text.

wait_for_lock    binary_integer   rdsadmin.rdsadmin_dbms_repair.lock_wait    No         Specify
                                                                                        rdsadmin.rdsadmin_dbms_repair.lock_wait,
                                                                                        the default, to try to get a
                                                                                        lock on the underlying object
                                                                                        and retry until an internal
                                                                                        limit is reached if the lock
                                                                                        fails.

                                                                                        Specify
                                                                                        rdsadmin.rdsadmin_dbms_repair.lock_nowait
                                                                                        to try to get a lock on the
                                                                                        underlying object but not
                                                                                        retry if the lock fails.

The following example cleans up a failed online index build:

declare
is_clean boolean;
begin
is_clean := rdsadmin.rdsadmin_dbms_repair.online_index_clean(
object_id => 1234567890,
wait_for_lock => rdsadmin.rdsadmin_dbms_repair.lock_nowait
);
end;
/

For more information, see ONLINE_INDEX_CLEAN function in the Oracle documentation.

Skipping corrupt blocks


To skip corrupt blocks during index and table scans, use the rdsadmin.rdsadmin_dbms_repair
package.


The following procedures wrap the functionality of the sys.dbms_repair.admin_table procedure and take no
parameters:

• rdsadmin.rdsadmin_dbms_repair.create_repair_table
• rdsadmin.rdsadmin_dbms_repair.create_orphan_keys_table
• rdsadmin.rdsadmin_dbms_repair.drop_repair_table
• rdsadmin.rdsadmin_dbms_repair.drop_orphan_keys_table
• rdsadmin.rdsadmin_dbms_repair.purge_repair_table
• rdsadmin.rdsadmin_dbms_repair.purge_orphan_keys_table

The following procedures take the same parameters as their counterparts in the DBMS_REPAIR package
for Oracle databases:

• rdsadmin.rdsadmin_dbms_repair.check_object
• rdsadmin.rdsadmin_dbms_repair.dump_orphan_keys
• rdsadmin.rdsadmin_dbms_repair.fix_corrupt_blocks
• rdsadmin.rdsadmin_dbms_repair.rebuild_freelists
• rdsadmin.rdsadmin_dbms_repair.segment_fix_status
• rdsadmin.rdsadmin_dbms_repair.skip_corrupt_blocks

For more information about handling database corruption, see DBMS_REPAIR in the Oracle
documentation.

Example Responding to corrupt blocks

This example shows the basic workflow for responding to corrupt blocks. Your steps will depend on the
location and nature of your block corruption.
Important
Before attempting to repair corrupt blocks, review the DBMS_REPAIR documentation carefully.

To skip corrupt blocks during index and table scans

1. Run the following procedures to create repair tables if they don't already exist.

EXEC rdsadmin.rdsadmin_dbms_repair.create_repair_table;
EXEC rdsadmin.rdsadmin_dbms_repair.create_orphan_keys_table;

2. Run the following procedures to check for existing records and purge them if appropriate.

SELECT COUNT(*) FROM SYS.REPAIR_TABLE;
SELECT COUNT(*) FROM SYS.ORPHAN_KEY_TABLE;
SELECT COUNT(*) FROM SYS.DBA_REPAIR_TABLE;
SELECT COUNT(*) FROM SYS.DBA_ORPHAN_KEY_TABLE;

EXEC rdsadmin.rdsadmin_dbms_repair.purge_repair_table;
EXEC rdsadmin.rdsadmin_dbms_repair.purge_orphan_keys_table;

3. Run the following procedure to check for corrupt blocks.

SET SERVEROUTPUT ON
DECLARE v_num_corrupt INT;
BEGIN
  v_num_corrupt := 0;
  rdsadmin.rdsadmin_dbms_repair.check_object (
    schema_name   => '&corruptionOwner',
    object_name   => '&corruptionTable',
    corrupt_count => v_num_corrupt
  );
  dbms_output.put_line('number corrupt: '||to_char(v_num_corrupt));
END;
/

COL CORRUPT_DESCRIPTION FORMAT a30
COL REPAIR_DESCRIPTION FORMAT a30

SELECT OBJECT_NAME, BLOCK_ID, CORRUPT_TYPE, MARKED_CORRUPT,
       CORRUPT_DESCRIPTION, REPAIR_DESCRIPTION
  FROM SYS.REPAIR_TABLE;

SELECT SKIP_CORRUPT
  FROM DBA_TABLES
 WHERE OWNER = '&corruptionOwner'
   AND TABLE_NAME = '&corruptionTable';

4. Use the skip_corrupt_blocks procedure to enable or disable corruption skipping for affected
tables. Depending on the situation, you may also need to extract data to a new table, and then drop
the table containing the corrupt block.

Run the following procedure to enable corruption skipping for affected tables.

begin
rdsadmin.rdsadmin_dbms_repair.skip_corrupt_blocks (
schema_name => '&corruptionOwner',
object_name => '&corruptionTable',
object_type => rdsadmin.rdsadmin_dbms_repair.table_object,
flags => rdsadmin.rdsadmin_dbms_repair.skip_flag);
end;
/
select skip_corrupt from dba_tables where owner = '&corruptionOwner' and table_name =
'&corruptionTable';

Run the following procedure to disable corruption skipping.

begin
rdsadmin.rdsadmin_dbms_repair.skip_corrupt_blocks (
schema_name => '&corruptionOwner',
object_name => '&corruptionTable',
object_type => rdsadmin.rdsadmin_dbms_repair.table_object,
flags => rdsadmin.rdsadmin_dbms_repair.noskip_flag);
end;
/

select skip_corrupt from dba_tables where owner = '&corruptionOwner' and table_name = '&corruptionTable';

5. When you have completed all repair work, run the following procedures to drop the repair tables.

EXEC rdsadmin.rdsadmin_dbms_repair.drop_repair_table;
EXEC rdsadmin.rdsadmin_dbms_repair.drop_orphan_keys_table;

Resizing tablespaces, data files, and temp files


By default, Oracle tablespaces are created with auto-extend turned on and no maximum size. Because
of these default settings, tablespaces can sometimes grow too large. We recommend that you specify


an appropriate maximum size on permanent and temporary tablespaces, and that you carefully monitor
space usage.

Resizing permanent tablespaces


To resize a permanent tablespace in an RDS for Oracle DB instance, use any of the following Amazon RDS
procedures:

• rdsadmin.rdsadmin_util.resize_datafile
• rdsadmin.rdsadmin_util.autoextend_datafile

The resize_datafile procedure has the following parameters.

Parameter name   Data type   Default   Required   Description

p_data_file_id   number      —         Yes        The identifier of the data file to resize.

p_size           varchar2    —         Yes        The size of the data file. Specify the size in bytes (the
                                                  default), kilobytes (K), megabytes (M), or gigabytes (G).

The autoextend_datafile procedure has the following parameters.

Parameter name       Data type   Default   Required   Description

p_data_file_id       number      —         Yes        The identifier of the data file to resize.

p_autoextend_state   varchar2    —         Yes        The state of the autoextension feature. Specify ON to extend
                                                      the data file automatically and OFF to turn off
                                                      autoextension.

p_next               varchar2    —         No         The size of the next data file increment. Specify the size in
                                                      bytes (the default), kilobytes (K), megabytes (M), or
                                                      gigabytes (G).

p_maxsize            varchar2    —         No         The maximum disk space allowed for automatic extension.
                                                      Specify the size in bytes (the default), kilobytes (K),
                                                      megabytes (M), or gigabytes (G). You can specify UNLIMITED to
                                                      remove the file size limit.

The following example resizes data file 4 to 500 MB.


EXEC rdsadmin.rdsadmin_util.resize_datafile(4,'500M');

The following example turns off autoextension for data file 4. It also turns on autoextension for data file
5, with an increment of 128 MB and no maximum size.

EXEC rdsadmin.rdsadmin_util.autoextend_datafile(4,'OFF');
EXEC rdsadmin.rdsadmin_util.autoextend_datafile(5,'ON','128M','UNLIMITED');

Resizing temporary tablespaces


To resize a temporary tablespace in an RDS for Oracle DB instance, including a read replica, use any of the
following Amazon RDS procedures:

• rdsadmin.rdsadmin_util.resize_temp_tablespace
• rdsadmin.rdsadmin_util.resize_tempfile
• rdsadmin.rdsadmin_util.autoextend_tempfile

The resize_temp_tablespace procedure has the following parameters.

Parameter name           Data type   Default   Required   Description

p_temp_tablespace_name   varchar2    —         Yes        The name of the temporary tablespace to resize.

p_size                   varchar2    —         Yes        The size of the tablespace. Specify the size in bytes
                                                          (the default), kilobytes (K), megabytes (M), or
                                                          gigabytes (G).

The resize_tempfile procedure has the following parameters.

Parameter name   Data type   Default   Required   Description

p_temp_file_id   number      —         Yes        The identifier of the temp file to resize.

p_size           varchar2    —         Yes        The size of the temp file. Specify the size in bytes (the
                                                  default), kilobytes (K), megabytes (M), or gigabytes (G).

The autoextend_tempfile procedure has the following parameters.

Parameter name       Data type   Default   Required   Description

p_temp_file_id       number      —         Yes        The identifier of the temp file to resize.

p_autoextend_state   varchar2    —         Yes        The state of the autoextension feature. Specify ON to extend
                                                      the temp file automatically and OFF to turn off
                                                      autoextension.

p_next               varchar2    —         No         The size of the next temp file increment. Specify the size in
                                                      bytes (the default), kilobytes (K), megabytes (M), or
                                                      gigabytes (G).

p_maxsize            varchar2    —         No         The maximum disk space allowed for automatic extension.
                                                      Specify the size in bytes (the default), kilobytes (K),
                                                      megabytes (M), or gigabytes (G). You can specify UNLIMITED to
                                                      remove the file size limit.

The following examples resize a temporary tablespace named TEMP to the size of 4 GB.

EXEC rdsadmin.rdsadmin_util.resize_temp_tablespace('TEMP','4G');

EXEC rdsadmin.rdsadmin_util.resize_temp_tablespace('TEMP','4096000000');

The following example resizes a temporary tablespace based on the temp file with the file identifier 1 to
the size of 2 MB.

EXEC rdsadmin.rdsadmin_util.resize_tempfile(1,'2M');

The following example turns off autoextension for temp file 1. It also sets the maximum autoextension
size of temp file 2 to 10 GB, with an increment of 100 MB.

EXEC rdsadmin.rdsadmin_util.autoextend_tempfile(1,'OFF');
EXEC rdsadmin.rdsadmin_util.autoextend_tempfile(2,'ON','100M','10G');

For more information about read replicas for Oracle DB instances see Working with read replicas for
Amazon RDS for Oracle (p. 1973).

Purging the recycle bin


When you drop a table, your Oracle database doesn't immediately remove its storage space. The
database renames the table and places it and any associated objects in a recycle bin. Purging the recycle
bin removes these items and releases their storage space.

To purge the entire recycle bin, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.purge_dba_recyclebin. However, this procedure can't purge the
recycle bin of SYS and RDSADMIN objects. If you need to purge these objects, contact AWS Support.

The following example purges the entire recycle bin.


EXEC rdsadmin.rdsadmin_util.purge_dba_recyclebin;
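
Before purging, you might want to review what the recycle bin contains. The following statements are standard
Oracle SQL rather than rdsadmin procedures: the query lists dropped objects database-wide, and PURGE RECYCLEBIN
removes only the objects owned by the connected user.

SELECT OWNER, OBJECT_NAME, ORIGINAL_NAME, TYPE
  FROM DBA_RECYCLEBIN
 ORDER BY OWNER, ORIGINAL_NAME;

PURGE RECYCLEBIN;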

Setting the default displayed values for full redaction


To change the default displayed values for full redaction on your Amazon RDS Oracle instance, use the
Amazon RDS procedure rdsadmin.rdsadmin_util.dbms_redact_upd_full_rdct_val. Note
that you create a redaction policy with the DBMS_REDACT PL/SQL package, as explained in the Oracle
Database documentation. The dbms_redact_upd_full_rdct_val procedure specifies the characters
to display for different data types affected by an existing policy.

The dbms_redact_upd_full_rdct_val procedure has the following parameters.

Parameter name    Data type                  Default   Required   Description

p_number_val      number                     Null      No         Modifies the default value for columns of the NUMBER data type.
p_binfloat_val    binary_float               Null      No         Modifies the default value for columns of the BINARY_FLOAT data type.
p_bindouble_val   binary_double              Null      No         Modifies the default value for columns of the BINARY_DOUBLE data type.
p_char_val        char                       Null      No         Modifies the default value for columns of the CHAR data type.
p_varchar_val     varchar2                   Null      No         Modifies the default value for columns of the VARCHAR2 data type.
p_nchar_val       nchar                      Null      No         Modifies the default value for columns of the NCHAR data type.
p_nvarchar_val    nvarchar2                  Null      No         Modifies the default value for columns of the NVARCHAR2 data type.
p_date_val        date                       Null      No         Modifies the default value for columns of the DATE data type.
p_ts_val          timestamp                  Null      No         Modifies the default value for columns of the TIMESTAMP data type.
p_tswtz_val       timestamp with time zone   Null      No         Modifies the default value for columns of the TIMESTAMP WITH TIME ZONE data type.
p_blob_val        blob                       Null      No         Modifies the default value for columns of the BLOB data type.
p_clob_val        clob                       Null      No         Modifies the default value for columns of the CLOB data type.
p_nclob_val       nclob                      Null      No         Modifies the default value for columns of the NCLOB data type.

The following example changes the default redacted value to * for the CHAR data type:

EXEC rdsadmin.rdsadmin_util.dbms_redact_upd_full_rdct_val(p_char_val => '*');

The following example changes the default redacted values for the NUMBER, DATE, and VARCHAR2 data types:

BEGIN
rdsadmin.rdsadmin_util.dbms_redact_upd_full_rdct_val(
p_number_val=>1,
p_date_val=>to_date('1900-01-01','YYYY-MM-DD'),
p_varchar_val=>'X');
END;
/

After you alter the default values for full redaction with the dbms_redact_upd_full_rdct_val
procedure, reboot your DB instance for the change to take effect. For more information, see Rebooting a
DB instance (p. 436).

Performing common log-related tasks for Oracle DB instances
Following, you can find how to perform certain common DBA tasks related to logging on your Amazon
RDS DB instances running Oracle. To deliver a managed service experience, Amazon RDS doesn't provide
shell access to DB instances, and restricts access to certain system procedures and tables that require
advanced privileges.

For more information, see Oracle database log files (p. 924).

Topics
• Setting force logging (p. 1889)
• Setting supplemental logging (p. 1889)
• Switching online log files (p. 1890)
• Adding online redo logs (p. 1890)
• Dropping online redo logs (p. 1891)
• Resizing online redo logs (p. 1891)
• Retaining archived redo logs (p. 1893)
• Accessing online and archived redo logs (p. 1894)
• Downloading archived redo logs from Amazon S3 (p. 1895)


Setting force logging


In force logging mode, Oracle logs all changes to the database except changes in temporary tablespaces
and temporary segments (NOLOGGING clauses are ignored). For more information, see Specifying FORCE
LOGGING mode in the Oracle documentation.

To set force logging, use the Amazon RDS procedure rdsadmin.rdsadmin_util.force_logging. The force_logging
procedure has the following parameters.

Parameter name   Data type   Default   Required   Description

p_enable         boolean     true      No         Set to true to put the database in force logging mode, false to
                                                  remove the database from force logging mode.

The following example puts the database in force logging mode.

EXEC rdsadmin.rdsadmin_util.force_logging(p_enable => true);

Setting supplemental logging


If you enable supplemental logging, LogMiner has the necessary information to support chained rows
and clustered tables. For more information, see Supplemental logging in the Oracle documentation.

Oracle Database doesn't enable supplemental logging by default. To enable and disable supplemental logging, use
the Amazon RDS procedure rdsadmin.rdsadmin_util.alter_supplemental_logging. For more information about how Amazon
RDS manages the retention of archived redo logs for Oracle DB instances, see Retaining archived redo logs
(p. 1893).

The alter_supplemental_logging procedure has the following parameters.

Parameter name   Data type   Default   Required   Description

p_action         varchar2    —         Yes        'ADD' to add supplemental logging, 'DROP' to drop supplemental
                                                  logging.

p_type           varchar2    null      No         The type of supplemental logging. Valid values are 'ALL',
                                                  'FOREIGN KEY', 'PRIMARY KEY', 'UNIQUE', or 'PROCEDURAL'.

The following example enables supplemental logging.

begin
  rdsadmin.rdsadmin_util.alter_supplemental_logging(
    p_action => 'ADD');
end;
/

The following example enables supplemental logging for all columns.

begin
rdsadmin.rdsadmin_util.alter_supplemental_logging(
p_action => 'ADD',
p_type => 'ALL');
end;
/

The following example enables supplemental logging for primary key columns.

begin
rdsadmin.rdsadmin_util.alter_supplemental_logging(
p_action => 'ADD',
p_type => 'PRIMARY KEY');
end;
/

Switching online log files


To switch log files, use the Amazon RDS procedure rdsadmin.rdsadmin_util.switch_logfile. The
switch_logfile procedure has no parameters.

The following example switches log files.

EXEC rdsadmin.rdsadmin_util.switch_logfile;

Adding online redo logs


An Amazon RDS DB instance running Oracle starts with four online redo logs, 128 MB each. To add
additional redo logs, use the Amazon RDS procedure rdsadmin.rdsadmin_util.add_logfile.

The add_logfile procedure has the following parameters.


Note
The parameters are mutually exclusive.

Parameter name   Data type   Default   Required   Description

bytes            positive    null      No         The size of the log file in bytes.

p_size           varchar2    —         Yes        The size of the log file. You can specify the size in kilobytes
                                                  (K), megabytes (M), or gigabytes (G).

The following command adds a 100 MB log file.

EXEC rdsadmin.rdsadmin_util.add_logfile(p_size => '100M');


Dropping online redo logs


To drop redo logs, use the Amazon RDS procedure rdsadmin.rdsadmin_util.drop_logfile. The
drop_logfile procedure has the following parameters.

Parameter name   Data type   Default   Required   Description

grp              positive    —         Yes        The group number of the log.

The following example drops the log with group number 3.

EXEC rdsadmin.rdsadmin_util.drop_logfile(grp => 3);

You can only drop logs that have a status of unused or inactive. The following example gets the statuses
of the logs.

SELECT GROUP#, STATUS FROM V$LOG;

GROUP# STATUS
---------- ----------------
1 CURRENT
2 INACTIVE
3 INACTIVE
4 UNUSED

Resizing online redo logs


An Amazon RDS DB instance running Oracle starts with four online redo logs, 128 MB each. The
following example shows how you can use Amazon RDS procedures to resize your logs from 128 MB each
to 512 MB each.

/* Query V$LOG to see the logs. */


/* You start with 4 logs of 128 MB each. */

SELECT GROUP#, BYTES, STATUS FROM V$LOG;

GROUP# BYTES STATUS


---------- ---------- ----------------
1 134217728 INACTIVE
2 134217728 CURRENT
3 134217728 INACTIVE
4 134217728 INACTIVE

/* Add four new logs that are each 512 MB */

EXEC rdsadmin.rdsadmin_util.add_logfile(bytes => 536870912);


EXEC rdsadmin.rdsadmin_util.add_logfile(bytes => 536870912);
EXEC rdsadmin.rdsadmin_util.add_logfile(bytes => 536870912);
EXEC rdsadmin.rdsadmin_util.add_logfile(bytes => 536870912);

/* Query V$LOG to see the logs. */


/* Now there are 8 logs. */


SELECT GROUP#, BYTES, STATUS FROM V$LOG;

GROUP# BYTES STATUS


---------- ---------- ----------------
1 134217728 INACTIVE
2 134217728 CURRENT
3 134217728 INACTIVE
4 134217728 INACTIVE
5 536870912 UNUSED
6 536870912 UNUSED
7 536870912 UNUSED
8 536870912 UNUSED

/* Drop each inactive log using the group number. */

EXEC rdsadmin.rdsadmin_util.drop_logfile(grp => 1);


EXEC rdsadmin.rdsadmin_util.drop_logfile(grp => 3);
EXEC rdsadmin.rdsadmin_util.drop_logfile(grp => 4);

/* Query V$LOG to see the logs. */


/* Now there are 5 logs. */

select GROUP#, BYTES, STATUS from V$LOG;

GROUP# BYTES STATUS


---------- ---------- ----------------
2 134217728 CURRENT
5 536870912 UNUSED
6 536870912 UNUSED
7 536870912 UNUSED
8 536870912 UNUSED

/* Switch logs so that group 2 is no longer current. */

EXEC rdsadmin.rdsadmin_util.switch_logfile;

/* Query V$LOG to see the logs. */


/* Now one of the new logs is current. */

SELECT GROUP#, BYTES, STATUS FROM V$LOG;

GROUP# BYTES STATUS


---------- ---------- ----------------
2 134217728 ACTIVE
5 536870912 CURRENT
6 536870912 UNUSED
7 536870912 UNUSED
8 536870912 UNUSED

/* If the status of log 2 is still "ACTIVE", issue a checkpoint to clear it to "INACTIVE". */

EXEC rdsadmin.rdsadmin_util.checkpoint;

/* Query V$LOG to see the logs. */


/* Now the final original log is inactive. */

select GROUP#, BYTES, STATUS from V$LOG;

GROUP# BYTES STATUS


---------- ---------- ----------------


2 134217728 INACTIVE
5 536870912 CURRENT
6 536870912 UNUSED
7 536870912 UNUSED
8 536870912 UNUSED

/* Drop the final inactive log. */

EXEC rdsadmin.rdsadmin_util.drop_logfile(grp => 2);

/* Query V$LOG to see the logs. */


/* Now there are four 512 MB logs. */

SELECT GROUP#, BYTES, STATUS FROM V$LOG;

GROUP# BYTES STATUS


---------- ---------- ----------------
5 536870912 CURRENT
6 536870912 UNUSED
7 536870912 UNUSED
8 536870912 UNUSED

Retaining archived redo logs


You can retain archived redo logs locally on your DB instance for use with products like Oracle LogMiner
(DBMS_LOGMNR). After you have retained the redo logs, you can use LogMiner to analyze the logs. For
more information, see Using LogMiner to analyze redo log files in the Oracle documentation.

To retain archived redo logs, use the Amazon RDS procedure rdsadmin.rdsadmin_util.set_configuration. The
set_configuration procedure has the following parameters.

Parameter name   Data type   Default   Required   Description

name             varchar     —         Yes        The name of the configuration to update.

value            varchar     —         Yes        The value for the configuration.

The following example retains 24 hours of redo logs.

begin
rdsadmin.rdsadmin_util.set_configuration(
name => 'archivelog retention hours',
value => '24');
end;
/
commit;

Note
The commit is required for the change to take effect.

To view how long archived redo logs are kept for your DB instance, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.show_configuration.


The following example shows the log retention time.

set serveroutput on
EXEC rdsadmin.rdsadmin_util.show_configuration;

The output shows the current setting for archivelog retention hours. The following output shows
that archived redo logs are kept for 48 hours.

NAME:archivelog retention hours
VALUE:48
DESCRIPTION:ArchiveLog expiration specifies the duration in hours before archive/redo log
files are automatically deleted.

Because the archived redo logs are retained on your DB instance, ensure that your DB instance has
enough allocated storage for the retained logs. To determine how much space your DB instance has used
in the last X hours, you can run the following query, replacing X with the number of hours.

SELECT SUM(BLOCKS * BLOCK_SIZE) bytes
  FROM V$ARCHIVED_LOG
 WHERE FIRST_TIME >= SYSDATE-(X/24) AND DEST_ID=1;

RDS for Oracle only generates archived redo logs when the backup retention period of your DB instance
is greater than zero. By default the backup retention period is greater than zero.

When the archived log retention period expires, RDS for Oracle removes the archived redo logs from your
DB instance. To support restoring your DB instance to a point in time, Amazon RDS retains the archived
redo logs outside of your DB instance based on the backup retention period. To modify the backup
retention period, see Modifying an Amazon RDS DB instance (p. 401).
Note
In some cases, you might be using JDBC on Linux to download archived redo logs and
experience long latency times and connection resets. In such cases, the issues might be caused
by the default random number generator setting on your Java client. We recommend setting
your JDBC drivers to use a nonblocking random number generator.

Accessing online and archived redo logs


You might want to access your online and archived redo log files for mining with external tools such as
GoldenGate, Attunity, Informatica, and others. To access these files, do the following:

1. Create directory objects that provide read-only access to the physical file paths.

   Use rdsadmin.rdsadmin_master_util.create_archivelog_dir and
   rdsadmin.rdsadmin_master_util.create_onlinelog_dir.
2. Read the files using PL/SQL.

   You can read the files by using PL/SQL. For more information about reading files from directory objects, see
   Listing files in a DB instance directory (p. 1927) and Reading files in a DB instance directory (p. 1927).

Accessing transaction logs is supported for the following releases:

• Oracle Database 21c
• Oracle Database 19c
• Oracle Database 12c Release 2 (12.2.0.1)
• Oracle Database 12c Release 1 (12.1)

The following code creates directories that provide read-only access to your online and archived redo log
files:
Important
This code also revokes the DROP ANY DIRECTORY privilege.

EXEC rdsadmin.rdsadmin_master_util.create_archivelog_dir;
EXEC rdsadmin.rdsadmin_master_util.create_onlinelog_dir;

The following code drops the directories for your online and archived redo log files.

EXEC rdsadmin.rdsadmin_master_util.drop_archivelog_dir;
EXEC rdsadmin.rdsadmin_master_util.drop_onlinelog_dir;

The following code grants and revokes the DROP ANY DIRECTORY privilege.

EXEC rdsadmin.rdsadmin_master_util.revoke_drop_any_directory;
EXEC rdsadmin.rdsadmin_master_util.grant_drop_any_directory;

Downloading archived redo logs from Amazon S3


You can download archived redo logs on your DB instance using the
rdsadmin.rdsadmin_archive_log_download package. If archived redo logs are no longer on your
DB instance, you might want to download them again from Amazon S3. Then you can mine the logs or
use them to recover or replicate your database.
Note
You can't download archived redo logs on read replica instances.

Downloading archived redo logs: basic steps


The availability of your archived redo logs depends on the following retention policies:

• Backup retention policy – Logs inside of this policy are available in Amazon S3. Logs outside of this
policy are removed.
• Archived log retention policy – Logs inside of this policy are available on your DB instance. Logs
outside of this policy are removed.

If logs aren't on your instance but are protected by your backup retention period, use
rdsadmin.rdsadmin_archive_log_download to download them again. RDS for Oracle saves the
logs to the /rdsdbdata/log/arch directory on your DB instance.

To download archived redo logs from Amazon S3

1. Configure your retention period to ensure your downloaded archived redo logs are retained for the
duration you need them. Make sure to COMMIT your change.

RDS retains your downloaded logs according to the archived log retention policy, starting from the
time the logs were downloaded. To learn how to set the retention policy, see Retaining archived redo
logs (p. 1893).
2. Wait up to 5 minutes for the archived log retention policy change to take effect.


3. Download the archived redo logs from Amazon S3 using rdsadmin.rdsadmin_archive_log_download.

For more information, see Downloading a single archived redo log (p. 1896) and Downloading a
series of archived redo logs (p. 1896).
Note
RDS automatically checks the available storage before downloading. If the requested logs
consume a high percentage of space, you receive an alert.
4. Confirm that the logs were downloaded from Amazon S3 successfully.

You can view the status of your download task in a bdump file. The bdump files have the path name
/rdsdbdata/log/trace/dbtask-task-id.log. In the preceding download step, you run a
SELECT statement that returns the task ID in a VARCHAR2 data type. For more information, see
similar examples in Monitoring the status of a file transfer (p. 2007).

Downloading a single archived redo log


To download a single archived redo log to the /rdsdbdata/log/arch directory, use
rdsadmin.rdsadmin_archive_log_download.download_log_with_seqnum. This procedure has
the following parameter.

Parameter name   Data type   Default   Required   Description

seqnum           number      —         Yes        The sequence number of the archived redo log.

The following example downloads the log with sequence number 20.

SELECT rdsadmin.rdsadmin_archive_log_download.download_log_with_seqnum(seqnum => 20) AS TASK_ID
  FROM DUAL;

Downloading a series of archived redo logs


To download a series of archived redo logs to the /rdsdbdata/log/arch directory, use
download_logs_in_seqnum_range. Your download is limited to 300 logs per request. The
download_logs_in_seqnum_range procedure has the following parameters.

Parameter name   Data type   Default   Required   Description

start_seq        number      —         Yes        The starting sequence number for the series.

end_seq          number      —         Yes        The ending sequence number for the series.

The following example downloads the logs from sequence 50 to 100.

SELECT rdsadmin.rdsadmin_archive_log_download.download_logs_in_seqnum_range(start_seq => 50, end_seq => 100)
       AS TASK_ID
  FROM DUAL;

Performing common RMAN tasks for Oracle DB instances
In the following section, you can find how you can perform Oracle Recovery Manager (RMAN) DBA tasks
on your Amazon RDS DB instances running Oracle. To deliver a managed service experience, Amazon
RDS doesn't provide shell access to DB instances. It also restricts access to certain system procedures and
tables that require advanced privileges.

Use the Amazon RDS package rdsadmin.rdsadmin_rman_util to perform RMAN backups of your
Amazon RDS for Oracle database to disk. The rdsadmin.rdsadmin_rman_util package supports full
and incremental database file backups, tablespace backups, and archived redo log backups.

After an RMAN backup has finished, you can copy the backup files off the Amazon RDS for Oracle DB
instance host. You might do this for the purpose of restoring to a non-RDS host or for long-term storage
of backups. For example, you can copy the backup files to an Amazon S3 bucket. For more information,
see Amazon S3 integration (p. 1992).

The backup files for RMAN backups remain on the Amazon RDS DB instance host until you remove them
manually. You can use the UTL_FILE.FREMOVE Oracle procedure to remove files from a directory. For
more information, see FREMOVE procedure in the Oracle Database documentation.

You can't use RMAN to restore RDS for Oracle DB instances. However, you can use RMAN to restore a
backup to an on-premises or Amazon EC2 instance. For more information, see the blog article Restore an
Amazon RDS for Oracle instance to a self-managed instance.
Note
For backing up and restoring to another Amazon RDS for Oracle DB instance, you can continue
to use the Amazon RDS backup and restore features. For more information, see Backing up and
restoring (p. 590).

Topics
• Prerequisites for RMAN backups (p. 1897)
• Common parameters for RMAN procedures (p. 1898)
• Validating DB instance files (p. 1900)
• Enabling and disabling block change tracking (p. 1903)
• Crosschecking archived redo logs (p. 1904)
• Backing up archived redo logs (p. 1905)
• Performing a full database backup (p. 1910)
• Performing an incremental database backup (p. 1911)
• Backing up a tablespace (p. 1912)
• Backing up a control file (p. 1913)

Prerequisites for RMAN backups


Before backing up your database using the rdsadmin.rdsadmin_rman_util package, make sure that
you meet the following prerequisites:

• Make sure that your RDS for Oracle database is in ARCHIVELOG mode. To enable this mode, set the
backup retention period to a non-zero value.


• When backing up archived redo logs, performing a full or incremental backup that includes archived redo logs, or
backing up the database, make sure that archived redo log retention is set to a nonzero value. Archived redo logs
are required to make database files consistent during recovery. For more information, see Retaining archived redo
logs (p. 1893).
• Make sure that your DB instance has sufficient free space to hold the backups. When backing up your database,
you specify an Oracle directory object as a parameter in the procedure call. RMAN places the files in the
specified directory. You can use default directories, such as DATA_PUMP_DIR, or create a new directory. For more
information, see Creating and dropping directories in the main data storage space (p. 1926).

You can monitor the current free space in an RDS for Oracle instance using the CloudWatch metric
FreeStorageSpace. We recommend that your free space exceeds the current size of the database,
though RMAN backs up only formatted blocks and supports compression.
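
You can verify these prerequisites from a SQL client before starting a backup. The following checks use only views
and procedures that are referenced elsewhere in this guide.

-- Confirm that the database is in ARCHIVELOG mode (requires a nonzero backup retention period).
SELECT LOG_MODE FROM V$DATABASE;

-- Confirm that archived redo log retention is set to a nonzero value.
SET SERVEROUTPUT ON
EXEC rdsadmin.rdsadmin_util.show_configuration;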

Common parameters for RMAN procedures


You can use procedures in the Amazon RDS package rdsadmin.rdsadmin_rman_util to perform
tasks with RMAN. Several parameters are common to the procedures in the package. The package has
the following common parameters.

Parameter name           Data type   Valid values                                              Default      Required

p_directory_name         varchar2    A valid database directory name                           —            Yes
    The name of the directory to contain the backup files.

p_label                  varchar2    a-z, A-Z, 0-9, '_', '-', '.'                              —            No
    A unique string that is included in the backup file names.
    Note: The limit is 30 characters.

p_owner                  varchar2    A valid owner of the directory specified in               —            Yes
                                     p_directory_name
    The owner of the directory to contain the backup files.

p_tag                    varchar2    a-z, A-Z, 0-9, '_', '-', '.'                              NULL         No
    A string that can be used to distinguish between backups to indicate the purpose or usage of backups, such as
    daily, weekly, or incremental-level backups.

    The limit is 30 characters. The tag is not case-sensitive. Tags are always stored in uppercase, regardless of
    the case used when entering them.

    Tags don't need to be unique, so multiple backups can have the same tag.

    If you don't specify a tag, then RMAN assigns a default tag automatically using the format TAGYYYYMMDDTHHMMSS,
    where YYYY is the year, MM is the month, DD is the day, HH is the hour (in 24-hour format), MM is the minutes,
    and SS is the seconds. The date and time refer to when RMAN started the backup. For example, a backup might
    receive a tag TAG20190927T214517 for a backup that started on 2019-09-27 at 21:45:17.

    The p_tag parameter is supported for the following Amazon RDS for Oracle DB engine versions:

    • Oracle Database 21c (21.0.0)
    • Oracle Database 19c (19.0.0), using 19.0.0.0.ru-2021-10.rur-2021-10.r1 or higher
    • Oracle Database 12c Release 2 (12.2), using 12.2.0.1.ru-2021-10.rur-2021-10.r1 or higher
    • Oracle Database 12c Release 1 (12.1), using 12.1.0.2.V26 or higher

p_compress               boolean     TRUE, FALSE                                               FALSE        No
    Specify TRUE to enable BASIC backup compression. Specify FALSE to disable BASIC backup compression.

p_include_archive_logs   boolean     TRUE, FALSE                                               FALSE        No
    Specify TRUE to include archived redo logs in the backup. Specify FALSE to exclude archived redo logs from the
    backup.

    If you include archived redo logs in the backup, set retention to one hour or greater using the
    rdsadmin.rdsadmin_util.set_configuration procedure. Also, call the
    rdsadmin.rdsadmin_rman_util.crosscheck_archivelog procedure immediately before running the backup. Otherwise,
    the backup might fail due to missing archived redo log files that have been deleted by Amazon RDS management
    procedures.

p_include_controlfile    boolean     TRUE, FALSE                                               FALSE        No
    Specify TRUE to include the control file in the backup. Specify FALSE to exclude the control file from the
    backup.

p_optimize               boolean     TRUE, FALSE                                               TRUE         No
    Specify TRUE to enable backup optimization, if archived redo logs are included, to reduce backup size. Specify
    FALSE to disable backup optimization.

p_parallel               number      A valid integer between 1 and 254 for Oracle Database     1            No
                                     Enterprise Edition (EE); 1 for other Oracle Database
                                     editions
    Number of channels.

p_rman_to_dbms_output    boolean     TRUE, FALSE                                               FALSE        No
    When TRUE, the RMAN output is sent to the DBMS_OUTPUT package in addition to a file in the BDUMP directory. In
    SQL*Plus, use SET SERVEROUTPUT ON to see the output. When FALSE, the RMAN output is only sent to a file in the
    BDUMP directory.

p_section_size_mb        number      A valid integer                                           NULL         No
    The section size in megabytes (MB). Validates in parallel by dividing each file into the specified section
    size. When NULL, the parameter is ignored.

p_validation_type        varchar2    'PHYSICAL', 'PHYSICAL+LOGICAL'                            'PHYSICAL'   No
    The level of corruption detection. Specify 'PHYSICAL' to check for physical corruption. An example of physical
    corruption is a block with a mismatch in the header and footer. Specify 'PHYSICAL+LOGICAL' to check for logical
    inconsistencies in addition to physical corruption. An example of logical corruption is a corrupt block.

Validating DB instance files


You can use the Amazon RDS package rdsadmin.rdsadmin_rman_util to validate Amazon RDS for
Oracle DB instance files, such as data files, tablespaces, control files, or server parameter files (SPFILEs).

For more information about RMAN validation, see Validating database files and backups and VALIDATE
in the Oracle documentation.

Topics


• Validating a DB instance (p. 1901)
• Validating a tablespace (p. 1901)
• Validating a control file (p. 1902)
• Validating an SPFILE (p. 1902)
• Validating a data file (p. 1902)

Validating a DB instance
To validate all of the relevant files used by an Amazon RDS Oracle DB instance, use the Amazon RDS
procedure rdsadmin.rdsadmin_rman_util.validate_database.

This procedure uses the following common parameters for RMAN tasks:

• p_validation_type
• p_parallel
• p_section_size_mb
• p_rman_to_dbms_output

For more information, see Common parameters for RMAN procedures (p. 1898).

The following example validates the DB instance using the default values for the parameters.

EXEC rdsadmin.rdsadmin_rman_util.validate_database;

The following example validates the DB instance using the specified values for the parameters.

BEGIN
rdsadmin.rdsadmin_rman_util.validate_database(
p_validation_type => 'PHYSICAL+LOGICAL',
p_parallel => 4,
p_section_size_mb => 10,
p_rman_to_dbms_output => FALSE);
END;
/

When the p_rman_to_dbms_output parameter is set to FALSE, the RMAN output is written to a file in
the BDUMP directory.

To view the files in the BDUMP directory, run the following SELECT statement.

SELECT * FROM table(rdsadmin.rds_file_util.listdir('BDUMP')) order by mtime;

To view the contents of a file in the BDUMP directory, run the following SELECT statement.

SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','rds-rman-validate-nnn.txt'));

Replace the file name with the name of the file you want to view.

Validating a tablespace
To validate the files associated with a tablespace, use the Amazon RDS procedure
rdsadmin.rdsadmin_rman_util.validate_tablespace.


This procedure uses the following common parameters for RMAN tasks:

• p_validation_type
• p_parallel
• p_section_size_mb
• p_rman_to_dbms_output

For more information, see Common parameters for RMAN procedures (p. 1898).

This procedure also uses the following additional parameter.

Parameter name      Data type   Valid values              Default   Required   Description

p_tablespace_name   varchar2    A valid tablespace name   —         Yes        The name of the tablespace.
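
For example, a call like the following validates a tablespace named USERS. The tablespace name, parallelism, and
validation level shown here are illustrative values, not required settings.

BEGIN
  rdsadmin.rdsadmin_rman_util.validate_tablespace(
    p_tablespace_name     => 'USERS',
    p_validation_type     => 'PHYSICAL+LOGICAL',
    p_parallel            => 4,
    p_rman_to_dbms_output => FALSE);
END;
/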

Validating a control file


To validate only the control file used by an Amazon RDS Oracle DB instance, use the Amazon RDS
procedure rdsadmin.rdsadmin_rman_util.validate_current_controlfile.

This procedure uses the following common parameters for RMAN tasks:

• p_validation_type
• p_rman_to_dbms_output

For more information, see Common parameters for RMAN procedures (p. 1898).
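
A minimal call that accepts the default values for these parameters looks like the following.

EXEC rdsadmin.rdsadmin_rman_util.validate_current_controlfile;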

Validating an SPFILE
To validate only the server parameter file (SPFILE) used by an Amazon RDS Oracle DB instance, use the
Amazon RDS procedure rdsadmin.rdsadmin_rman_util.validate_spfile.

This procedure uses the following common parameters for RMAN tasks:

• p_validation_type
• p_rman_to_dbms_output

For more information, see Common parameters for RMAN procedures (p. 1898).
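
As with the control file, a minimal call that accepts the defaults looks like the following.

EXEC rdsadmin.rdsadmin_rman_util.validate_spfile;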

Validating a data file


To validate a data file, use the Amazon RDS procedure
rdsadmin.rdsadmin_rman_util.validate_datafile.

This procedure uses the following common parameters for RMAN tasks:

• p_validation_type
• p_parallel
• p_section_size_mb
• p_rman_to_dbms_output

For more information, see Common parameters for RMAN procedures (p. 1898).

This procedure also uses the following additional parameters.

p_datafile (varchar2)
    Valid values: a valid datafile ID number or a valid datafile name including the complete path
    Default: none
    Required: yes
    Description: The datafile ID number (from v$datafile.file#) or the full datafile name including
    the path (from v$datafile.name).

p_from_block (number)
    Valid values: a valid integer
    Default: NULL
    Required: no
    Description: The number of the block where the validation starts within the data file. When this
    is NULL, 1 is used.

p_to_block (number)
    Valid values: a valid integer
    Default: NULL
    Required: no
    Description: The number of the block where the validation ends within the data file. When this
    is NULL, the maximum block in the data file is used.
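
For example, a call like the following validates the data file with file ID 4, checking all blocks.
The file ID 4 is only a placeholder; query v$datafile for the file IDs or names of the data files in
your own DB instance.

BEGIN
    rdsadmin.rdsadmin_rman_util.validate_datafile(
        p_datafile            => '4',
        p_validation_type     => 'PHYSICAL+LOGICAL',
        p_parallel            => 4,
        p_section_size_mb     => 10,
        p_rman_to_dbms_output => FALSE);
END;
/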

Enabling and disabling block change tracking


Block change tracking records changed blocks in a tracking file. This technique can improve the
performance of RMAN incremental backups. For more information, see Using Block Change Tracking to
Improve Incremental Backup Performance in the Oracle Database documentation.

RMAN features aren't supported in a read replica. However, as part of your high availability
strategy, you might choose to enable block change tracking in a read-only replica using the procedure
rdsadmin.rdsadmin_rman_util.enable_block_change_tracking. If you promote this read-only
replica to a source DB instance, block change tracking is enabled for the new source instance. Thus, your
instance can benefit from fast incremental backups.

Block change tracking procedures are supported in Enterprise Edition only for the following DB engine
versions:

• Oracle Database 21c (21.0.0)
• Oracle Database 19c (19.0.0)
• Oracle Database 12c Release 2 (12.2), using 12.2.0.1.ru-2019-01.rur-2019-01.r1 or higher (deprecated)
• Oracle Database 12c Release 1 (12.1), using 12.1.0.2.v15 or higher (deprecated)

Note
In a single-tenant CDB, the following operations work, but no customer-visible mechanism
can detect the current status of the operations. See also Limitations of a single-tenant
CDB (p. 1805).

To enable block change tracking for a DB instance, use the Amazon RDS procedure
rdsadmin.rdsadmin_rman_util.enable_block_change_tracking. To disable block change
tracking, use disable_block_change_tracking. These procedures take no parameters.

To determine whether block change tracking is enabled for your DB instance, run the following query.

SELECT STATUS, FILENAME FROM V$BLOCK_CHANGE_TRACKING;

The following example enables block change tracking for a DB instance.

EXEC rdsadmin.rdsadmin_rman_util.enable_block_change_tracking;

The following example disables block change tracking for a DB instance.

EXEC rdsadmin.rdsadmin_rman_util.disable_block_change_tracking;

Crosschecking archived redo logs


You can crosscheck archived redo logs using the Amazon RDS procedure
rdsadmin.rdsadmin_rman_util.crosscheck_archivelog.

You can use this procedure to crosscheck the archived redo logs registered in the control file and
optionally delete the expired log records. When RMAN makes a backup, it creates a record in the control
file. Over time, these records increase the size of the control file. We recommend that you remove
expired records periodically.
Note
Standard Amazon RDS backups don't use RMAN and therefore don't create records in the
control file.

This procedure uses the common parameter p_rman_to_dbms_output for RMAN tasks.

For more information, see Common parameters for RMAN procedures (p. 1898).

This procedure also uses the following additional parameter.

p_delete_expired (boolean)
    Valid values: TRUE, FALSE
    Default: TRUE
    Required: no
    Description: When TRUE, delete expired archived redo log records from the control file. When
    FALSE, retain the expired archived redo log records in the control file.

This procedure is supported for the following Amazon RDS for Oracle DB engine versions:

• Oracle Database 21c (21.0.0)
• Oracle Database 19c (19.0.0)
• Oracle Database 12c Release 2 (12.2), using 12.2.0.1.ru-2019-01.rur-2019-01.r1 or higher
• Oracle Database 12c Release 1 (12.1), using 12.1.0.2.v15 or higher

The following example marks archived redo log records in the control file as expired, but does not delete
the records.

BEGIN
rdsadmin.rdsadmin_rman_util.crosscheck_archivelog(
p_delete_expired => FALSE,
p_rman_to_dbms_output => FALSE);
END;
/

The following example deletes expired archived redo log records from the control file.

BEGIN
rdsadmin.rdsadmin_rman_util.crosscheck_archivelog(
p_delete_expired => TRUE,
p_rman_to_dbms_output => FALSE);
END;
/

Backing up archived redo logs


You can use the Amazon RDS package rdsadmin.rdsadmin_rman_util to back up archived redo logs
for an Amazon RDS Oracle DB instance.

The procedures for backing up archived redo logs are supported for the following Amazon RDS for
Oracle DB engine versions:

• Oracle Database 21c (21.0.0)


• Oracle Database 19c (19.0.0)
• Oracle Database 12c Release 2 (12.2), using 12.2.0.1.ru-2019-01.rur-2019-01.r1 or higher
• Oracle Database 12c Release 1 (12.1), using 12.1.0.2.v15 or higher

Topics
• Backing up all archived redo logs (p. 1905)
• Backing up an archived redo log from a date range (p. 1906)
• Backing up an archived redo log from an SCN range (p. 1907)
• Backing up an archived redo log from a sequence number range (p. 1909)

Backing up all archived redo logs


To back up all of the archived redo logs for an Amazon RDS Oracle DB instance, use the Amazon RDS
procedure rdsadmin.rdsadmin_rman_util.backup_archivelog_all.

This procedure uses the following common parameters for RMAN tasks:

• p_owner
• p_directory_name
• p_label
• p_parallel
• p_compress
• p_rman_to_dbms_output
• p_tag

For more information, see Common parameters for RMAN procedures (p. 1898).

The following example backs up all archived redo logs for the DB instance.

BEGIN
rdsadmin.rdsadmin_rman_util.backup_archivelog_all(
p_owner => 'SYS',
p_directory_name => 'MYDIRECTORY',
p_parallel => 4,
p_tag => 'MY_LOG_BACKUP',
p_rman_to_dbms_output => FALSE);
END;
/

Backing up an archived redo log from a date range


To back up specific archived redo logs for an Amazon RDS Oracle DB instance by specifying a date range,
use the Amazon RDS procedure rdsadmin.rdsadmin_rman_util.backup_archivelog_date. The
date range specifies which archived redo logs to back up.

This procedure uses the following common parameters for RMAN tasks:

• p_owner
• p_directory_name
• p_label
• p_parallel
• p_compress
• p_rman_to_dbms_output
• p_tag

For more information, see Common parameters for RMAN procedures (p. 1898).

This procedure also uses the following additional parameters.

p_from_date (date)
    Valid values: a date that is between the start_date and next_date of an archived redo log that
    exists on disk. The value must be less than or equal to the value specified for p_to_date.
    Default: none
    Required: yes
    Description: The starting date for the archived log backups.

p_to_date (date)
    Valid values: a date that is between the start_date and next_date of an archived redo log that
    exists on disk. The value must be greater than or equal to the value specified for p_from_date.
    Default: none
    Required: yes
    Description: The ending date for the archived log backups.

The following example backs up archived redo logs in the date range for the DB instance.

BEGIN
rdsadmin.rdsadmin_rman_util.backup_archivelog_date(
p_owner => 'SYS',
p_directory_name => 'MYDIRECTORY',
p_from_date => '03/01/2019 00:00:00',
p_to_date => '03/02/2019 00:00:00',
p_parallel => 4,
p_tag => 'MY_LOG_BACKUP',
p_rman_to_dbms_output => FALSE);
END;
/

Backing up an archived redo log from an SCN range


To back up specific archived redo logs for an Amazon RDS Oracle DB instance by
specifying a system change number (SCN) range, use the Amazon RDS procedure
rdsadmin.rdsadmin_rman_util.backup_archivelog_scn. The SCN range specifies which
archived redo logs to back up.

This procedure uses the following common parameters for RMAN tasks:

• p_owner
• p_directory_name
• p_label
• p_parallel
• p_compress
• p_rman_to_dbms_output
• p_tag

For more information, see Common parameters for RMAN procedures (p. 1898).

This procedure also uses the following additional parameters.

p_from_scn (number)
    Valid values: the SCN of an archived redo log that exists on disk. The value must be less than or
    equal to the value specified for p_to_scn.
    Default: none
    Required: yes
    Description: The starting SCN for the archived log backups.

p_to_scn (number)
    Valid values: the SCN of an archived redo log that exists on disk. The value must be greater than
    or equal to the value specified for p_from_scn.
    Default: none
    Required: yes
    Description: The ending SCN for the archived log backups.

The following example backs up archived redo logs in the SCN range for the DB instance.

BEGIN
rdsadmin.rdsadmin_rman_util.backup_archivelog_scn(
p_owner => 'SYS',
p_directory_name => 'MYDIRECTORY',
p_from_scn => 1533835,
p_to_scn => 1892447,
p_parallel => 4,
p_tag => 'MY_LOG_BACKUP',
p_rman_to_dbms_output => FALSE);
END;
/

Backing up an archived redo log from a sequence number range


To back up specific archived redo logs for an Amazon RDS Oracle DB instance
by specifying a sequence number range, use the Amazon RDS procedure
rdsadmin.rdsadmin_rman_util.backup_archivelog_sequence. The sequence number range
specifies which archived redo logs to back up.

This procedure uses the following common parameters for RMAN tasks:

• p_owner
• p_directory_name
• p_label
• p_parallel
• p_compress
• p_rman_to_dbms_output
• p_tag

For more information, see Common parameters for RMAN procedures (p. 1898).

This procedure also uses the following additional parameters.

p_from_sequence (number)
    Valid values: the sequence number of an archived redo log that exists on disk. The value must be
    less than or equal to the value specified for p_to_sequence.
    Default: none
    Required: yes
    Description: The starting sequence number for the archived log backups.

p_to_sequence (number)
    Valid values: the sequence number of an archived redo log that exists on disk. The value must be
    greater than or equal to the value specified for p_from_sequence.
    Default: none
    Required: yes
    Description: The ending sequence number for the archived log backups.

The following example backs up archived redo logs in the sequence number range for the DB instance.

BEGIN
rdsadmin.rdsadmin_rman_util.backup_archivelog_sequence(
p_owner => 'SYS',
p_directory_name => 'MYDIRECTORY',
p_from_sequence => 11160,
p_to_sequence => 11160,
p_parallel => 4,
p_tag => 'MY_LOG_BACKUP',
p_rman_to_dbms_output => FALSE);
END;
/

Performing a full database backup


You can perform a full backup of all blocks of the data files in your DB instance using the Amazon RDS
procedure rdsadmin.rdsadmin_rman_util.backup_database_full.

This procedure uses the following common parameters for RMAN tasks:

• p_owner
• p_directory_name
• p_label
• p_parallel
• p_section_size_mb
• p_include_archive_logs
• p_optimize
• p_compress
• p_rman_to_dbms_output
• p_tag

For more information, see Common parameters for RMAN procedures (p. 1898).

This procedure is supported for the following Amazon RDS for Oracle DB engine versions:

• Oracle Database 21c (21.0.0)
• Oracle Database 19c (19.0.0)
• Oracle Database 12c Release 2 (12.2), using 12.2.0.1.ru-2019-01.rur-2019-01.r1 or higher
• Oracle Database 12c Release 1 (12.1), using 12.1.0.2.v15 or higher

The following example performs a full backup of the DB instance using the specified values for the
parameters.

BEGIN
rdsadmin.rdsadmin_rman_util.backup_database_full(
p_owner => 'SYS',
p_directory_name => 'MYDIRECTORY',
p_parallel => 4,
p_section_size_mb => 10,
p_tag => 'FULL_DB_BACKUP',
p_rman_to_dbms_output => FALSE);
END;
/

Performing an incremental database backup


You can perform an incremental backup of your DB instance using the Amazon RDS procedure
rdsadmin.rdsadmin_rman_util.backup_database_incremental.

For more information about incremental backups, see Incremental backups in the Oracle documentation.

This procedure uses the following common parameters for RMAN tasks:

• p_owner
• p_directory_name
• p_label
• p_parallel
• p_section_size_mb
• p_include_archive_logs
• p_include_controlfile
• p_optimize
• p_compress
• p_rman_to_dbms_output
• p_tag

For more information, see Common parameters for RMAN procedures (p. 1898).

This procedure is supported for the following Amazon RDS for Oracle DB engine versions:

• Oracle Database 21c (21.0.0)
• Oracle Database 19c (19.0.0)
• Oracle Database 12c Release 2 (12.2), using 12.2.0.1.ru-2019-01.rur-2019-01.r1 or higher
• Oracle Database 12c Release 1 (12.1), using 12.1.0.2.v15 or higher

This procedure also uses the following additional parameter.

p_level (number)
    Valid values: 0, 1
    Default: 0
    Required: no
    Description: Specify 0 to enable a full incremental backup. Specify 1 to enable a non-cumulative
    incremental backup.

The following example performs an incremental backup of the DB instance using the specified values for
the parameters.

BEGIN
rdsadmin.rdsadmin_rman_util.backup_database_incremental(
p_owner => 'SYS',
p_directory_name => 'MYDIRECTORY',
p_level => 1,
p_parallel => 4,
p_section_size_mb => 10,
p_tag => 'MY_INCREMENTAL_BACKUP',
p_rman_to_dbms_output => FALSE);
END;
/

Backing up a tablespace
You can back up a tablespace using the Amazon RDS procedure
rdsadmin.rdsadmin_rman_util.backup_tablespace.

This procedure uses the following common parameters for RMAN tasks:

• p_owner
• p_directory_name
• p_label
• p_parallel
• p_section_size_mb
• p_include_archive_logs
• p_include_controlfile
• p_optimize
• p_compress
• p_rman_to_dbms_output
• p_tag

For more information, see Common parameters for RMAN procedures (p. 1898).

This procedure also uses the following additional parameter.

p_tablespace_name (varchar2)
    Valid values: a valid tablespace name
    Default: none
    Required: yes
    Description: The name of the tablespace to back up.

This procedure is supported for the following Amazon RDS for Oracle DB engine versions:

• Oracle Database 21c (21.0.0)
• Oracle Database 19c (19.0.0)
• Oracle Database 12c Release 2 (12.2), using 12.2.0.1.ru-2019-01.rur-2019-01.r1 or higher
• Oracle Database 12c Release 1 (12.1), using 12.1.0.2.v15 or higher

The following example performs a tablespace backup using the specified values for the parameters.

BEGIN
rdsadmin.rdsadmin_rman_util.backup_tablespace(
p_owner => 'SYS',
p_directory_name => 'MYDIRECTORY',
p_tablespace_name => 'MYTABLESPACE',
p_parallel => 4,
p_section_size_mb => 10,
p_tag => 'MYTABLESPACE_BACKUP',
p_rman_to_dbms_output => FALSE);
END;
/

Backing up a control file


You can back up a control file using the Amazon RDS procedure
rdsadmin.rdsadmin_rman_util.backup_current_controlfile.

This procedure uses the following common parameters for RMAN tasks:

• p_owner
• p_directory_name
• p_label
• p_compress
• p_rman_to_dbms_output
• p_tag

For more information, see Common parameters for RMAN procedures (p. 1898).

This procedure is supported for the following Amazon RDS for Oracle DB engine versions:

• Oracle Database 21c (21.0.0)
• Oracle Database 19c (19.0.0)
• Oracle Database 12c Release 2 (12.2), using 12.2.0.1.ru-2019-01.rur-2019-01.r1 or higher
• Oracle Database 12c Release 1 (12.1), using 12.1.0.2.v15 or higher

The following example backs up a control file using the specified values for the parameters.

BEGIN
rdsadmin.rdsadmin_rman_util.backup_current_controlfile(
p_owner => 'SYS',
p_directory_name => 'MYDIRECTORY',
p_tag => 'CONTROL_FILE_BACKUP',
p_rman_to_dbms_output => FALSE);
END;
/

Performing common scheduling tasks for Oracle DB instances

Some scheduler jobs owned by SYS can interfere with normal database operations. Oracle Support
recommends you disable these jobs or modify the schedule. To perform tasks for Oracle Scheduler jobs
owned by SYS, use the Amazon RDS package rdsadmin.rdsadmin_dbms_scheduler.

The rdsadmin.rdsadmin_dbms_scheduler procedures are supported for the following Amazon RDS
for Oracle DB engine versions:

• Oracle Database 21c (21.0.0)
• Oracle Database 19c
• Oracle Database 12c Release 2 (12.2) on 12.2.0.2.ru-2019-07.rur-2019-07.r1 or higher 12.2 versions
• Oracle Database 12c Release 1 (12.1) on 12.1.0.2.v17 or higher 12.1 versions

Common parameters for Oracle Scheduler procedures


To perform tasks with Oracle Scheduler, use procedures in the Amazon RDS package
rdsadmin.rdsadmin_dbms_scheduler. Several parameters are common to the procedures in the
package. The package has the following common parameters.

name (varchar2)
    Valid values: 'SYS.BSLN_MAINTAIN_STATS_JOB', 'SYS.CLEANUP_ONLINE_IND_BUILD'
    Default: none
    Required: yes
    Description: The name of the job to modify.
    Note: Currently, you can only modify the SYS.CLEANUP_ONLINE_IND_BUILD and
    SYS.BSLN_MAINTAIN_STATS_JOB jobs.

attribute (varchar2)
    Valid values: 'REPEAT_INTERVAL', 'SCHEDULE_NAME'
    Default: none
    Required: yes
    Description: The attribute to modify. To modify the repeat interval for the job, specify
    'REPEAT_INTERVAL'. To modify the schedule name for the job, specify 'SCHEDULE_NAME'.

value (varchar2)
    Valid values: a valid schedule interval or schedule name, depending on the attribute used
    Default: none
    Required: yes
    Description: The new value of the attribute.

Modifying DBMS_SCHEDULER jobs


To modify certain components of Oracle Scheduler, use the Oracle procedure
dbms_scheduler.set_attribute. For more information, see DBMS_SCHEDULER and
SET_ATTRIBUTE procedure in the Oracle documentation.

When working with Amazon RDS DB instances, prepend the schema name SYS to the object name. The
following example sets the resource plan attribute for the Monday window object.

BEGIN
DBMS_SCHEDULER.SET_ATTRIBUTE(
name => 'SYS.MONDAY_WINDOW',
attribute => 'RESOURCE_PLAN',
value => 'resource_plan_1');
END;
/

Modifying AutoTask maintenance windows


Amazon RDS for Oracle instances are created with default settings for maintenance windows. Automated
maintenance tasks such as optimizer statistics collection run during these windows. By default, the
maintenance windows turn on Oracle Database Resource Manager.

To modify the window, use the DBMS_SCHEDULER package. You might need to modify the maintenance
window settings for the following reasons:

• You want maintenance jobs to run at a different time, with different settings, or not at all. For
example, you might want to modify the window duration, or change the repeat time and interval.
• You want to avoid the performance impact of enabling Resource Manager during maintenance. For
example, if the default maintenance plan is specified, and if the maintenance window opens while the
database is under load, you might see wait events such as resmgr:cpu quantum. This wait event is
related to Database Resource Manager. You have the following options:
• Ensure that maintenance windows are active during off-peak times for your DB instance.
• Disable the default maintenance plan by setting the resource_plan attribute to an empty string.
• Set the resource_manager_plan parameter to FORCE: in your parameter group. If your instance
uses Enterprise Edition, this setting prevents Database Resource Manager plans from activating.

To modify your maintenance window settings

1. Connect to your database using an Oracle SQL client.


2. Query the current configuration for a scheduler window.

The following example queries the configuration for MONDAY_WINDOW.

SELECT ENABLED, RESOURCE_PLAN, DURATION, REPEAT_INTERVAL
FROM DBA_SCHEDULER_WINDOWS
WHERE WINDOW_NAME='MONDAY_WINDOW';

The following output shows that the window is using the default values.

ENABLED  RESOURCE_PLAN              DURATION       REPEAT_INTERVAL
-------  -------------------------  -------------  --------------------------------------------------
TRUE     DEFAULT_MAINTENANCE_PLAN   +000 04:00:00  freq=daily;byday=MON;byhour=22;byminute=0; bysecond=0

3. Modify the window using the DBMS_SCHEDULER package.

The following example sets the resource plan to null so that the Resource Manager won't run during
the maintenance window.

BEGIN
-- disable the window to make changes
DBMS_SCHEDULER.DISABLE(name=>'"SYS"."MONDAY_WINDOW"',force=>TRUE);

-- specify the empty string to use no plan
DBMS_SCHEDULER.SET_ATTRIBUTE(name=>'"SYS"."MONDAY_WINDOW"',
attribute=>'RESOURCE_PLAN', value=>'');

-- re-enable the window
DBMS_SCHEDULER.ENABLE(name=>'"SYS"."MONDAY_WINDOW"');
END;
/

The following example sets the maximum duration of the window to 2 hours.

BEGIN
DBMS_SCHEDULER.DISABLE(name=>'"SYS"."MONDAY_WINDOW"',force=>TRUE);
DBMS_SCHEDULER.SET_ATTRIBUTE(name=>'"SYS"."MONDAY_WINDOW"', attribute=>'DURATION',
value=>'0 2:00:00');
DBMS_SCHEDULER.ENABLE(name=>'"SYS"."MONDAY_WINDOW"');
END;
/

The following example sets the repeat interval to every Monday at 10 AM.

BEGIN
DBMS_SCHEDULER.DISABLE(name=>'"SYS"."MONDAY_WINDOW"',force=>TRUE);
DBMS_SCHEDULER.SET_ATTRIBUTE(name=>'"SYS"."MONDAY_WINDOW"',
attribute=>'REPEAT_INTERVAL',
value=>'freq=daily;byday=MON;byhour=10;byminute=0;bysecond=0');
DBMS_SCHEDULER.ENABLE(name=>'"SYS"."MONDAY_WINDOW"');
END;
/

Setting the time zone for Oracle Scheduler jobs


To modify the time zone for Oracle Scheduler, you can use the Oracle procedure
dbms_scheduler.set_scheduler_attribute. For more information about the dbms_scheduler
package, see DBMS_SCHEDULER and SET_SCHEDULER_ATTRIBUTE in the Oracle documentation.

To modify the current time zone setting

1. Connect to the database using a client such as SQL Developer. For more information, see Connecting
to your DB instance using Oracle SQL developer (p. 1808).
2. Set the default time zone as follows, substituting your time zone for time_zone_name.

BEGIN
DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE(
attribute => 'default_timezone',
value => 'time_zone_name'
);
END;
/

In the following example, you change the time zone to Asia/Shanghai.

Start by querying the current time zone, as shown following.

SELECT VALUE FROM DBA_SCHEDULER_GLOBAL_ATTRIBUTE WHERE ATTRIBUTE_NAME='DEFAULT_TIMEZONE';

The output shows that the current time zone is ETC/UTC.

VALUE
-------
Etc/UTC

Then you set the time zone to Asia/Shanghai.

BEGIN
DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE(
attribute => 'default_timezone',
value => 'Asia/Shanghai'
);
END;
/

For more information about changing the system time zone, see Oracle time zone (p. 2087).

Turning off Oracle Scheduler jobs owned by SYS


To disable an Oracle Scheduler job owned by the SYS user, use the
rdsadmin.rdsadmin_dbms_scheduler.disable procedure.

This procedure uses the name common parameter for Oracle Scheduler tasks. For more information, see
Common parameters for Oracle Scheduler procedures (p. 1914).

The following example disables the SYS.CLEANUP_ONLINE_IND_BUILD Oracle Scheduler job.

BEGIN
rdsadmin.rdsadmin_dbms_scheduler.disable('SYS.CLEANUP_ONLINE_IND_BUILD');
END;
/

Turning on Oracle Scheduler jobs owned by SYS


To turn on an Oracle Scheduler job owned by SYS, use the
rdsadmin.rdsadmin_dbms_scheduler.enable procedure.

This procedure uses the name common parameter for Oracle Scheduler tasks. For more information, see
Common parameters for Oracle Scheduler procedures (p. 1914).

The following example enables the SYS.CLEANUP_ONLINE_IND_BUILD Oracle Scheduler job.

BEGIN
rdsadmin.rdsadmin_dbms_scheduler.enable('SYS.CLEANUP_ONLINE_IND_BUILD');
END;
/

Modifying the Oracle Scheduler repeat interval for jobs of CALENDAR type

To modify the repeat interval of a SYS-owned Oracle Scheduler job of CALENDAR type, use the
rdsadmin.rdsadmin_dbms_scheduler.set_attribute procedure.

This procedure uses the following common parameters for Oracle Scheduler tasks:

• name
• attribute
• value

For more information, see Common parameters for Oracle Scheduler procedures (p. 1914).

The following example modifies the repeat interval of the SYS.CLEANUP_ONLINE_IND_BUILD Oracle
Scheduler job.

BEGIN
rdsadmin.rdsadmin_dbms_scheduler.set_attribute(
name => 'SYS.CLEANUP_ONLINE_IND_BUILD',
attribute => 'repeat_interval',
value => 'freq=daily;byday=FRI,SAT;byhour=20;byminute=0;bysecond=0');
END;
/

Modifying the Oracle Scheduler repeat interval for jobs of NAMED type

Some Oracle Scheduler jobs use a schedule name instead of an interval. For this type of
job, you must create a new named schedule in the master user schema. Use the standard
Oracle sys.dbms_scheduler.create_schedule procedure to do this. Also, use the
rdsadmin.rdsadmin_dbms_scheduler.set_attribute procedure to assign the new named
schedule to the job.

This procedure uses the following common parameters for Oracle Scheduler tasks:

• name
• attribute
• value

For more information, see Common parameters for Oracle Scheduler procedures (p. 1914).

The following example modifies the repeat interval of the SYS.BSLN_MAINTAIN_STATS_JOB Oracle
Scheduler job.

BEGIN
DBMS_SCHEDULER.CREATE_SCHEDULE (
schedule_name => 'rds_master_user.new_schedule',
start_date => SYSTIMESTAMP,
repeat_interval =>
'freq=daily;byday=MON,TUE,WED,THU,FRI;byhour=0;byminute=0;bysecond=0',
end_date => NULL,
comments => 'Repeats daily forever');
END;
/

BEGIN
rdsadmin.rdsadmin_dbms_scheduler.set_attribute (
name => 'SYS.BSLN_MAINTAIN_STATS_JOB',
attribute => 'schedule_name',
value => 'rds_master_user.new_schedule');
END;
/

Turning off autocommit for Oracle Scheduler job creation


When DBMS_SCHEDULER.CREATE_JOB creates Oracle Scheduler jobs, it creates the jobs immediately
and commits the changes. You might need to incorporate the creation of Oracle Scheduler jobs in the
user transaction to do the following:

• Roll back the Oracle Scheduler job when the user transaction is rolled back.
• Create the Oracle Scheduler job when the main user transaction is committed.

You can use the procedure rdsadmin.rdsadmin_dbms_scheduler.set_no_commit_flag to turn
on this behavior. This procedure takes no parameters. You can use this procedure in the following RDS
for Oracle releases:

• 21.0.0.0.ru-2022-07.rur-2022-07.r1 and higher
• 19.0.0.0.ru-2022-07.rur-2022-07.r1 and higher

The following example turns off autocommit for Oracle Scheduler, creates an Oracle Scheduler job,
and then rolls back the transaction. Because autocommit is turned off, the database also rolls back the
creation of the Oracle Scheduler job.

BEGIN
rdsadmin.rdsadmin_dbms_scheduler.set_no_commit_flag;
DBMS_SCHEDULER.CREATE_JOB(job_name => 'EMPTY_JOB',
job_type => 'PLSQL_BLOCK',
job_action => 'begin null; end;',
auto_drop => false);
ROLLBACK;
END;
/

PL/SQL procedure successfully completed.

SELECT * FROM DBA_SCHEDULER_JOBS WHERE JOB_NAME='EMPTY_JOB';

no rows selected

Performing common diagnostic tasks for Oracle DB instances

Oracle Database includes a fault diagnosability infrastructure that you can use to investigate database
problems. In Oracle terminology, a problem is a critical error such as a code bug or data corruption. An
incident is the occurrence of a problem. If the same error occurs three times, then the infrastructure
shows three incidents of this problem. For more information, see Diagnosing and resolving problems in
the Oracle Database documentation.

The Automatic Diagnostic Repository Command Interpreter (ADRCI) utility is an Oracle command-line
tool that you use to manage diagnostic data. For example, you can use this tool to investigate problems
and package diagnostic data. An incident package includes diagnostic data for an incident or all incidents
that reference a specific problem. You can upload an incident package, which is implemented as a .zip
file, to Oracle Support.

To deliver a managed service experience, Amazon RDS doesn't provide shell access to ADRCI.
To perform diagnostic tasks for your Oracle instance, instead use the Amazon RDS package
rdsadmin.rdsadmin_adrci_util.

By using the functions in rdsadmin_adrci_util, you can list and package problems and incidents,
and also show trace files. All functions return a task ID. This ID forms part of the name of the log file
that
contains the ADRCI output, as in dbtask-task_id.log. The log file resides in the BDUMP directory.

Common parameters for diagnostic procedures


To perform diagnostic tasks, use functions in the Amazon RDS package
rdsadmin.rdsadmin_adrci_util. The package has the following common parameters.

incident_id (number)
    Valid values: a valid incident ID or null
    Default: null
    Required: no
    Description: If the value is null, the function shows all incidents. If the value isn't null and
    represents a valid incident ID, the function shows the specified incident.

problem_id (number)
    Valid values: a valid problem ID or null
    Default: null
    Required: no
    Description: If the value is null, the function shows all problems. If the value isn't null and
    represents a valid problem ID, the function shows the specified problem.

last (number)
    Valid values: a valid integer greater than 0, or null
    Default: null
    Required: no
    Description: If the value is null, then the function displays at most 50 items. If the value isn't
    null, the function displays the specified number.

Listing incidents
To list diagnostic incidents for Oracle, use the Amazon RDS function
rdsadmin.rdsadmin_adrci_util.list_adrci_incidents. You can list incidents in either basic or
detailed mode. By default, the function lists the 50 most recent incidents.

This function uses the following common parameters:

• incident_id
• problem_id
• last

If you specify incident_id and problem_id, then incident_id overrides problem_id. For more
information, see Common parameters for diagnostic procedures (p. 1920).

This function uses the following additional parameter.

detail (boolean)
    Valid values: TRUE or FALSE
    Default: FALSE
    Required: no
    Description: If TRUE, the function lists incidents in detail mode. If FALSE, the function lists
    incidents in basic mode.

To list all incidents, query the rdsadmin.rdsadmin_adrci_util.list_adrci_incidents function
without any arguments. The query returns the task ID.

SQL> SELECT rdsadmin.rdsadmin_adrci_util.list_adrci_incidents AS task_id FROM DUAL;

TASK_ID
------------------
1590786706158-3126

Or call the rdsadmin.rdsadmin_adrci_util.list_adrci_incidents function without any
arguments and store the output in a SQL client variable. You can use the variable in other statements.

SQL> VAR task_id VARCHAR2(80);


SQL> EXEC :task_id := rdsadmin.rdsadmin_adrci_util.list_adrci_incidents;

PL/SQL procedure successfully completed.

To read the log file, call the Amazon RDS procedure rdsadmin.rds_file_util.read_text_file.
Supply the task ID as part of the file name. The following output shows three incidents: 53523, 53522,
and 53521.

SQL> SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP',
  'dbtask-'||:task_id||'.log'));

TEXT
-------------------------------------------------------------------------------------------------------
2020-05-29 21:11:46.193 UTC [INFO ] Listing ADRCI incidents.
2020-05-29 21:11:46.256 UTC [INFO ]
ADR Home = /rdsdbdata/log/diag/rdbms/orcl_a/ORCL:
*************************************************************************
INCIDENT_ID PROBLEM_KEY CREATE_TIME
----------- -----------------------------------------------------------
----------------------------------------
53523 ORA 700 [EVENT_CREATED_INCIDENT] [942] [SIMULATED_ERROR_003 2020-05-29
20:15:20.928000 +00:00
53522 ORA 700 [EVENT_CREATED_INCIDENT] [942] [SIMULATED_ERROR_002 2020-05-29
20:15:15.247000 +00:00
53521 ORA 700 [EVENT_CREATED_INCIDENT] [942] [SIMULATED_ERROR_001 2020-05-29
20:15:06.047000 +00:00
3 rows fetched

2020-05-29 21:11:46.256 UTC [INFO ] The ADRCI incidents were successfully listed.
2020-05-29 21:11:46.256 UTC [INFO ] The task finished successfully.

14 rows selected.

To list a particular incident, specify its ID using the incident_id parameter. In the following example,
you query the log file for incident 53523 only.

SQL> EXEC :task_id :=
  rdsadmin.rdsadmin_adrci_util.list_adrci_incidents(incident_id=>53523);

PL/SQL procedure successfully completed.

SQL> SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP',
  'dbtask-'||:task_id||'.log'));

TEXT
-------------------------------------------------------------------------------------------------------
2020-05-29 21:15:25.358 UTC [INFO ] Listing ADRCI incidents.
2020-05-29 21:15:25.426 UTC [INFO ]
ADR Home = /rdsdbdata/log/diag/rdbms/orcl_a/ORCL:
*************************************************************************
INCIDENT_ID PROBLEM_KEY
CREATE_TIME
-------------------- -----------------------------------------------------------
---------------------------------
53523 ORA 700 [EVENT_CREATED_INCIDENT] [942] [SIMULATED_ERROR_003 2020-05-29
20:15:20.928000 +00:00
1 rows fetched

2020-05-29 21:15:25.427 UTC [INFO ] The ADRCI incidents were successfully listed.
2020-05-29 21:15:25.427 UTC [INFO ] The task finished successfully.

12 rows selected.

Listing problems
To list diagnostic problems for Oracle, use the Amazon RDS function
rdsadmin.rdsadmin_adrci_util.list_adrci_problems.

By default, the function lists the 50 most recent problems.

This function uses the common parameters problem_id and last. For more information, see Common
parameters for diagnostic procedures (p. 1920).

To get the task ID for all problems, call the


rdsadmin.rdsadmin_adrci_util.list_adrci_problems function without any arguments, and
store the output in a SQL client variable.

SQL> EXEC :task_id := rdsadmin.rdsadmin_adrci_util.list_adrci_problems;

PL/SQL procedure successfully completed.

To read the log file, call the rdsadmin.rds_file_util.read_text_file function, supplying the
task ID as part of the file name. In the following output, the log file shows three problems: 1, 2, and 3.

SQL> SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP',
  'dbtask-'||:task_id||'.log'));

TEXT
-------------------------------------------------------------------------------------------------------
2020-05-29 21:18:50.764 UTC [INFO ] Listing ADRCI problems.
2020-05-29 21:18:50.829 UTC [INFO ]
ADR Home = /rdsdbdata/log/diag/rdbms/orcl_a/ORCL:
*************************************************************************
PROBLEM_ID PROBLEM_KEY LAST_INCIDENT
LASTINC_TIME
---------- ----------------------------------------------------------- -------------
---------------------------------
2 ORA 700 [EVENT_CREATED_INCIDENT] [942] [SIMULATED_ERROR_003 53523
2020-05-29 20:15:20.928000 +00:00
3 ORA 700 [EVENT_CREATED_INCIDENT] [942] [SIMULATED_ERROR_002 53522
2020-05-29 20:15:15.247000 +00:00
1 ORA 700 [EVENT_CREATED_INCIDENT] [942] [SIMULATED_ERROR_001 53521
2020-05-29 20:15:06.047000 +00:00
3 rows fetched

2020-05-29 21:18:50.829 UTC [INFO ] The ADRCI problems were successfully listed.
2020-05-29 21:18:50.829 UTC [INFO ] The task finished successfully.

14 rows selected.

In the following example, you list problem 3 only.

SQL> EXEC :task_id := rdsadmin.rdsadmin_adrci_util.list_adrci_problems(problem_id=>3);

PL/SQL procedure successfully completed.

To read the log file for problem 3, call rdsadmin.rds_file_util.read_text_file. Supply the task
ID as part of the file name.

SQL> SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP',
  'dbtask-'||:task_id||'.log'));

TEXT
-------------------------------------------------------------------------
2020-05-29 21:19:42.533 UTC [INFO ] Listing ADRCI problems.
2020-05-29 21:19:42.599 UTC [INFO ]
ADR Home = /rdsdbdata/log/diag/rdbms/orcl_a/ORCL:
*************************************************************************
PROBLEM_ID PROBLEM_KEY LAST_INCIDENT
LASTINC_TIME
---------- ----------------------------------------------------------- -------------
---------------------------------
3 ORA 700 [EVENT_CREATED_INCIDENT] [942] [SIMULATED_ERROR_002 53522
2020-05-29 20:15:15.247000 +00:00
1 rows fetched

2020-05-29 21:19:42.599 UTC [INFO ] The ADRCI problems were successfully listed.
2020-05-29 21:19:42.599 UTC [INFO ] The task finished successfully.

12 rows selected.

Creating incident packages


You can create incident packages using the Amazon RDS function
rdsadmin.rdsadmin_adrci_util.create_adrci_package. The output is a .zip file that you can
supply to Oracle Support.

This function uses the following common parameters:

• problem_id
• incident_id

Make sure to specify one of the preceding parameters. If you specify both parameters, incident_id
overrides problem_id. For more information, see Common parameters for diagnostic
procedures (p. 1920).

To create a package for a specific incident, call the Amazon RDS function
rdsadmin.rdsadmin_adrci_util.create_adrci_package with the incident_id parameter. The
following example creates a package for incident 53523.

SQL> EXEC :task_id :=
  rdsadmin.rdsadmin_adrci_util.create_adrci_package(incident_id=>53523);

PL/SQL procedure successfully completed.

To read the log file, call the rdsadmin.rds_file_util.read_text_file. You can supply
the task ID as part of the file name. The output shows that you generated incident package
ORA700EVE_20200529212043_COM_1.zip.

SQL> SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP',
  'dbtask-'||:task_id||'.log'));

TEXT
-------------------------------------------------------------------------------------------------------
2020-05-29 21:20:43.031 UTC [INFO ] The ADRCI package is being created.
2020-05-29 21:20:47.641 UTC [INFO ] Generated package 1 in file /rdsdbdata/log/trace/
ORA700EVE_20200529212043_COM_1.zip, mode complete
2020-05-29 21:20:47.642 UTC [INFO ] The ADRCI package was successfully created.
2020-05-29 21:20:47.642 UTC [INFO ] The task finished successfully.

To package diagnostic data for a particular problem, specify its ID using the problem_id parameter. In
the following example, you package data for problem 3 only.

SQL> EXEC :task_id := rdsadmin.rdsadmin_adrci_util.create_adrci_package(problem_id=>3);

PL/SQL procedure successfully completed.

To read the task output, call rdsadmin.rds_file_util.read_text_file, supplying
the task ID as part of the file name. The output shows that you generated incident package
ORA700EVE_20200529212111_COM_1.zip.

SQL> SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP',
  'dbtask-'||:task_id||'.log'));

TEXT
-------------------------------------------------------------------------------------------------------
2020-05-29 21:21:11.050 UTC [INFO ] The ADRCI package is being created.
2020-05-29 21:21:15.646 UTC [INFO ] Generated package 2 in file /rdsdbdata/log/trace/
ORA700EVE_20200529212111_COM_1.zip, mode complete
2020-05-29 21:21:15.646 UTC [INFO ] The ADRCI package was successfully created.
2020-05-29 21:21:15.646 UTC [INFO ] The task finished successfully.

Showing trace files


You can use the Amazon RDS function rdsadmin.rdsadmin_adrci_util.show_adrci_tracefile
to list trace files under the trace directory and all incident directories under the current ADR home. You
can also show the contents of trace files and incident trace files.

This function uses the following parameter.

Parameter name Data type Valid Default Required Description


values

filename varchar2 A valid Null No If the value is null, the


trace file function shows all trace
name files. If it isn't null, the
function shows the
specified file.

To show the trace file, call the Amazon RDS function
rdsadmin.rdsadmin_adrci_util.show_adrci_tracefile.

SQL> EXEC :task_id := rdsadmin.rdsadmin_adrci_util.show_adrci_tracefile;

PL/SQL procedure successfully completed.

To list the trace file names, call the Amazon RDS procedure
rdsadmin.rds_file_util.read_text_file, supplying the task ID as part of the file name.

SQL> SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP',
  'dbtask-'||:task_id||'.log')) WHERE TEXT LIKE '%/alert_%';

TEXT
---------------------------------------------------------------
diag/rdbms/orcl_a/ORCL/trace/alert_ORCL.log.2020-05-28
diag/rdbms/orcl_a/ORCL/trace/alert_ORCL.log.2020-05-27
diag/rdbms/orcl_a/ORCL/trace/alert_ORCL.log.2020-05-26
diag/rdbms/orcl_a/ORCL/trace/alert_ORCL.log.2020-05-25
diag/rdbms/orcl_a/ORCL/trace/alert_ORCL.log.2020-05-24
diag/rdbms/orcl_a/ORCL/trace/alert_ORCL.log.2020-05-23
diag/rdbms/orcl_a/ORCL/trace/alert_ORCL.log.2020-05-22
diag/rdbms/orcl_a/ORCL/trace/alert_ORCL.log.2020-05-21
diag/rdbms/orcl_a/ORCL/trace/alert_ORCL.log

9 rows selected.

In the following example, you generate output for alert_ORCL.log.

SQL> EXEC :task_id := rdsadmin.rdsadmin_adrci_util.show_adrci_tracefile('diag/rdbms/orcl_a/ORCL/trace/alert_ORCL.log');

PL/SQL procedure successfully completed.

To read the log file, call rdsadmin.rds_file_util.read_text_file. Supply the task ID as part of
the file name. The output shows the first 10 lines of alert_ORCL.log.

SQL> SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP',
  'dbtask-'||:task_id||'.log')) WHERE ROWNUM <= 10;

TEXT

-----------------------------------------------------------------------------------------
2020-05-29 21:24:02.083 UTC [INFO ] The trace files are being displayed.
2020-05-29 21:24:02.128 UTC [INFO ] Thu May 28 23:59:10 2020
Thread 1 advanced to log sequence 2048 (LGWR switch)
Current log# 3 seq# 2048 mem# 0: /rdsdbdata/db/ORCL_A/onlinelog/o1_mf_3_hbl2p8xs_.log
Thu May 28 23:59:10 2020
Archived Log entry 2037 added for thread 1 sequence 2047 ID 0x5d62ce43 dest 1:
Fri May 29 00:04:10 2020
Thread 1 advanced to log sequence 2049 (LGWR switch)
Current log# 4 seq# 2049 mem# 0: /rdsdbdata/db/ORCL_A/onlinelog/o1_mf_4_hbl2qgmh_.log
Fri May 29 00:04:10 2020

10 rows selected.

Performing miscellaneous tasks for Oracle DB instances

Following, you can find how to perform miscellaneous DBA tasks on your Amazon RDS DB instances
running Oracle. To deliver a managed service experience, Amazon RDS doesn't provide shell access to DB
instances, and restricts access to certain system procedures and tables that require advanced privileges.

Topics
• Creating and dropping directories in the main data storage space (p. 1926)
• Listing files in a DB instance directory (p. 1927)
• Reading files in a DB instance directory (p. 1927)
• Accessing Opatch files (p. 1928)
• Managing advisor tasks (p. 1930)
• Transporting tablespaces (p. 1932)

Creating and dropping directories in the main data storage space

To create directories, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.create_directory. You can create up to 10,000 directories,
all located in your main data storage space. To drop directories, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.drop_directory.

The create_directory and drop_directory procedures have the following required parameter.

p_directory_name (varchar2)
    Default: none
    Required: yes
    Description: The name of the directory.

The following example creates a new directory named PRODUCT_DESCRIPTIONS.

EXEC rdsadmin.rdsadmin_util.create_directory(p_directory_name => 'product_descriptions');

The data dictionary stores the directory name in uppercase. You can list the directories by querying
DBA_DIRECTORIES. The system chooses the actual host pathname automatically. The following
example gets the directory path for the directory named PRODUCT_DESCRIPTIONS:

SELECT DIRECTORY_PATH
FROM DBA_DIRECTORIES
WHERE DIRECTORY_NAME='PRODUCT_DESCRIPTIONS';

DIRECTORY_PATH
----------------------------------------
/rdsdbdata/userdirs/01

The master user name for the DB instance has read and write privileges in the new directory, and can
grant access to other users. EXECUTE privileges are not available for directories on a DB instance.
Directories are created in your main data storage space and will consume space and I/O bandwidth.

The following example drops the directory named PRODUCT_DESCRIPTIONS.

EXEC rdsadmin.rdsadmin_util.drop_directory(p_directory_name => 'product_descriptions');

Note
You can also drop a directory by using the Oracle SQL command DROP DIRECTORY.

Dropping a directory doesn't remove its contents. Because the


rdsadmin.rdsadmin_util.create_directory procedure can reuse pathnames, files in dropped
directories can appear in a newly created directory. Before you drop a directory, we recommend that
you use UTL_FILE.FREMOVE to remove files from the directory. For more information, see FREMOVE
procedure in the Oracle documentation.

Listing files in a DB instance directory


To list the files in a directory, use the Amazon RDS procedure rdsadmin.rds_file_util.listdir.
The listdir procedure has the following parameters.

p_directory (varchar2)
    Default: none
    Required: yes
    Description: The name of the directory to list.

The following example grants read/write privileges on the directory PRODUCT_DESCRIPTIONS to user
rdsadmin, and then lists the files in this directory.

GRANT READ,WRITE ON DIRECTORY PRODUCT_DESCRIPTIONS TO rdsadmin;


SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => 'PRODUCT_DESCRIPTIONS'));

Reading files in a DB instance directory


To read a text file, use the Amazon RDS procedure rdsadmin.rds_file_util.read_text_file. The
read_text_file procedure has the following parameters.

p_directory (varchar2)
    Default: none
    Required: yes
    Description: The name of the directory that contains the file.

p_filename (varchar2)
    Default: none
    Required: yes
    Description: The name of the file to read.

The following example creates the file rice.txt in the directory PRODUCT_DESCRIPTIONS.

declare
fh sys.utl_file.file_type;
begin
fh := utl_file.fopen(location=>'PRODUCT_DESCRIPTIONS', filename=>'rice.txt',
open_mode=>'w');
utl_file.put(file=>fh, buffer=>'AnyCompany brown rice, 15 lbs');
utl_file.fclose(file=>fh);
end;
/

The following example reads the file rice.txt from the directory PRODUCT_DESCRIPTIONS.

SELECT * FROM TABLE
  (rdsadmin.rds_file_util.read_text_file(
p_directory => 'PRODUCT_DESCRIPTIONS',
p_filename => 'rice.txt'));

Accessing Opatch files


Opatch is an Oracle utility that enables the application and rollback of patches to Oracle software.
The Oracle mechanism for determining which patches have been applied to a database is the opatch
lsinventory command. When Bring Your Own License (BYOL) customers open service requests, Oracle
Support requests the lsinventory file and sometimes the lsinventory_detail file generated by
Opatch.

To deliver a managed service experience, Amazon RDS doesn't provide shell access to Opatch. Instead,
the lsinventory-dbv.txt in the BDUMP directory contains the patch information related to
your current engine version. When you perform a minor or major upgrade, Amazon RDS updates
lsinventory-dbv.txt within an hour of applying the patch. To verify the applied patches, read
lsinventory-dbv.txt. This action is similar to running the opatch lsinventory command.
Note
The examples in this section assume that the BDUMP directory is named BDUMP. On a read
replica, the BDUMP directory name is different. To learn how to get the BDUMP name by
querying V$DATABASE.DB_UNIQUE_NAME on a read replica, see Listing files (p. 925).

The inventory files use the Amazon RDS naming convention lsinventory-dbv.txt
and lsinventory_detail-dbv.txt, where dbv is the full name of your DB version.
The lsinventory-dbv.txt file is available on all DB versions. The corresponding
lsinventory_detail-dbv.txt is available on the following DB versions:

• 19.0.0.0, ru-2020-01.rur-2020-01.r1 or later
• 12.2.0.1, ru-2020-01.rur-2020-01.r1 or later
• 12.1.0.2, v19 or later

For example, if your DB version is 19.0.0.0.ru-2021-07.rur-2021-07.r1, then your inventory files have the
following names.

lsinventory-19.0.0.0.ru-2021-07.rur-2021-07.r1.txt
lsinventory_detail-19.0.0.0.ru-2021-07.rur-2021-07.r1.txt

Ensure that you download the files that match the current version of your DB engine.

Console

To download an inventory file using the console

1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.

2. In the navigation pane, choose Databases.


3. Choose the name of the DB instance that has the log file that you want to view.
4. Choose the Logs & events tab.
5. Scroll down to the Logs section.
6. In the Logs section, search for lsinventory.
7. Select the file that you want to access, and then choose Download.

SQL

To read the lsinventory-dbv.txt in a SQL client, you can use a SELECT
statement. For this technique, use either of the following rdsadmin functions:
rdsadmin.rds_file_util.read_text_file or rdsadmin.tracefile_listing.

In the following sample query, replace dbv with your Oracle DB version. For example, your DB version
might be 19.0.0.0.ru-2020-04.rur-2020-04.r1.

SELECT text
FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP', 'lsinventory-dbv.txt'));

PL/SQL

To read the lsinventory-dbv.txt in a SQL client, you can write a PL/SQL program. This program
uses utl_file to read the file, and dbms_output to print it. These are Oracle-supplied packages.

In the following sample program, replace dbv with your Oracle DB version. For example, your DB version
might be 19.0.0.0.ru-2020-04.rur-2020-04.r1.

SET SERVEROUTPUT ON
DECLARE
v_file SYS.UTL_FILE.FILE_TYPE;
v_line VARCHAR2(1000);
v_oracle_home_type VARCHAR2(1000);
c_directory VARCHAR2(30) := 'BDUMP';
c_output_file VARCHAR2(30) := 'lsinventory-dbv.txt';
BEGIN
v_file := SYS.UTL_FILE.FOPEN(c_directory, c_output_file, 'r');
LOOP
BEGIN
SYS.UTL_FILE.GET_LINE(v_file, v_line,1000);
DBMS_OUTPUT.PUT_LINE(v_line);
EXCEPTION
WHEN no_data_found THEN
EXIT;
END;
END LOOP;
END;
/

Or query rdsadmin.tracefile_listing, and spool the output to a file. The following example
spools the output to /tmp/tracefile.txt.

SPOOL /tmp/tracefile.txt
SELECT *
FROM rdsadmin.tracefile_listing
WHERE FILENAME LIKE 'lsinventory%';
SPOOL OFF;

Managing advisor tasks


Oracle Database includes a number of advisors. Each advisor supports automated and manual tasks. You
can use procedures in the rdsadmin.rdsadmin_util package to manage some advisor tasks.

The advisor task procedures are available in the following engine versions:

• Oracle Database 21c (21.0.0)
• Version 19.0.0.0.ru-2021-01.rur-2021-01.r1 and higher Oracle Database 19c versions

For more information, see Version 19.0.0.0.ru-2021-01.rur-2021-01.r1 in the Amazon RDS for Oracle
Release Notes.
• Version 12.2.0.1.ru-2021-01.rur-2021-01.r1 and higher Oracle Database 12c (Release 2) 12.2.0.1
versions

For more information, see Version 12.2.0.1.ru-2021-01.rur-2021-01.r1 in the Amazon RDS for Oracle
Release Notes.

Topics
• Setting parameters for advisor tasks (p. 1930)
• Disabling AUTO_STATS_ADVISOR_TASK (p. 1931)
• Re-enabling AUTO_STATS_ADVISOR_TASK (p. 1932)

Setting parameters for advisor tasks


To set parameters for some advisor tasks, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.advisor_task_set_parameter. The advisor_task_set_parameter
procedure has the following parameters.

p_task_name (varchar2)
    Default: none
    Required: yes
    Description: The name of the advisor task whose parameters you want to change. The following
    values are valid:
    • AUTO_STATS_ADVISOR_TASK
    • INDIVIDUAL_STATS_ADVISOR_TASK
    • SYS_AUTO_SPM_EVOLVE_TASK
    • SYS_AUTO_SQL_TUNING_TASK

p_parameter (varchar2)
    Default: none
    Required: yes
    Description: The name of the task parameter. To find valid parameters for an advisor task, run
    the following query. Substitute p_task_name with a valid value for p_task_name:

    COL PARAMETER_NAME FORMAT a30
    COL PARAMETER_VALUE FORMAT a30
    SELECT PARAMETER_NAME, PARAMETER_VALUE
    FROM DBA_ADVISOR_PARAMETERS
    WHERE TASK_NAME='p_task_name'
    AND PARAMETER_VALUE != 'UNUSED'
    ORDER BY PARAMETER_NAME;

p_value (varchar2)
    Default: none
    Required: yes
    Description: The value for a task parameter. To find valid values for task parameters, run the
    following query. Substitute p_task_name with a valid value for p_task_name:

    COL PARAMETER_NAME FORMAT a30
    COL PARAMETER_VALUE FORMAT a30
    SELECT PARAMETER_NAME, PARAMETER_VALUE
    FROM DBA_ADVISOR_PARAMETERS
    WHERE TASK_NAME='p_task_name'
    AND PARAMETER_VALUE != 'UNUSED'
    ORDER BY PARAMETER_NAME;

The following PL/SQL program sets ACCEPT_PLANS to FALSE for SYS_AUTO_SPM_EVOLVE_TASK. The
SQL Plan Management automated task verifies the plans and generates a report of its findings, but does
not evolve the plans automatically. You can use a report to identify new SQL plan baselines and accept
them manually.

BEGIN
rdsadmin.rdsadmin_util.advisor_task_set_parameter(
p_task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
p_parameter => 'ACCEPT_PLANS',
p_value => 'FALSE');
END;

The following PL/SQL program sets EXECUTION_DAYS_TO_EXPIRE to 10 for


AUTO_STATS_ADVISOR_TASK. The predefined task AUTO_STATS_ADVISOR_TASK runs automatically in
the maintenance window once per day. The example sets the retention period for the task execution to
10 days.

BEGIN
rdsadmin.rdsadmin_util.advisor_task_set_parameter(
p_task_name => 'AUTO_STATS_ADVISOR_TASK',
p_parameter => 'EXECUTION_DAYS_TO_EXPIRE',
p_value => '10');
END;

Disabling AUTO_STATS_ADVISOR_TASK
To disable AUTO_STATS_ADVISOR_TASK, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.advisor_task_drop. The advisor_task_drop procedure accepts the
following parameter.
Note
This procedure is available in Oracle Database 12c Release 2 (12.2.0.1) and later.

p_task_name (varchar2)
    Default: none
    Required: yes
    Description: The name of the advisor task to be disabled. The only valid value is
    AUTO_STATS_ADVISOR_TASK.

The following command drops AUTO_STATS_ADVISOR_TASK.

EXEC rdsadmin.rdsadmin_util.advisor_task_drop('AUTO_STATS_ADVISOR_TASK')

You can re-enable AUTO_STATS_ADVISOR_TASK using
rdsadmin.rdsadmin_util.dbms_stats_init.

Re-enabling AUTO_STATS_ADVISOR_TASK
To re-enable AUTO_STATS_ADVISOR_TASK, use the Amazon RDS procedure
rdsadmin.rdsadmin_util.dbms_stats_init. The dbms_stats_init procedure takes no
parameters.

The following command re-enables AUTO_STATS_ADVISOR_TASK.

EXEC rdsadmin.rdsadmin_util.dbms_stats_init()

Transporting tablespaces
Use the Amazon RDS package rdsadmin.rdsadmin_transport_util to copy a set of tablespaces
from an on-premises Oracle database to an RDS for Oracle DB instance. At the physical level, the
transportable tablespace feature incrementally copies source data files and metadata files to your target
instance. You can transfer the files using either Amazon EFS or Amazon S3. For more information, see
Migrating using Oracle transportable tablespaces (p. 1962).

Topics
• Importing transported tablespaces to your DB instance (p. 1932)
• Importing transportable tablespace metadata into your DB instance (p. 1933)
• Listing orphaned files after a tablespace import (p. 1934)
• Deleting orphaned data files after a tablespace import (p. 1935)

Importing transported tablespaces to your DB instance


Use the procedure rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces to restore
tablespaces that you have previously exported from a source DB instance. In the transport phase,
you back up your read-only tablespaces, export Data Pump metadata, transfer these files to your
target DB instance, and then import the tablespaces. For more information, see Phase 4: Transport the
tablespaces (p. 1967).

Syntax

FUNCTION import_xtts_tablespaces(
p_tablespace_list IN CLOB,
p_directory_name IN VARCHAR2,
p_platform_id IN NUMBER DEFAULT 13,
p_parallel IN INTEGER DEFAULT 0) RETURN VARCHAR2;

Parameters

• p_tablespace_list (data type CLOB, no default, required): The list of tablespaces to import.
• p_directory_name (data type VARCHAR2, no default, required): The directory that contains the
  tablespace backups.
• p_platform_id (data type NUMBER, default 13, optional): Provide a platform ID that matches the one
  specified during the backup phase. To find a list of platforms, query V$TRANSPORTABLE_PLATFORM.
  The default platform is Linux x86 64-bit, which is little endian.
• p_parallel (data type INTEGER, default 0, optional): The degree of parallelism. By default,
  parallelism is disabled.

Examples

The following example imports the tablespaces TBS1, TBS2, and TBS3 from the directory
DATA_PUMP_DIR.

VAR task_id CLOB

BEGIN

:task_id:=rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces('TBS1,TBS2,TBS3','DATA_PUMP_DIR');
END;
/

PRINT task_id
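
If your source platform isn't the default (Linux x86 64-bit) or you want to parallelize the restore,
you can pass the optional parameters by name. The following sketch assumes a source platform ID of 2
and a degree of parallelism of 4; query V$TRANSPORTABLE_PLATFORM to find the platform ID that matches
your source database.

VAR task_id CLOB

BEGIN
  :task_id := rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces(
    p_tablespace_list => 'TBS1,TBS2,TBS3',
    p_directory_name  => 'DATA_PUMP_DIR',
    p_platform_id     => 2,
    p_parallel        => 4);
END;
/

PRINT task_id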

Importing transportable tablespace metadata into your DB instance


Use the procedure rdsadmin.rdsadmin_transport_util.import_xtts_metadata to import
transportable tablespace metadata into your RDS for Oracle DB instance. During the operation, the
status of the metadata import is shown in the table rdsadmin.rds_xtts_operation_info. For more
information, see Step 5: Import tablespace metadata on your target DB instance (p. 1969).
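
While the import runs, you can poll this status table to monitor progress. The column layout isn't
documented here, so the following sketch simply selects all columns.

SELECT * FROM rdsadmin.rds_xtts_operation_info;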

Syntax

PROCEDURE import_xtts_metadata(
p_datapump_metadata_file IN SYS.DBA_DATA_FILES.FILE_NAME%TYPE,
p_directory_name IN VARCHAR2,
p_exclude_stats IN BOOLEAN DEFAULT FALSE,
p_remap_tablespace_list IN CLOB DEFAULT NULL,
p_remap_user_list IN CLOB DEFAULT NULL);

Parameters

• p_datapump_metadata_file (data type SYS.DBA_DATA_FILES.FILE_NAME%TYPE, no default, required): The
  name of the Oracle Data Pump file that contains the metadata for your transportable tablespaces.
• p_directory_name (data type VARCHAR2, no default, required): The directory that contains the Data
  Pump file.
• p_exclude_stats (data type BOOLEAN, default FALSE, optional): Flag that indicates whether to
  exclude statistics.
• p_remap_tablespace_list (data type CLOB, default NULL, optional): A list of tablespaces to be
  remapped during the metadata import. Use the format from_tbs:to_tbs. For example, specify
  users:user_data.
• p_remap_user_list (data type CLOB, default NULL, optional): A list of user schemas to be remapped
  during the metadata import. Use the format from_schema_name:to_schema_name. For example, specify
  hr:human_resources.

Examples

The example imports the tablespace metadata from the file xttdump.dmp, which is located in directory
DATA_PUMP_DIR.

BEGIN
rdsadmin.rdsadmin_transport_util.import_xtts_metadata('xttdump.dmp','DATA_PUMP_DIR');
END;
/
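
If the schema or tablespace names differ between the source and target, you can add the remap
parameters described above. The following sketch combines both remap lists; the source and target
names are placeholders.

BEGIN
  rdsadmin.rdsadmin_transport_util.import_xtts_metadata(
    p_datapump_metadata_file => 'xttdump.dmp',
    p_directory_name         => 'DATA_PUMP_DIR',
    p_remap_tablespace_list  => 'users:user_data',
    p_remap_user_list        => 'hr:human_resources');
END;
/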

Listing orphaned files after a tablespace import


Use the rdsadmin.rdsadmin_transport_util.list_xtts_orphan_files procedure to list data
files that were orphaned after a tablespace import. After you identify the data files, you can delete them
by calling rdsadmin.rdsadmin_transport_util.cleanup_incomplete_xtts_import.

Syntax

FUNCTION list_xtts_orphan_files RETURN xtts_orphan_files_list_t PIPELINED;

Examples

The following example queries the rdsadmin.rdsadmin_transport_util.list_xtts_orphan_files function.
The output shows two orphaned data files.

SQL> SELECT * FROM TABLE(rdsadmin.rdsadmin_transport_util.list_xtts_orphan_files);

FILENAME       FILESIZE
-------------- ---------
datafile_7.dbf 104865792
datafile_8.dbf 104865792

Deleting orphaned data files after a tablespace import


Use the rdsadmin.rdsadmin_transport_util.cleanup_incomplete_xtts_import procedure to delete data
files that were orphaned after a tablespace import. Running this procedure generates a log file that
uses the name format rds-xtts-delete_xtts_orphaned_files-YYYY-MM-DD.HH24-MI-SS.FF.log in the BDUMP
directory. To find the orphaned files before you delete them, use the procedure
rdsadmin.rdsadmin_transport_util.list_xtts_orphan_files. You can read the log file by calling the
procedure rdsadmin.rds_file_util.read_text_file. For more information, see Phase 6: Clean up
leftover files (p. 1970).

Syntax

PROCEDURE cleanup_incomplete_xtts_import(
p_directory_name IN VARCHAR2);

Parameters

• p_directory_name (data type VARCHAR2, no default, required): The directory that contains the
  orphaned data files.

Examples

The following example deletes the orphaned data files in DATA_PUMP_DIR.

BEGIN
rdsadmin.rdsadmin_transport_util.cleanup_incomplete_xtts_import('DATA_PUMP_DIR');
END;
/

The following example reads the log file generated by the previous command.

SELECT *
FROM TABLE(rdsadmin.rds_file_util.read_text_file(
p_directory => 'BDUMP',
p_filename => 'rds-xtts-delete_xtts_orphaned_files-2023-06-01.09-33-11.868894000.log'));

TEXT
--------------------------------------------------------------------------------
orphan transported datafile datafile_7.dbf deleted.
orphan transported datafile datafile_8.dbf deleted.


Configuring advanced RDS for Oracle features


RDS for Oracle supports various advanced features, including HugePages, an instance store, and
extended data types.

Topics
• Storing temporary data in an RDS for Oracle instance store (p. 1936)
• Turning on HugePages for an RDS for Oracle instance (p. 1942)
• Turning on extended data types in RDS for Oracle (p. 1945)

Storing temporary data in an RDS for Oracle instance store

Use an instance store for the temporary tablespaces and the Database Smart Flash Cache (the flash
cache) on supported RDS for Oracle DB instance classes.

Topics
• Overview of the RDS for Oracle instance store (p. 1936)
• Turning on an RDS for Oracle instance store (p. 1938)
• Configuring an RDS for Oracle instance store (p. 1938)
• Considerations when changing the DB instance type (p. 1940)
• Working with an instance store on an Oracle read replica (p. 1941)
• Configuring a temporary tablespace group on an instance store and Amazon EBS (p. 1941)
• Removing an RDS for Oracle instance store (p. 1942)

Overview of the RDS for Oracle instance store


An instance store provides temporary block-level storage for an RDS for Oracle DB instance. You can use
an instance store for temporary storage of information that changes frequently.

An instance store is based on Non-Volatile Memory Express (NVMe) devices that are physically attached
to the host computer. The storage is optimized for low latency, random I/O performance, and sequential
read throughput.

The size of the instance store varies by DB instance type. For more information about the instance store,
see Amazon EC2 instance store in the Amazon Elastic Compute Cloud User Guide for Linux Instances.

Topics
• Types of data in the RDS for Oracle instance store (p. 1936)
• Benefits of the RDS for Oracle instance store (p. 1937)
• Supported instance classes for the RDS for Oracle instance store (p. 1937)
• Supported engine versions for the RDS for Oracle instance store (p. 1938)
• Supported AWS Regions for the RDS for Oracle instance store (p. 1938)
• Cost of the RDS for Oracle instance store (p. 1938)

Types of data in the RDS for Oracle instance store


You can place the following types of RDS for Oracle temporary data in an instance store:


A temporary tablespace

Oracle Database uses temporary tablespaces to store intermediate query results that don't fit in
memory. Larger queries can generate large amounts of intermediate data that needs to be cached
temporarily, but doesn't need to persist. In particular, a temporary tablespace is useful for sorts,
hash aggregations, and joins. If your RDS for Oracle DB instance uses the Enterprise Edition or
Standard Edition 2, you can place a temporary tablespace in an instance store.
The flash cache

The flash cache improves the performance of single-block random reads in the conventional path. A
best practice is to size the cache to accommodate most of your active data set. If your RDS for Oracle
DB instance uses the Enterprise Edition, you can place the flash cache in an instance store.

By default, an instance store is configured for a temporary tablespace but not for the flash cache. You
can't place Oracle data files and database log files in an instance store.

Benefits of the RDS for Oracle instance store


You might consider using an instance store to store temporary files and caches that you can afford
to lose. If you want to improve DB performance, or if an increasing workload is causing performance
problems for your Amazon EBS storage, consider scaling to an instance class that supports an instance
store.

By placing your temporary tablespace and flash cache on an instance store, you get the following
benefits:

• Lower read latencies


• Higher throughput
• Reduced load on your Amazon EBS volumes
• Lower storage and snapshot costs because of reduced Amazon EBS load
• Less need to provision high IOPS, possibly lowering your overall cost

By placing your temporary tablespace on the instance store, you deliver an immediate performance
boost to queries that use temporary space. When you place the flash cache on the instance store, cached
block reads typically have much lower latency than Amazon EBS reads. The flash cache needs to be
"warmed up" before it delivers performance benefits. The cache warms up by itself because the database
writes blocks to the flash cache as they age out of the database buffer cache.
Note
In some cases, the flash cache causes performance overhead because of cache management.
Before you turn on the flash cache in a production environment, we recommend that you
analyze your workload and test the cache in a test environment.

Supported instance classes for the RDS for Oracle instance store
Amazon RDS supports the instance store for the following DB instance classes:

• db.m5d
• db.r5d
• db.x2idn
• db.x2iedn

RDS for Oracle supports the preceding DB instance classes for the BYOL licensing model only. For more
information, see Supported RDS for Oracle instance classes (p. 1797) and Bring Your Own License
(BYOL) (p. 1793).


To see the total instance storage for the supported DB instance types, run the following command in the
AWS CLI.

Example

aws ec2 describe-instance-types \
    --filters "Name=instance-type,Values=*5d.*large*" \
    --query "InstanceTypes[?contains(InstanceType,'m5d')||contains(InstanceType,'r5d')][InstanceType, InstanceStorageInfo.TotalSizeInGB]" \
    --output table

The preceding command returns the raw device size for the instance store. RDS for Oracle uses a small
portion of this space for configuration. The space in the instance store that is available for temporary
tablespaces or the flash cache is slightly smaller.

Supported engine versions for the RDS for Oracle instance store
The instance store is supported for the following RDS for Oracle engine versions:

• 21.0.0.0.ru-2022-01.rur-2022-01.r1 or higher Oracle Database 21c versions


• 19.0.0.0.ru-2021-10.rur-2021-10.r1 or higher Oracle Database 19c versions

Supported AWS Regions for the RDS for Oracle instance store
The instance store is available in all AWS Regions where one or more of these instance types are
supported. For more information on the db.m5d and db.r5d instance classes, see DB instance
classes (p. 11). For more information on the instance classes supported by Amazon RDS for Oracle, see
RDS for Oracle instance classes (p. 1796).

Cost of the RDS for Oracle instance store


The cost of the instance store is included in the cost of the DB instance classes that support an
instance store. You don't incur additional costs by turning on an instance store for an RDS for
Oracle DB instance. For more information about the supported instance classes, see Supported
instance classes for the RDS for Oracle instance store (p. 1937).

Turning on an RDS for Oracle instance store


To turn on the instance store for RDS for Oracle temporary data, do one of the following:

• Create an RDS for Oracle DB instance using a supported instance class. For more information, see
Creating an Amazon RDS DB instance (p. 300).
• Modify an existing RDS for Oracle DB instance to use a supported instance class. For more information,
see Modifying an Amazon RDS DB instance (p. 401).

Configuring an RDS for Oracle instance store


By default, 100% of instance store space is allocated to the temporary tablespace. To configure
the instance store to allocate space to the flash cache and temporary tablespace, set the following
parameters in the parameter group for your instance:

db_flash_cache_size={DBInstanceStore*{0,2,4,6,8,10}/10}

This parameter specifies the amount of storage space allocated for the flash cache. This parameter is
valid only for Oracle Database Enterprise Edition. The default value is {DBInstanceStore*0/10}.
If you set a nonzero value for db_flash_cache_size, your RDS for Oracle instance enables the
flash cache after you restart the instance.


rds.instance_store_temp_size={DBInstanceStore*{0,2,4,6,8,10}/10}

This parameter specifies the amount of storage space allocated for the temporary tablespace.
The default value is {DBInstanceStore*10/10}. This parameter is modifiable for Oracle
Database Enterprise Edition and read-only for Standard Edition 2. If you set a nonzero value for
rds.instance_store_temp_size, Amazon RDS allocates space in the instance store for the
temporary tablespace.

You can set the db_flash_cache_size and rds.instance_store_temp_size parameters for DB instances
that don't use an instance store. In this case, both settings evaluate to 0, which turns off the
feature. You can therefore use the same parameter group for different instance sizes and for
instances that don't use an instance store. If you modify these parameters, make sure to reboot the
associated instances so that the changes take effect.
Important
If you allocate space for a temporary tablespace, Amazon RDS doesn't create the temporary
tablespace automatically. To learn how to create the temporary tablespace on the instance
store, see Creating a temporary tablespace on the instance store (p. 1871).

The combined value of the preceding parameters must not exceed 10/10, or 100%. The following
examples illustrate valid and invalid parameter settings.

• db_flash_cache_size={DBInstanceStore*0/10} and rds.instance_store_temp_size={DBInstanceStore*10/10}
  This is a valid configuration for all editions of Oracle Database. Amazon RDS allocates 100% of
  instance store space to the temporary tablespace. This is the default.
• db_flash_cache_size={DBInstanceStore*10/10} and rds.instance_store_temp_size={DBInstanceStore*0/10}
  This is a valid configuration for Oracle Database Enterprise Edition only. Amazon RDS allocates
  100% of instance store space to the flash cache.
• db_flash_cache_size={DBInstanceStore*2/10} and rds.instance_store_temp_size={DBInstanceStore*8/10}
  This is a valid configuration for Oracle Database Enterprise Edition only. Amazon RDS allocates
  20% of instance store space to the flash cache and 80% to the temporary tablespace.
• db_flash_cache_size={DBInstanceStore*6/10} and rds.instance_store_temp_size={DBInstanceStore*4/10}
  This is a valid configuration for Oracle Database Enterprise Edition only. Amazon RDS allocates
  60% of instance store space to the flash cache and 40% to the temporary tablespace.
• db_flash_cache_size={DBInstanceStore*2/10} and rds.instance_store_temp_size={DBInstanceStore*4/10}
  This is a valid configuration for Oracle Database Enterprise Edition only. Amazon RDS allocates
  20% of instance store space to the flash cache and 40% to the temporary tablespace.
• db_flash_cache_size={DBInstanceStore*8/10} and rds.instance_store_temp_size={DBInstanceStore*8/10}
  This is an invalid configuration because the combined percentage of instance store space exceeds
  100%. In such cases, Amazon RDS fails the attempt.
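
For example, you might apply one of the valid configurations shown above with the AWS CLI. The
following sketch assumes a custom DB parameter group named my-oracle-ee-19 that is already attached
to your DB instance; because the changes require a reboot, the example uses pending-reboot as the
apply method.

aws rds modify-db-parameter-group \
    --db-parameter-group-name my-oracle-ee-19 \
    --parameters '[
      {"ParameterName":"db_flash_cache_size","ParameterValue":"{DBInstanceStore*2/10}","ApplyMethod":"pending-reboot"},
      {"ParameterName":"rds.instance_store_temp_size","ParameterValue":"{DBInstanceStore*8/10}","ApplyMethod":"pending-reboot"}
    ]'

After the parameter change, reboot the DB instance so that the new allocation takes effect.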

Considerations when changing the DB instance type


If you change your DB instance type, it can affect the configuration of the flash cache or the temporary
tablespace on the instance store. Consider the following modifications and their effects:

You scale up or scale down the DB instance that supports the instance store.

The following values increase or decrease proportionally to the new size of the instance store:
• The new size of the flash cache.
• The space allocated to the temporary tablespaces that reside in the instance store.

For example, the setting db_flash_cache_size={DBInstanceStore*6/10} on a db.m5d.4xlarge instance
provides around 340 GB of flash cache space. If you scale up the instance type to db.m5d.8xlarge,
the flash cache space increases to around 680 GB.


You modify a DB instance that doesn't use an instance store to an instance that does use an instance
store.

If db_flash_cache_size is set to a value larger than 0, the flash cache is configured. If
rds.instance_store_temp_size is set to a value larger than 0, the instance store space is
allocated for use by a temporary tablespace. RDS for Oracle doesn't move tempfiles to the instance
store automatically. For information about using the allocated space, see Creating a temporary
tablespace on the instance store (p. 1871) or Adding a tempfile to the instance store on a read
replica (p. 1872).
You modify a DB instance that uses an instance store to an instance that doesn't use an instance
store.

In this case, RDS for Oracle removes the flash cache. RDS re-creates the tempfile that is currently
located on the instance store on an Amazon EBS volume. The maximum size of the new tempfile is
the former size of the rds.instance_store_temp_size parameter.

Working with an instance store on an Oracle read replica


Read replicas support the flash cache and temporary tablespaces on an instance store. While the flash
cache works the same way as on the primary DB instance, note the following differences for temporary
tablespaces:

• You can't create a temporary tablespace on a read replica. If you create a new temporary tablespace on
the primary instance, RDS for Oracle replicates the tablespace information without tempfiles. To add a
new tempfile, use either of the following techniques (see the sketch after this list):
• Use the Amazon RDS procedure rdsadmin.rdsadmin_util.add_inst_store_tempfile. RDS
for Oracle creates a tempfile in the instance store on your read replica, and adds it to the specified
temporary tablespace.
• Run the ALTER TABLESPACE … ADD TEMPFILE command. RDS for Oracle places the tempfile on
Amazon EBS storage.
Note
The tempfile sizes and storage types can be different on the primary DB instance and the read
replica.
• You can manage the default temporary tablespace setting only on the primary DB instance. RDS for
Oracle replicates the setting to all read replicas.
• You can configure the temporary tablespace groups only on the primary DB instance. RDS for Oracle
replicates the setting to all read replicas.
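
The following sketch shows both techniques on a read replica. The call form of
add_inst_store_tempfile, including the parameter name p_tablespace_name, is an assumption based on
the rdsadmin naming conventions used elsewhere in this guide, so verify the signature on your
instance before relying on it. The tablespace name and tempfile size are placeholders.

-- Assumed signature: adds a tempfile in the instance store to an existing temporary tablespace.
EXEC rdsadmin.rdsadmin_util.add_inst_store_tempfile(p_tablespace_name => 'temp_in_inst_store');

-- Standard Oracle syntax: adds a tempfile on Amazon EBS storage instead.
ALTER TABLESPACE temp_in_inst_store ADD TEMPFILE SIZE 100G;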

Configuring a temporary tablespace group on an instance store and Amazon EBS

You can configure a temporary tablespace group to include temporary tablespaces on both an instance
store and Amazon EBS. This technique is useful when you want more temporary storage than is allowed
by the maximum setting of rds.instance_store_temp_size.

When you configure a temporary tablespace group on both an instance store and Amazon EBS, the
two tablespaces have significantly different performance characteristics. Oracle Database chooses
the tablespace to serve queries based on an internal algorithm. Therefore, similar queries can vary in
performance.

Typically, you create a temporary tablespace in the instance store as follows:

1. Create a temporary tablespace in the instance store.


2. Set the new tablespace as the database default temporary tablespace.


If the tablespace size in the instance store is insufficient, you can create additional temporary storage as
follows:

1. Assign the temporary tablespace in the instance store to a temporary tablespace group.
2. Create a new temporary tablespace in Amazon EBS if one doesn't exist.
3. Assign the temporary tablespace in Amazon EBS to the same tablespace group that includes the
instance store tablespace.
4. Set the tablespace group as the default temporary tablespace.

The following example assumes that the size of the temporary tablespace in the instance store
doesn't meet your application requirements. The example creates the temporary tablespace
temp_in_inst_store in the instance store, assigns it to tablespace group temp_group, adds the
existing Amazon EBS tablespace named temp_in_ebs to this group, and sets this group as the default
temporary tablespace.

SQL> EXEC rdsadmin.rdsadmin_util.create_inst_store_tmp_tblspace('temp_in_inst_store');

PL/SQL procedure successfully completed.

SQL> ALTER TABLESPACE temp_in_inst_store TABLESPACE GROUP temp_group;

Tablespace altered.

SQL> ALTER TABLESPACE temp_in_ebs TABLESPACE GROUP temp_group;

Tablespace altered.

SQL> EXEC rdsadmin.rdsadmin_util.alter_default_temp_tablespace('temp_group');

PL/SQL procedure successfully completed.

SQL> SELECT * FROM DBA_TABLESPACE_GROUPS;

GROUP_NAME TABLESPACE_NAME
------------------------------ ------------------------------
TEMP_GROUP TEMP_IN_EBS
TEMP_GROUP TEMP_IN_INST_STORE

SQL> SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME='DEFAULT_TEMP_TABLESPACE';

PROPERTY_VALUE
--------------
TEMP_GROUP

Removing an RDS for Oracle instance store


To remove the instance store, modify your RDS for Oracle DB instance to use an instance type that
doesn't support instance store, such as db.m5 or db.r5.

Turning on HugePages for an RDS for Oracle instance


Amazon RDS for Oracle supports Linux kernel HugePages for increased database scalability. HugePages
results in smaller page tables and less CPU time spent on memory management, increasing the
performance of large database instances. For more information, see Overview of HugePages in the
Oracle documentation.

You can use HugePages with all supported versions and editions of RDS for Oracle.


The use_large_pages parameter controls whether HugePages are turned on for a DB instance. The
possible settings for this parameter are ONLY, FALSE, and {DBInstanceClassHugePagesDefault}.
The use_large_pages parameter is set to {DBInstanceClassHugePagesDefault} in the default
DB parameter group for Oracle.

To control whether HugePages are turned on for a DB instance automatically, you can use the
DBInstanceClassHugePagesDefault formula variable in parameter groups. The value is determined
as follows:

• For the DB instance classes mentioned in the table following, DBInstanceClassHugePagesDefault
always evaluates to FALSE by default, and use_large_pages evaluates to FALSE. You can turn
on HugePages manually for these DB instance classes if the DB instance class has at least 14 GiB of
memory.
• For DB instance classes not mentioned in the table following, if the DB instance class has less than
14 GiB of memory, DBInstanceClassHugePagesDefault always evaluates to FALSE. Also,
use_large_pages evaluates to FALSE.
• For DB instance classes not mentioned in the table following, if the instance class has at least 14 GiB
of memory and less than 100 GiB of memory, DBInstanceClassHugePagesDefault evaluates to
TRUE by default. Also, use_large_pages evaluates to ONLY. You can turn off HugePages manually by
setting use_large_pages to FALSE.
• For DB instance classes not mentioned in the table following, if the instance class has at least
100 GiB of memory, DBInstanceClassHugePagesDefault always evaluates to TRUE. Also,
use_large_pages evaluates to ONLY and HugePages can't be disabled.

HugePages are not turned on by default for the following DB instance classes.

DB instance class family    DB instance classes with HugePages not turned on by default

db.m5                       db.m5.large

db.m4                       db.m4.large, db.m4.xlarge, db.m4.2xlarge, db.m4.4xlarge, db.m4.10xlarge

db.t3                       db.t3.micro, db.t3.small, db.t3.medium, db.t3.large

For more information about DB instance classes, see Hardware specifications for DB instance
classes (p. 87).

To turn on HugePages for new or existing DB instances manually, set the use_large_pages parameter
to ONLY. You can't use HugePages with Oracle Automatic Memory Management (AMM). If you set
the parameter use_large_pages to ONLY, then you must also set both memory_target and
memory_max_target to 0. For more information about setting DB parameters for your DB instance, see
Working with parameter groups (p. 347).

You can also set the sga_target, sga_max_size, and pga_aggregate_target parameters. When
you set system global area (SGA) and program global area (PGA) memory parameters, add the values
together. Subtract this total from your available instance memory (DBInstanceClassMemory) to
determine the free memory beyond the HugePages allocation. You must leave free memory of at least 2
GiB, or 10 percent of the total available instance memory, whichever is smaller.
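
For example, suppose that your DB instance class makes roughly 256 GiB available to the database
(DBInstanceClassMemory). Setting sga_target and sga_max_size to {DBInstanceClassMemory*3/4}
allocates about 192 GiB, and setting pga_aggregate_target to {DBInstanceClassMemory*1/8} allocates
about 32 GiB, for a combined total of about 224 GiB. That leaves roughly 32 GiB free, which exceeds
the required minimum of 2 GiB (the smaller of 2 GiB and 10 percent of 256 GiB). These numbers are
illustrative only; your instance class determines the actual value of DBInstanceClassMemory.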

After you configure your parameters, you must reboot your DB instance for the changes to take effect.
For more information, see Rebooting a DB instance (p. 436).
Note
The Oracle DB instance defers changes to SGA-related initialization parameters until you reboot
the instance without failover. In the Amazon RDS console, choose Reboot but do not choose


Reboot with failover. In the AWS CLI, call the reboot-db-instance command with the --
no-force-failover parameter. The DB instance does not process the SGA-related parameters
during failover or during other maintenance operations that cause the instance to restart.

The following is a sample parameter configuration for HugePages that enables HugePages manually. You
should set the values to meet your needs.

memory_target = 0
memory_max_target = 0
pga_aggregate_target = {DBInstanceClassMemory*1/8}
sga_target = {DBInstanceClassMemory*3/4}
sga_max_size = {DBInstanceClassMemory*3/4}
use_large_pages = ONLY

Assume that the following parameter values are set in a parameter group.

memory_target = IF({DBInstanceClassHugePagesDefault}, 0,
{DBInstanceClassMemory*3/4})
memory_max_target = IF({DBInstanceClassHugePagesDefault}, 0,
{DBInstanceClassMemory*3/4})
pga_aggregate_target = IF({DBInstanceClassHugePagesDefault},
{DBInstanceClassMemory*1/8}, 0)
sga_target = IF({DBInstanceClassHugePagesDefault},
{DBInstanceClassMemory*3/4}, 0)
sga_max_size = IF({DBInstanceClassHugePagesDefault},
{DBInstanceClassMemory*3/4}, 0)
use_large_pages = {DBInstanceClassHugePagesDefault}

The parameter group is used by a db.r4 DB instance class with less than 100 GiB of memory. With
these parameter settings and use_large_pages set to {DBInstanceClassHugePagesDefault},
HugePages are turned on for the db.r4 instance.

Consider another example with the following parameter values set in a parameter group.

memory_target = IF({DBInstanceClassHugePagesDefault}, 0,
{DBInstanceClassMemory*3/4})
memory_max_target = IF({DBInstanceClassHugePagesDefault}, 0,
{DBInstanceClassMemory*3/4})
pga_aggregate_target = IF({DBInstanceClassHugePagesDefault},
{DBInstanceClassMemory*1/8}, 0)
sga_target = IF({DBInstanceClassHugePagesDefault},
{DBInstanceClassMemory*3/4}, 0)
sga_max_size = IF({DBInstanceClassHugePagesDefault},
{DBInstanceClassMemory*3/4}, 0)
use_large_pages = FALSE

The parameter group is used by a db.r4 DB instance class and a db.r5 DB instance class, both with less
than 100 GiB of memory. With these parameter settings, HugePages are turned off on the db.r4 and
db.r5 instances.
Note
If this parameter group is used by a db.r4 DB instance class or db.r5 DB instance class with at
least 100 GiB of memory, the FALSE setting for use_large_pages is overridden and set to
ONLY. In this case, a customer notification regarding the override is sent.

After HugePages are active on your DB instance, you can view HugePages information by
enabling enhanced monitoring. For more information, see Monitoring OS metrics with Enhanced
Monitoring (p. 797).


Turning on extended data types in RDS for Oracle


Amazon RDS for Oracle supports extended data types. With extended data types, the maximum size
is 32,767 bytes for the VARCHAR2, NVARCHAR2, and RAW data types. To use extended data types, set
the MAX_STRING_SIZE parameter to EXTENDED. For more information, see Extended data types in the
Oracle documentation.

If you don't want to use extended data types, keep the MAX_STRING_SIZE parameter set to STANDARD
(the default). In this case, the size limits are 4,000 bytes for the VARCHAR2 and NVARCHAR2 data types,
and 2,000 bytes for the RAW data type.

You can turn on extended data types on a new or existing DB instance. For new DB instances, DB instance
creation time is typically longer when you turn on extended data types. For existing DB instances, the DB
instance is unavailable during the conversion process.

Considerations for extended data types


Consider the following when you enable extended data types for your DB instance:

• When you turn on extended data types, you can't change the DB instance back to use the standard
size for data types. After a DB instance is converted to use extended data types, if you set the
MAX_STRING_SIZE parameter back to STANDARD it results in the incompatible-parameters
status.
• When you restore a DB instance that uses extended data types, you must specify a parameter group
with the MAX_STRING_SIZE parameter set to EXTENDED. During restore, if you specify the default
parameter group or any other parameter group with MAX_STRING_SIZE set to STANDARD it results in
the incompatible-parameters status.
• When the DB instance status is incompatible-parameters because of the MAX_STRING_SIZE
setting, the DB instance remains unavailable until you set the MAX_STRING_SIZE parameter to
EXTENDED and reboot the DB instance.
• We recommend that you don't turn on extended data types for Oracle DB instances running on the
t2.micro DB instance class.

Turning on extended data types for a new DB instance


To turn on extended data types for a new DB instance

1. Set the MAX_STRING_SIZE parameter to EXTENDED in a parameter group.

To set the parameter, you can either create a new parameter group or modify an existing parameter
group.

For more information, see Working with parameter groups (p. 347).
2. Create a new RDS for Oracle DB instance.

For more information, see Creating an Amazon RDS DB instance (p. 300).
3. Associate the parameter group with MAX_STRING_SIZE set to EXTENDED with the DB instance.

For more information, see Creating an Amazon RDS DB instance (p. 300).


Turning on extended data types for an existing DB instance


When you modify a DB instance to turn on extended data types, RDS converts the data in the database
to use the extended sizes. The conversion and downtime occur when you next reboot the database after
the parameter change. The DB instance is unavailable during the conversion.

The amount of time it takes to convert the data depends on the DB instance class, the database size, and
the time of the last DB snapshot. To reduce downtime, consider taking a snapshot immediately before
rebooting. This shortens the time of the backup that occurs during the conversion workflow.
Note
After you turn on extended data types, you can't perform a point-in-time restore to a time
during the conversion. You can restore to the time immediately before the conversion or after
the conversion.

To turn on extended data types for an existing DB instance

1. Take a snapshot of the database.

If there are invalid objects in the database, Amazon RDS tries to recompile them. The conversion
to extended data types can fail if Amazon RDS can't recompile an invalid object. The snapshot
enables you to restore the database if there is a problem with the conversion. Always check for
invalid objects before conversion and fix or drop those invalid objects. For production databases, we
recommend testing the conversion process on a copy of your DB instance first.

For more information, see Creating a DB snapshot (p. 613).


2. Set the MAX_STRING_SIZE parameter to EXTENDED in a parameter group.

To set the parameter, you can either create a new parameter group or modify an existing parameter
group.

For more information, see Working with parameter groups (p. 347).
3. Modify the DB instance to associate it with the parameter group with MAX_STRING_SIZE set to
EXTENDED.

For more information, see Modifying an Amazon RDS DB instance (p. 401).
4. Reboot the DB instance for the parameter change to take effect.

For more information, see Rebooting a DB instance (p. 436).
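
After the reboot and conversion complete, you can confirm the setting from a SQL client. The
following query is a quick, illustrative check; it isn't part of the conversion procedure.

SELECT VALUE
FROM   V$PARAMETER
WHERE  NAME = 'max_string_size';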


Importing data into Oracle on Amazon RDS


How you import data into an Amazon RDS for Oracle DB instance depends on the following:

• The amount of data you have


• The number of database objects in your database
• The variety of database objects in your database

For example, you can use the following tools, depending on your requirements:

• Oracle SQL Developer – Import a simple, 20 MB database.


• Oracle Data Pump – Import complex databases, or databases that are several hundred megabytes or
several terabytes in size. For example, you can transport tablespaces from an on-premises database
to your RDS for Oracle DB instance. You can use Amazon S3 or Amazon EFS to transfer the data files
and metadata. For more information, see Migrating using Oracle transportable tablespaces (p. 1962),
Amazon EFS integration (p. 2020), and Amazon S3 integration (p. 1992).
• AWS Database Migration Service (AWS DMS) – Migrate databases without downtime. For more
information about AWS DMS, see What is AWS Database Migration Service and the blog post
Migrating Oracle databases with near-zero downtime using AWS DMS.

Important
Before you use the preceding migration techniques, we recommend that you back up your
database. After you import the data, you can back up your RDS for Oracle DB instances by
creating snapshots. Later, you can restore the snapshots. For more information, see Backing up
and restoring (p. 590).

For many database engines, ongoing replication can continue until you are ready to switch over to the
target database. You can use AWS DMS to migrate to RDS for Oracle from either the same database
engine or a different engine. If you migrate from a different database engine, you can use the AWS
Schema Conversion Tool to migrate schema objects that AWS DMS doesn't migrate.

Topics
• Importing using Oracle SQL Developer (p. 1947)
• Importing using Oracle Data Pump (p. 1948)
• Importing using Oracle Export/Import (p. 1959)
• Importing using Oracle SQL*Loader (p. 1959)
• Migrating with Oracle materialized views (p. 1960)
• Migrating using Oracle transportable tablespaces (p. 1962)

Importing using Oracle SQL Developer


For small databases, you can use Oracle SQL Developer, a graphical Java tool distributed without cost
by Oracle. You can install this tool on your desktop computer (Windows, Linux, or Mac) or on one of
your servers. SQL Developer provides options for migrating data between two Oracle databases, or for
migrating data from other databases, such as MySQL, to an Oracle database. SQL Developer is best
suited for migrating small databases. We recommend that you read the Oracle SQL Developer product
documentation before you begin migrating your data.

After you install SQL Developer, you can use it to connect to your source and target databases. Use the
Database Copy command on the Tools menu to copy your data to your Amazon RDS instance.


To download SQL Developer, go to http://www.oracle.com/technetwork/developer-tools/sql-developer.

Oracle also has documentation on how to migrate from other databases, including MySQL and SQL
Server. For more information, see http://www.oracle.com/technetwork/database/migration in the
Oracle documentation.

Importing using Oracle Data Pump


Oracle Data Pump is a utility that allows you to export Oracle data to a dump file and import it into
another Oracle database. It is a long-term replacement for the Oracle Export/Import utilities. Oracle
Data Pump is the recommended way to move large amounts of data from an Oracle database to an
Amazon RDS DB instance.

The examples in this section show one way to import data into an Oracle database, but Oracle Data
Pump supports other techniques. For more information, see the Oracle Database documentation.

The examples in this section use the DBMS_DATAPUMP package. You can accomplish the same tasks
using the Oracle Data Pump command line utilities impdp and expdp. You can install these utilities
on a remote host as part of an Oracle Client installation, including Oracle Instant Client. For more
information, see How do I use Oracle Instant Client to run Data Pump Import or Export for my Amazon
RDS for Oracle DB instance?
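
For reference, a minimal expdp invocation that corresponds to the DBMS_DATAPUMP export examples
later in this section might look like the following. The user, connection string, and file names are
placeholders, and the user needs the privileges described in the steps that follow.

expdp admin@my_connect_string \
  DUMPFILE=sample.dmp \
  DIRECTORY=DATA_PUMP_DIR \
  SCHEMAS=SCHEMA_1 \
  LOGFILE=sample_exp.log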

Topics
• Overview of Oracle Data Pump (p. 1948)
• Importing data with Oracle Data Pump and an Amazon S3 bucket (p. 1950)
• Importing data with Oracle Data Pump and a database link (p. 1954)

Overview of Oracle Data Pump


Oracle Data Pump is made up of the following components:

• Command-line clients expdp and impdp


• The DBMS_DATAPUMP PL/SQL package
• The DBMS_METADATA PL/SQL package

You can use Oracle Data Pump for the following scenarios:

• Import data from an Oracle database, either on-premises or on an Amazon EC2 instance, to an RDS for
Oracle DB instance.
• Import data from an RDS for Oracle DB instance to an Oracle database, either on-premises or on an
Amazon EC2 instance.
• Import data between RDS for Oracle DB instances, for example, to migrate data from EC2-Classic to
VPC.

To download Oracle Data Pump utilities, see Oracle database software downloads on the Oracle
Technology Network website. For compatibility considerations when migrating between versions of
Oracle Database, see the Oracle Database documentation.

Oracle Data Pump workflow


Typically, you use Oracle Data Pump in the following stages:

1. Export your data into a dump file on the source database.


2. Upload your dump file to your destination RDS for Oracle DB instance. You can transfer using an
Amazon S3 bucket or by using a database link between the two databases.
3. Import the data from your dump file into your RDS for Oracle DB instance.

Oracle Data Pump best practices


When you use Oracle Data Pump to import data into an RDS for Oracle instance, we recommend the
following best practices:

• Perform imports in schema or table mode to import specific schemas and objects.
• Limit the schemas you import to those required by your application.
• Don't import in full mode or import schemas for system-maintained components.

Because RDS for Oracle doesn't allow access to SYS or SYSDBA administrative users, these actions
might damage the Oracle data dictionary and affect the stability of your database.
• When loading large amounts of data, do the following:
1. Transfer the dump file to the target RDS for Oracle DB instance.
2. Take a DB snapshot of your instance.
3. Test the import to verify that it succeeds.

If database components are invalidated, you can delete the DB instance and re-create it from the DB
snapshot. The restored DB instance includes any dump files staged on the DB instance when you took
the DB snapshot.
• Don't import dump files that were created using the Oracle Data Pump export parameters
TRANSPORT_TABLESPACES, TRANSPORTABLE, or TRANSPORT_FULL_CHECK. RDS for Oracle DB
instances don't support importing these dump files.
• Don't import dump files that contain Oracle Scheduler objects in SYS, SYSTEM, RDSADMIN, RDSSEC,
and RDS_DATAGUARD, and belong to the following categories:
• Jobs
• Programs
• Schedules
• Chains
• Rules
• Evaluation contexts
• Rule sets

RDS for Oracle DB instances don't support importing these dump files.
• To exclude unsupported Oracle Scheduler objects, use additional directives during the Data Pump
export. If you use DBMS_DATAPUMP, you can add an additional METADATA_FILTER before the
DBMS_METADATA.START_JOB:

DBMS_DATAPUMP.METADATA_FILTER(
v_hdnl,
'EXCLUDE_NAME_EXPR',
q'[IN (SELECT NAME FROM SYS.OBJ$
WHERE TYPE# IN (66,67,74,79,59,62,46)
AND OWNER# IN
(SELECT USER# FROM SYS.USER$
WHERE NAME IN ('RDSADMIN','SYS','SYSTEM','RDS_DATAGUARD','RDSSEC')
)
)
]',
'PROCOBJ'
);

If you use expdp, create a parameter file that contains the exclude directive shown in the following
example. Then use PARFILE=parameter_file with your expdp command.

exclude=procobj:"IN
(SELECT NAME FROM sys.OBJ$
WHERE TYPE# IN (66,67,74,79,59,62,46)
AND OWNER# IN
(SELECT USER# FROM SYS.USER$
WHERE NAME IN ('RDSADMIN','SYS','SYSTEM','RDS_DATAGUARD','RDSSEC')
)
)"

Importing data with Oracle Data Pump and an Amazon S3 bucket

The following import process uses Oracle Data Pump and an Amazon S3 bucket. The steps are as follows:

1. Export data on the source database using the Oracle DBMS_DATAPUMP package.
2. Place the dump file in an Amazon S3 bucket.
3. Download the dump file from the Amazon S3 bucket to the DATA_PUMP_DIR directory on the target
RDS for Oracle DB instance.
4. Import the data from the copied dump file into the RDS for Oracle DB instance using the package
DBMS_DATAPUMP.

Topics
• Requirements for Importing data with Oracle Data Pump and an Amazon S3 bucket (p. 1950)
• Step 1: Grant privileges to the database user on the RDS for Oracle target DB instance (p. 1951)
• Step 2: Export data into a dump file using DBMS_DATAPUMP (p. 1951)
• Step 3: Upload the dump file to your Amazon S3 bucket (p. 1952)
• Step 4: Download the dump file from your Amazon S3 bucket to your target DB instance (p. 1953)
• Step 5: Import your dump file into your target DB instance using DBMS_DATAPUMP (p. 1953)
• Step 6: Clean up (p. 1954)

Requirements for Importing data with Oracle Data Pump and an Amazon S3
bucket
The process has the following requirements:

• Make sure that an Amazon S3 bucket is available for file transfers, and that the Amazon S3 bucket is in
the same AWS Region as the DB instance. For instructions, see Create a bucket in the Amazon Simple
Storage Service Getting Started Guide.
• The object that you upload into the Amazon S3 bucket must be 5 TB or less. For more information
about working with objects in Amazon S3, see Amazon Simple Storage Service User Guide.
Note
If your dump file exceeds 5 TB, you can run the Oracle Data Pump export with the parallel
option. This operation spreads the data into multiple dump files so that you do not exceed the
5 TB limit for individual files.


• You must prepare the Amazon S3 bucket for Amazon RDS integration by following the instructions in
Configuring IAM permissions for RDS for Oracle integration with Amazon S3 (p. 1992).
• You must ensure that you have enough storage space to store the dump file on the source instance
and the target DB instance.

Note
This process imports a dump file into the DATA_PUMP_DIR directory, a preconfigured directory
on all Oracle DB instances. This directory is located on the same storage volume as your data
files. When you import the dump file, the existing Oracle data files use more space. Thus, you
should make sure that your DB instance can accommodate that additional use of space. The
imported dump file is not automatically deleted or purged from the DATA_PUMP_DIR directory.
To remove the imported dump file, use UTL_FILE.FREMOVE, found on the Oracle website.

Step 1: Grant privileges to the database user on the RDS for Oracle target DB
instance
In this step, you create the schemas into which you plan to import data and grant the users necessary
privileges.

To create users and grant necessary privileges on the RDS for Oracle target instance

1. Use SQL*Plus or Oracle SQL Developer to log in as the master user to the RDS for Oracle DB instance
into which the data will be imported. For information about connecting to a DB instance, see
Connecting to your RDS for Oracle DB instance (p. 1806).
2. Create the required tablespaces before you import the data. For more information, see Creating and
sizing tablespaces (p. 1870).
3. Create the user account and grant the necessary permissions and roles if the user account into which
the data is imported doesn't exist. If you plan to import data into multiple user schemas, create each
user account and grant the necessary privileges and roles to it.

For example, the following SQL statements create a new user and grant the necessary permissions
and roles to import the data into the schema owned by this user. Replace schema_1 with the name
of your schema in this step and in the following steps.

CREATE USER schema_1 IDENTIFIED BY my_password;
GRANT CREATE SESSION, RESOURCE TO schema_1;
ALTER USER schema_1 QUOTA 100M ON users;

Note
Specify a password other than the prompt shown here as a security best practice.

The preceding statements grant the new user the CREATE SESSION privilege and the RESOURCE
role. You might need additional privileges and roles depending on the database objects that you
import.

Step 2: Export data into a dump file using DBMS_DATAPUMP


To create a dump file, use the DBMS_DATAPUMP package.

To export Oracle data into a dump file

1. Use SQL Plus or Oracle SQL Developer to connect to the source RDS for Oracle DB instance with
an administrative user. If the source database is an RDS for Oracle DB instance, connect with the
Amazon RDS master user.
2. Export the data by calling DBMS_DATAPUMP procedures.


The following script exports the SCHEMA_1 schema into a dump file named sample.dmp in the
DATA_PUMP_DIR directory. Replace SCHEMA_1 with the name of the schema that you want to
export.

DECLARE
v_hdnl NUMBER;
BEGIN
v_hdnl := DBMS_DATAPUMP.OPEN(
operation => 'EXPORT',
job_mode => 'SCHEMA',
job_name => null
);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl ,
filename => 'sample.dmp' ,
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_dump_file
);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'sample_exp.log',
directory => 'DATA_PUMP_DIR' ,
filetype => dbms_datapump.ku$_file_type_log_file
);
DBMS_DATAPUMP.METADATA_FILTER(v_hdnl,'SCHEMA_EXPR','IN (''SCHEMA_1'')');
DBMS_DATAPUMP.METADATA_FILTER(
v_hdnl,
'EXCLUDE_NAME_EXPR',
q'[IN (SELECT NAME FROM SYS.OBJ$
WHERE TYPE# IN (66,67,74,79,59,62,46)
AND OWNER# IN
(SELECT USER# FROM SYS.USER$
WHERE NAME IN ('RDSADMIN','SYS','SYSTEM','RDS_DATAGUARD','RDSSEC')
)
)
]',
'PROCOBJ'
);
DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/

Note
Data Pump starts jobs asynchronously. For information about monitoring a Data Pump job,
see Monitoring job status in the Oracle documentation.
3. (Optional) View the contents of the export log by calling the
rdsadmin.rds_file_util.read_text_file procedure. For more information, see Reading files
in a DB instance directory (p. 1927).

Step 3: Upload the dump file to your Amazon S3 bucket


Use the Amazon RDS procedure rdsadmin.rdsadmin_s3_tasks.upload_to_s3 to copy the dump
file to the Amazon S3 bucket. The following example uploads all of the files from the DATA_PUMP_DIR
directory to an Amazon S3 bucket named myS3bucket.

SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
p_bucket_name => 'myS3bucket',
p_directory_name => 'DATA_PUMP_DIR')
AS TASK_ID FROM DUAL;


The SELECT statement returns the ID of the task in a VARCHAR2 data type. For more information, see
Uploading files from your RDS for Oracle DB instance to an Amazon S3 bucket (p. 2002).
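
The upload runs as an asynchronous task. To check its progress, you can read the task's log file in
the BDUMP directory, substituting the task ID returned by the SELECT statement for task_id in the
file name. The dbtask-task_id.log naming pattern follows the convention used by the RDS for Oracle
S3 transfer tasks.

SELECT text
FROM   TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP', 'dbtask-task_id.log'));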

Step 4: Download the dump file from your Amazon S3 bucket to your target DB
instance
Perform this step using the Amazon RDS procedure
rdsadmin.rdsadmin_s3_tasks.download_from_s3. When you download a file to a directory, the
procedure download_from_s3 skips the download if an identically named file already exists in the
directory. To remove a file from the download directory, use UTL_FILE.FREMOVE, found on the Oracle
website.

To download your dump file

1. Start SQL*Plus or Oracle SQL Developer and log in as the master user on your target RDS for Oracle
DB instance.
2. Download the dump file using the Amazon RDS procedure
rdsadmin.rdsadmin_s3_tasks.download_from_s3.

The following example downloads all files from an Amazon S3 bucket named myS3bucket to the
directory DATA_PUMP_DIR.

SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
p_bucket_name => 'myS3bucket',
p_directory_name => 'DATA_PUMP_DIR')
AS TASK_ID FROM DUAL;

The SELECT statement returns the ID of the task in a VARCHAR2 data type. For more information,
see Downloading files from an Amazon S3 bucket to an Oracle DB instance (p. 2004).

Step 5: Import your dump file into your target DB instance using
DBMS_DATAPUMP
Use DBMS_DATAPUMP to import the schema into your RDS for Oracle DB instance. Additional options
such as METADATA_REMAP might be required.

To import data into your target DB instance

1. Start SQL*Plus or SQL Developer and log in as the master user to your RDS for Oracle DB instance.
2. Import the data by calling DBMS_DATAPUMP procedures.

The following example imports the SCHEMA_1 data from sample_copied.dmp into your target DB
instance.

DECLARE
v_hdnl NUMBER;
BEGIN
v_hdnl := DBMS_DATAPUMP.OPEN(
operation => 'IMPORT',
job_mode => 'SCHEMA',
job_name => null);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'sample_copied.dmp',
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_dump_file);
DBMS_DATAPUMP.ADD_FILE(

handle => v_hdnl,
filename => 'sample_imp.log',
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_log_file);
DBMS_DATAPUMP.METADATA_FILTER(v_hdnl,'SCHEMA_EXPR','IN (''SCHEMA_1'')');
DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/

Note
Data Pump jobs are started asynchronously. For information about monitoring a Data Pump
job, see Monitoring job status in the Oracle documentation. You can view the contents of
the import log by using the rdsadmin.rds_file_util.read_text_file procedure. For
more information, see Reading files in a DB instance directory (p. 1927).
3. Verify the data import by listing the schema tables on your target DB instance.

For example, the following query returns the number of tables for SCHEMA_1.

SELECT COUNT(*) FROM DBA_TABLES WHERE OWNER='SCHEMA_1';

Step 6: Clean up
After the data has been imported, you can delete the files that you don't want to keep.

To remove unneeded files

1. Start SQL*Plus or SQL Developer and log in as the master user to your RDS for Oracle DB instance.
2. List the files in DATA_PUMP_DIR using the following command.

SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR')) ORDER BY MTIME;

3. To delete files in DATA_PUMP_DIR that you no longer require, use the following command.

EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','filename');

For example, the following command deletes the file named sample_copied.dmp.

EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','sample_copied.dmp');

Importing data with Oracle Data Pump and a database link


The following import process uses Oracle Data Pump and the Oracle DBMS_FILE_TRANSFER package.
The steps are as follows:

1. Connect to a source Oracle database, which can be an on-premises database, Amazon EC2 instance, or
an RDS for Oracle DB instance.
2. Export data using the DBMS_DATAPUMP package.
3. Use DBMS_FILE_TRANSFER.PUT_FILE to copy the dump file from the Oracle database to the
DATA_PUMP_DIR directory on the target RDS for Oracle DB instance that is connected using a
database link.
4. Import the data from the copied dump file into the RDS for Oracle DB instance using the
DBMS_DATAPUMP package.


The import process using Oracle Data Pump and the DBMS_FILE_TRANSFER package has the following
steps.

Topics
• Requirements for importing data with Oracle Data Pump and a database link (p. 1955)
• Step 1: Grant privileges to the user on the RDS for Oracle target DB instance (p. 1955)
• Step 2: Grant privileges to the user on the source database (p. 1956)
• Step 3: Create a dump file using DBMS_DATAPUMP (p. 1956)
• Step 4: Create a database link to the target DB instance (p. 1957)
• Step 5: Copy the exported dump file to the target DB instance using
DBMS_FILE_TRANSFER (p. 1957)
• Step 6: Import the data file to the target DB instance using DBMS_DATAPUMP (p. 1958)
• Step 7: Clean up (p. 1958)

Requirements for importing data with Oracle Data Pump and a database link
The process has the following requirements:

• You must have execute privileges on the DBMS_FILE_TRANSFER and DBMS_DATAPUMP packages.
• You must have write privileges to the DATA_PUMP_DIR directory on the source DB instance.
• You must ensure that you have enough storage space to store the dump file on the source instance
and the target DB instance.

Note
This process imports a dump file into the DATA_PUMP_DIR directory, a preconfigured directory
on all Oracle DB instances. This directory is located on the same storage volume as your data
files. When you import the dump file, the existing Oracle data files use more space. Thus, you
should make sure that your DB instance can accommodate that additional use of space. The
imported dump file is not automatically deleted or purged from the DATA_PUMP_DIR directory.
To remove the imported dump file, use UTL_FILE.FREMOVE, found on the Oracle website.

Step 1: Grant privileges to the user on the RDS for Oracle target DB instance
To grant privileges to the user on the RDS for Oracle target DB instance, take the following steps:

1. Use SQL Plus or Oracle SQL Developer to connect to the RDS for Oracle DB instance into which you
intend to import the data. Connect as the Amazon RDS master user. For information about connecting
to the DB instance, see Connecting to your RDS for Oracle DB instance (p. 1806).
2. Create the required tablespaces before you import the data. For more information, see Creating and
sizing tablespaces (p. 1870).
3. If the user account into which the data is imported doesn't exist, create the user account and grant the
necessary permissions and roles. If you plan to import data into multiple user schemas, create each
user account and grant the necessary privileges and roles to it.

For example, the following commands create a new user named schema_1 and grant the necessary
permissions and roles to import the data into the schema for this user.

CREATE USER schema_1 IDENTIFIED BY my_password;
GRANT CREATE SESSION, RESOURCE TO schema_1;
ALTER USER schema_1 QUOTA 100M ON users;

Note
Specify a password other than the prompt shown here as a security best practice.


The preceding example grants the new user the CREATE SESSION privilege and the RESOURCE role.
Additional privileges and roles might be required depending on the database objects that you import.
Note
Replace schema_1 with the name of your schema in this step and in the following steps.

Step 2: Grant privileges to the user on the source database


Use SQL*Plus or Oracle SQL Developer to connect to the RDS for Oracle DB instance that contains the
data to be imported. If necessary, create a user account and grant the necessary permissions.
Note
If the source database is an Amazon RDS instance, you can skip this step. You use your Amazon
RDS master user account to perform the export.

The following commands create a new user and grant the necessary permissions.

CREATE USER export_user IDENTIFIED BY my-password;


GRANT CREATE SESSION, CREATE TABLE, CREATE DATABASE LINK TO export_user;
ALTER USER export_user QUOTA 100M ON users;
GRANT READ, WRITE ON DIRECTORY data_pump_dir TO export_user;
GRANT SELECT_CATALOG_ROLE TO export_user;
GRANT EXECUTE ON DBMS_DATAPUMP TO export_user;
GRANT EXECUTE ON DBMS_FILE_TRANSFER TO export_user;

Note
Specify a password other than the prompt shown here as a security best practice.

Step 3: Create a dump file using DBMS_DATAPUMP


To create a dump file, do the following:

1. Use SQL*Plus or Oracle SQL Developer to connect to the source Oracle instance with an administrative
user or with the user you created in step 2. If the source database is an Amazon RDS for Oracle DB
instance, connect with the Amazon RDS master user.
2. Create a dump file using the Oracle Data Pump utility.

The following script creates a dump file named sample.dmp in the DATA_PUMP_DIR directory.

DECLARE
v_hdnl NUMBER;
BEGIN
v_hdnl := DBMS_DATAPUMP.OPEN(
operation => 'EXPORT' ,
job_mode => 'SCHEMA' ,
job_name => null
);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'sample.dmp' ,
directory => 'DATA_PUMP_DIR' ,
filetype => dbms_datapump.ku$_file_type_dump_file
);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl ,
filename => 'sample_exp.log' ,
directory => 'DATA_PUMP_DIR' ,
filetype => dbms_datapump.ku$_file_type_log_file
);


DBMS_DATAPUMP.METADATA_FILTER(
v_hdnl ,
'SCHEMA_EXPR' ,
'IN (''SCHEMA_1'')'
);
DBMS_DATAPUMP.METADATA_FILTER(
v_hdnl,
'EXCLUDE_NAME_EXPR',
q'[IN (SELECT NAME FROM sys.OBJ$
WHERE TYPE# IN (66,67,74,79,59,62,46)
AND OWNER# IN
(SELECT USER# FROM SYS.USER$
WHERE NAME IN ('RDSADMIN','SYS','SYSTEM','RDS_DATAGUARD','RDSSEC')
)
)
]',
'PROCOBJ'
);
DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/

Note
Data Pump jobs are started asynchronously. For information about monitoring a Data Pump
job, see Monitoring job status in the Oracle documentation. You can view the contents of the
export log by using the rdsadmin.rds_file_util.read_text_file procedure. For more
information, see Reading files in a DB instance directory (p. 1927).
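
For example, if your source database is an RDS for Oracle DB instance, a query like the following displays the export log. This is a sketch only; it assumes the log file name sample_exp.log specified in the preceding script.

SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file('DATA_PUMP_DIR','sample_exp.log'));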

Step 4: Create a database link to the target DB instance


Create a database link between your source DB instance and your target DB instance. Your local Oracle
instance must have network connectivity to the DB instance in order to create a database link and to
transfer your export dump file.

Perform this step connected with the same user account as the previous step.

If you are creating a database link between two DB instances inside the same VPC or peered VPCs, the
two DB instances should have a valid route between them. The security group of each DB instance must
allow ingress to and egress from the other DB instance. The security group inbound and outbound rules
can refer to security groups from the same VPC or a peered VPC. For more information, see Adjusting
database links for use with DB instances in a VPC (p. 1879).

The following command creates a database link named to_rds that connects to the Amazon RDS
master user at the target DB instance.

CREATE DATABASE LINK to_rds
CONNECT TO <master_user_account> IDENTIFIED BY <password>
USING '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<dns or ip address of remote db>)
(PORT=<listener port>))(CONNECT_DATA=(SID=<remote SID>)))';

Step 5: Copy the exported dump file to the target DB instance using
DBMS_FILE_TRANSFER
Use DBMS_FILE_TRANSFER to copy the dump file from the source database instance to the target DB
instance. The following script copies a dump file named sample.dmp from the source instance to a target
database link named to_rds (created in the previous step).

BEGIN
DBMS_FILE_TRANSFER.PUT_FILE(
source_directory_object => 'DATA_PUMP_DIR',
source_file_name => 'sample.dmp',
destination_directory_object => 'DATA_PUMP_DIR',
destination_file_name => 'sample_copied.dmp',
destination_database => 'to_rds' );
END;
/

Step 6: Import the data file to the target DB instance using DBMS_DATAPUMP
Use Oracle Data Pump to import the schema in the DB instance. Additional options such as
METADATA_REMAP might be required; a sample remap call follows the import script below.

Connect to the DB instance with the Amazon RDS master user account to perform the import.

DECLARE
v_hdnl NUMBER;
BEGIN
v_hdnl := DBMS_DATAPUMP.OPEN(
operation => 'IMPORT',
job_mode => 'SCHEMA',
job_name => null);
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'sample_copied.dmp',
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_dump_file );
DBMS_DATAPUMP.ADD_FILE(
handle => v_hdnl,
filename => 'sample_imp.log',
directory => 'DATA_PUMP_DIR',
filetype => dbms_datapump.ku$_file_type_log_file);
DBMS_DATAPUMP.METADATA_FILTER(v_hdnl,'SCHEMA_EXPR','IN (''SCHEMA_1'')');
DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/
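
If the target schema name differs from the source schema name, you can add a remap call to the preceding block before DBMS_DATAPUMP.START_JOB. The following fragment is a sketch only; SCHEMA_2 is a hypothetical target schema name.

-- Hypothetical example: import objects owned by SCHEMA_1 into SCHEMA_2.
DBMS_DATAPUMP.METADATA_REMAP(
handle => v_hdnl,
name => 'REMAP_SCHEMA',
old_value => 'SCHEMA_1',
value => 'SCHEMA_2');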

Note
Data Pump jobs are started asynchronously. For information about monitoring a Data Pump
job, see Monitoring job status in the Oracle documentation. You can view the contents of the
import log by using the rdsadmin.rds_file_util.read_text_file procedure. For more
information, see Reading files in a DB instance directory (p. 1927).

You can verify the data import by viewing the user's tables on the DB instance. For example, the
following query returns the number of tables for schema_1.

SELECT COUNT(*) FROM DBA_TABLES WHERE OWNER='SCHEMA_1';

Step 7: Clean up
After the data has been imported, you can delete the files that you don't want to keep. You can list the
files in DATA_PUMP_DIR using the following command.

SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR')) ORDER BY MTIME;

To delete files in DATA_PUMP_DIR that you no longer require, use the following command.

EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','<file name>');


For example, the following command deletes the file named "sample_copied.dmp".

EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','sample_copied.dmp');

Importing using Oracle Export/Import


You might consider Oracle Export/Import utilities for migrations in the following conditions:

• Your data size is small.


• Data types such as binary float and double aren't required.

The import process creates the necessary schema objects. Thus, you don't need to run a script to create
the objects beforehand.

The easiest way to install the Oracle export and import utilities is to install the Oracle Instant Client.
To download the software, go to https://www.oracle.com/database/technologies/instant-client.html.
For documentation, see Instant Client for SQL*Loader, Export, and Import in the Oracle Database Utilities
manual.

To export tables and then import them

1. Export the tables from the source database using the exp command.

The following command exports the tables named tab1, tab2, and tab3. The dump file is
exp_file.dmp.

exp cust_dba@ORCL FILE=exp_file.dmp TABLES=(tab1,tab2,tab3) LOG=exp_file.log

The export creates a binary dump file that contains both the schema and data for the specified
tables.
2. Import the schema and data into a target database using the imp command.

The following command imports the tables tab1, tab2, and tab3 from dump file exp_file.dmp.

imp cust_dba@targetdb FROMUSER=cust_schema TOUSER=cust_schema \
TABLES=(tab1,tab2,tab3) FILE=exp_file.dmp LOG=imp_file.log

Export and Import have other variations that might be better suited to your requirements. See the
Oracle Database documentation for full details.

Importing using Oracle SQL*Loader


You might consider Oracle SQL*Loader for large databases that contain a limited number of objects.
Because the process of exporting from a source database and loading to a target database is specific to
the schema, the following example creates the sample schema objects, exports from a source, and then
loads the data into a target database.

The easiest way to install Oracle SQL*Loader is to install the Oracle Instant Client. To download the
software, go to https://www.oracle.com/database/technologies/instant-client.html. For documentation,
see Instant Client for SQL*Loader, Export, and Import in the Oracle Database Utilities manual.

To import data using Oracle SQL*Loader

1. Create a sample source table using the following SQL statement.


CREATE TABLE customer_0 TABLESPACE users
AS (SELECT ROWNUM id, o.*
FROM ALL_OBJECTS o, ALL_OBJECTS x
WHERE ROWNUM <= 1000000);

2. On the target RDS for Oracle DB instance, create a destination table for loading the data. The clause
WHERE 1=2 ensures that you copy the structure of ALL_OBJECTS, but don't copy any rows.

CREATE TABLE customer_1 TABLESPACE users
AS (SELECT 0 AS ID, OWNER, OBJECT_NAME, CREATED
FROM ALL_OBJECTS
WHERE 1=2);

3. Export the data from the source database to a text file. The following example uses SQL*Plus. For
your data, you will likely need to generate a script that does the export for all the objects in the
database.

ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY/MM/DD HH24:MI:SS';

SET LINESIZE 800 HEADING OFF FEEDBACK OFF ARRAY 5000 PAGESIZE 0
SPOOL customer_0.out
SET MARKUP HTML PREFORMAT ON
SET COLSEP ','

SELECT id, owner, object_name, created
FROM customer_0;

SPOOL OFF

4. Create a control file to describe the data. You might need to write a script to perform this step.

cat << EOF > sqlldr_1.ctl
load data
infile customer_0.out
into table customer_1
APPEND
fields terminated by "," optionally enclosed by '"'
(
id POSITION(01:10) INTEGER EXTERNAL,
owner POSITION(12:41) CHAR,
object_name POSITION(43:72) CHAR,
created POSITION(74:92) date "YYYY/MM/DD HH24:MI:SS"
)
EOF

If needed, copy the files generated by the preceding code to a staging area, such as an Amazon EC2
instance.
5. Import the data using SQL*Loader with the appropriate user name and password for the target
database.

sqlldr cust_dba@targetdb CONTROL=sqlldr_1.ctl BINDSIZE=10485760 READSIZE=10485760 ROWS=1000

Migrating with Oracle materialized views


To migrate large datasets efficiently, you can use Oracle materialized view replication. With replication,
you can keep the target tables synchronized with the source tables. Thus, you can switch over to Amazon
RDS later, if needed.


Before you can migrate using materialized views, make sure that you meet the following requirements:

• Configure access from the target database to the source database. In the following example, access
rules were enabled on the source database to allow the RDS for Oracle target database to connect to
the source over SQL*Net.
• Create a database link from the RDS for Oracle DB instance to the source database.

To migrate data using materialized views

1. Create a user account on both source and RDS for Oracle target instances that can authenticate with
the same password. The following example creates a user named dblink_user.

CREATE USER dblink_user IDENTIFIED BY my-password
DEFAULT TABLESPACE users
TEMPORARY TABLESPACE temp;

GRANT CREATE SESSION TO dblink_user;

GRANT SELECT ANY TABLE TO dblink_user;

GRANT SELECT ANY DICTIONARY TO dblink_user;

Note
Specify a password other than the prompt shown here as a security best practice.
2. Create a database link from the RDS for Oracle target instance to the source instance using your
newly created user.

CREATE DATABASE LINK remote_site
CONNECT TO dblink_user IDENTIFIED BY my-password
USING '(description=(address=(protocol=tcp) (host=my-host)
(port=my-listener-port)) (connect_data=(sid=my-source-db-sid)))';

Note
Specify a password other than the prompt shown here as a security best practice.
3. Test the link:

SELECT * FROM V$INSTANCE@remote_site;

4. Create a sample table with primary key and materialized view log on the source instance.

CREATE TABLE customer_0 TABLESPACE users
AS (SELECT ROWNUM id, o.*
FROM ALL_OBJECTS o, ALL_OBJECTS x
WHERE ROWNUM <= 1000000);

ALTER TABLE customer_0 ADD CONSTRAINT pk_customer_0 PRIMARY KEY (id) USING INDEX;

CREATE MATERIALIZED VIEW LOG ON customer_0;

5. On the target RDS for Oracle DB instance, create a materialized view.

CREATE MATERIALIZED VIEW customer_0
BUILD IMMEDIATE REFRESH FAST
AS (SELECT *
FROM cust_dba.customer_0@remote_site);

6. On the target RDS for Oracle DB instance, refresh the materialized view.


EXEC DBMS_MVIEW.REFRESH('CUSTOMER_0', 'f');

7. Drop the materialized view and include the PRESERVE TABLE clause to retain the materialized view
container table and its contents.

DROP MATERIALIZED VIEW customer_0 PRESERVE TABLE;

The retained table has the same name as the dropped materialized view.

Migrating using Oracle transportable tablespaces


You can use the Oracle transportable tablespaces feature to copy a set of tablespaces from an
on-premises Oracle database to an RDS for Oracle DB instance. At the physical level, this feature
incrementally copies source data files and metadata files to your target instance. You can transfer the
files using either Amazon EFS or Amazon S3.

Topics
• Overview of Oracle transportable tablespaces (p. 1962)
• Phase 1: Set up your source host (p. 1964)
• Phase 2: Prepare the full tablespace backup (p. 1965)
• Phase 3: Make and transfer incremental backups (p. 1967)
• Phase 4: Transport the tablespaces (p. 1967)
• Phase 5: Validate the transported tablespaces (p. 1970)
• Phase 6: Clean up leftover files (p. 1970)

Overview of Oracle transportable tablespaces


A transportable tablespace set consists of data files for the set of tablespaces being transported and an
export dump file containing tablespace metadata. In a physical migration solution such as transportable
tablespaces, you transfer physical files: data files, configuration files, and Data Pump dump files.

Topics
• Advantages and disadvantages of transportable tablespaces (p. 1962)
• Limitations for transportable tablespaces (p. 1963)
• Prerequisites for transportable tablespaces (p. 1963)

Advantages and disadvantages of transportable tablespaces


We recommend that you use transportable tablespaces when you need to migrate one or more large
tablespaces to RDS with minimum downtime. Transportable tablespaces offer the following advantages
over logical migration:

• Downtime is lower than most other Oracle migration solutions.


• Because the transportable tablespace feature copies only physical files, it avoids the data integrity
errors and logical corruption that can occur in logical migration.
• No additional license is required.
• You can migrate a set of tablespaces across different platforms and endianness types, for example,
from an Oracle Solaris platform to Linux. However, transporting tablespaces to and from Windows
servers isn't supported.


Note
Linux is fully tested and supported. Not all UNIX variations have been tested.

If you use transportable tablespaces, you can transport data using either Amazon S3 or Amazon EFS:

• When you use S3, you download RMAN backups to EBS storage attached to your DB instance. The
files remain in your EBS storage during the import. After the import, you can free up this space, which
remains allocated to your DB instance.
• When you use EFS, your backups remain in the EFS file system for the duration of the import. You
can remove the files afterward. In this technique, you don't need to provision EBS storage for your DB
instance. For this reason, we recommend using Amazon EFS instead of S3. For more information, see
Amazon EFS integration (p. 2020).

The primary disadvantage of transportable tablespaces is that you need relatively advanced knowledge
of Oracle Database. For more information, see Transporting Tablespaces Between Databases in the
Oracle Database Administrator’s Guide.

Limitations for transportable tablespaces


Oracle Database limitations for transportable tablespaces apply when you use this feature in RDS for
Oracle. For more information, see Limitations on Transportable Tablespaces and General Limitations on
Transporting Data in the Oracle Database Administrator’s Guide. Note the following additional limitations
for transportable tablespaces in RDS for Oracle:

• Neither the source nor the target database can use Standard Edition 2 (SE2). Only Enterprise Edition is
supported.
• You can't migrate data from an RDS for Oracle DB instance using transportable tablespaces. You can
only use transportable tablespaces to migrate data to an RDS for Oracle DB instance.
• The Windows operating system isn't supported.
• You can't transport tablespaces into a database at a lower release level. The target database must be
at the same or later release level as the source database. For example, you can’t transport tablespaces
from Oracle Database 21c into Oracle Database 19c.
• You can't transport administrative tablespaces such as SYSTEM and SYSAUX.
• You can't transport tablespaces that are encrypted or use encrypted columns.
• If you transfer files using Amazon S3, the maximum supported file size is 5 TiB.
• If the source database uses Oracle options such as Spatial, you can't transport tablespaces unless the
same options are configured on the target database.
• You can't transport tablespaces into an RDS for Oracle DB instance in an Oracle replica configuration.
As a workaround, you can delete all replicas, transport the tablespaces, and then recreate the replicas.

Prerequisites for transportable tablespaces


Before you begin, complete the following tasks:

• Review the requirements for transportable tablespaces described in the following documents in My
Oracle Support:
• Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID
2471245.1)
• Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference, and Version Where
Applicable (Doc ID 1454872.1)
• Primary Note for Transportable Tablespaces (TTS) -- Common Questions and Issues (Doc ID
1166564.1)


• Make sure that the transportable tablespace feature is enabled on your target DB instance. The feature
is enabled only if you don't get an ORA-20304 error when you run the following query:

SELECT * FROM TABLE(rdsadmin.rdsadmin_transport_util.list_xtts_orphan_files);

If the transportable tablespace feature isn't enabled, reboot your DB instance. For more information,
see Rebooting a DB instance (p. 436).
• If you plan to transfer files using Amazon S3, do the following:
• Make sure that an Amazon S3 bucket is available for file transfers, and that the Amazon S3 bucket
is in the same AWS Region as your DB instance. For instructions, see Create a bucket in the Amazon
Simple Storage Service Getting Started Guide.
• Prepare the Amazon S3 bucket for Amazon RDS integration by following the instructions in
Configuring IAM permissions for RDS for Oracle integration with Amazon S3 (p. 1992).
• If you plan to transfer files using Amazon EFS, make sure that you have configured EFS according to
the instructions in Amazon EFS integration (p. 2020).
• We strongly recommend that you turn on automatic backups in your target DB instance. Because
the metadata import step (p. 1969) can potentially fail, it's important to be able to restore your DB
instance to its state before the import, thereby avoiding the necessity to back up, transfer, and import
your tablespaces again.

Phase 1: Set up your source host


In this step, you copy the transportable tablespace scripts provided by My Oracle Support and set up
necessary configuration files. In the following steps, the source host is running the database that contains
the tablespaces to be transported to your target instance.

To set up your source host

1. Log in to your source host as the owner of your Oracle home.


2. Make sure that your ORACLE_HOME and ORACLE_SID environment variables point to your source
database.
3. Log in to your database as an administrator, and verify that the time zone version, DB character set,
and national character set are the same as in your target database.

SELECT * FROM V$TIMEZONE_FILE;


SELECT * FROM NLS_DATABASE_PARAMETERS
WHERE PARAMETER IN ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

4. Set up the transportable tablespace utility as described in Oracle Support note 2471245.1.

The setup includes editing the xtt.properties file on your source host. The following sample
xtt.properties file specifies backups of three tablespaces in the /dsk1/backups directory.
These are the tablespaces that you intend to transport to your target DB instance.

#linux system
platformid=13
#list of tablespaces to transport
tablespaces=TBS1,TBS2,TBS3
#location where backup will be generated
src_scratch_location=/dsk1/backups
#RMAN command for performing backup
usermantransport=1


Phase 2: Prepare the full tablespace backup


In this phase, you back up your tablespaces for the first time, transfer the
backups to your target host, and then restore them using the procedure
rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces. When this phase is complete,
the initial tablespace backups reside on your target DB instance and can be updated with incremental
backups.

Topics
• Step 1: Back up the tablespaces on your source host (p. 1965)
• Step 2: Transfer the backup files to your target DB instance (p. 1965)
• Step 3: Import the tablespaces on your target DB instance (p. 1966)

Step 1: Back up the tablespaces on your source host


In this step, you use the xttdriver.pl script to make a full backup of your tablespaces. The output of
xttdriver.pl is stored in the directory specified by the TMPDIR environment variable.

To back up your tablespaces

1. If your tablespaces are in read-only mode, log in to your source database as a user with the ALTER
TABLESPACE privilege, and place your tablespaces in read/write mode. Otherwise, skip to the next
step.

The following example places tbs1, tbs2, and tbs3 in read/write mode.

ALTER TABLESPACE tbs1 READ WRITE;


ALTER TABLESPACE tbs2 READ WRITE;
ALTER TABLESPACE tbs3 READ WRITE;

2. Back up your tablespaces using the xttdriver.pl script. Optionally, you can specify --debug to
run the script in debug mode.

export TMPDIR=location_of_log_files
cd location_of_xttdriver.pl
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

Step 2: Transfer the backup files to your target DB instance


In this step, you copy the backup and configuration files from your scratch location to your target DB
instance. Choose one of the following options:

• If the source and target hosts share an Amazon EFS file system, use an operating system utility such
as cp to copy your backup files and the res.txt file from your scratch location to a shared directory.
Then skip to Step 3: Import the tablespaces on your target DB instance (p. 1966).
• If you need to stage your backups to an Amazon S3 bucket, complete the following steps.


Step 2.2: Upload the backups to your Amazon S3 bucket


Upload your backups and the res.txt file from your scratch directory to your Amazon S3 bucket. For
more information, see Uploading objects in the Amazon Simple Storage Service User Guide.
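
For example, the following AWS CLI command is a sketch that copies the contents of the scratch directory, including res.txt, to a bucket. The scratch location /dsk1/backups comes from the earlier xtt.properties example, and mys3bucket matches the bucket name used in the download step; substitute your own values.

aws s3 cp /dsk1/backups/ s3://mys3bucket/ --recursive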

Step 2.3: Download the backups from your Amazon S3 bucket to your target DB instance
In this step, you use the procedure rdsadmin.rdsadmin_s3_tasks.download_from_s3 to download
your backups to your RDS for Oracle DB instance.

To download your backups from your Amazon S3 bucket

1. Start SQL*Plus or Oracle SQL Developer and log in to your RDS for Oracle DB instance.
2. Download the backups from the Amazon S3 bucket to your target DB instance by using the
Amazon RDS procedure rdsadmin.rdsadmin_s3_tasks.download_from_s3. The
following example downloads all of the files from an Amazon S3 bucket named mys3bucket to the
DATA_PUMP_DIR directory.

EXEC UTL_FILE.FREMOVE ('DATA_PUMP_DIR', 'res.txt');


SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
p_bucket_name => 'mys3bucket',
p_directory_name => 'DATA_PUMP_DIR')
AS TASK_ID FROM DUAL;

The SELECT statement returns the ID of the task in a VARCHAR2 data type. For more information,
see Downloading files from an Amazon S3 bucket to an Oracle DB instance (p. 2004).
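
To confirm that the transfer finished, you can read the task's log file in the BDUMP directory, as in the following sketch. Replace task_id in the file name with the value returned by the query; the name follows the dbtask-task_id.log pattern used elsewhere in this section.

SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-task_id.log'));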

Step 3: Import the tablespaces on your target DB instance


Use the procedure rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces to restore
your tablespaces to your target DB instance. This procedure automatically converts the data files to the
correct endian format.

Import the tablespaces on your target DB instance

1. Start an Oracle SQL client and log in to your target RDS for Oracle DB instance as the master user.
2. Run the procedure rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces,
specifying the tablespaces to import and the directory containing the backups.

The following example imports the tablespaces TBS1, TBS2, and TBS3 from the directory
DATA_PUMP_DIR.

VAR task_id CLOB
BEGIN
:task_id:=rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces('TBS1,TBS2,TBS3','DATA_PUMP_DIR');
END;
/

PRINT task_id

3. (Optional) Monitor progress by querying the table rdsadmin.rds_xtts_operation_info. The
xtts_operation_state column shows the value EXECUTING, COMPLETED, or FAILED.

SELECT * FROM rdsadmin.rds_xtts_operation_info;

Note
For long-running operations, you can also query V$SESSION_LONGOPS, V$RMAN_STATUS,
and V$RMAN_OUTPUT.
4. View the log of the completed import by using the task ID from the previous step.

SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP',
'dbtask-'||'&task_id'||'.log'));

Make sure that the import succeeded before continuing to the next step.

Phase 3: Make and transfer incremental backups


In this phase, you make and transfer incremental backups periodically while the source database is
active. This technique reduces the size of your final tablespace backup. If you take multiple incremental
backups, you must copy the res.txt file after the last incremental backup before you can apply it on
the target instance.

The steps are the same as in Phase 2: Prepare the full tablespace backup (p. 1965), except that the
import step is optional.
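
For example, each incremental iteration re-runs the same driver script on the source host and then transfers the newly created backup pieces and the updated res.txt file. The following sketch reuses the commands and locations from Phase 2.

export TMPDIR=location_of_log_files
cd location_of_xttdriver.pl
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup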

Phase 4: Transport the tablespaces


In this phase, you back up your read-only tablespaces and export Data Pump metadata, transfer these
files to your target host, and import both the tablespaces and the metadata.

Topics
• Step 1: Back up your read-only tablespaces (p. 1967)
• Step 2: Export tablespace metadata on your source host (p. 1968)
• Step 3: (Amazon S3 only) Transfer the backup and export files to your target DB instance (p. 1968)
• Step 4: Import the tablespaces on your target DB instance (p. 1968)
• Step 5: Import tablespace metadata on your target DB instance (p. 1969)

Step 1: Back up your read-only tablespaces


This step is identical to Step 1: Back up the tablespaces on your source host (p. 1965), with one key
difference: you place your tablespaces in read-only mode before backing up your tablespaces for the last
time.

The following example places tbs1, tbs2, and tbs3 in read-only mode.

ALTER TABLESPACE tbs1 READ ONLY;


ALTER TABLESPACE tbs2 READ ONLY;


ALTER TABLESPACE tbs3 READ ONLY;

Step 2: Export tablespace metadata on your source host


Export your tablespace metadata by running the expdp utility on your source host. The following
example exports tablespaces TBS1, TBS2, and TBS3 to dump file xttdump.dmp in directory
DATA_PUMP_DIR.

expdp username/pwd \
dumpfile=xttdump.dmp \
directory=DATA_PUMP_DIR \
statistics=NONE \
transport_tablespaces=TBS1,TBS2,TBS3 \
transport_full_check=y \
logfile=tts_export.log

If DATA_PUMP_DIR is a shared directory in Amazon EFS, skip to Step 4: Import the tablespaces on your
target DB instance (p. 1968).

Step 3: (Amazon S3 only) Transfer the backup and export files to your target DB
instance
If you are using Amazon S3 to stage your tablespace backups and Data Pump export file, complete the
following steps.

Step 3.1: Upload the backups and dump file from your source host to your Amazon S3 bucket
Upload your backup and dump files from your source host to your Amazon S3 bucket. For more
information, see Uploading objects in the Amazon Simple Storage Service User Guide.

Step 3.2: Download the backups and dump file from your Amazon S3 bucket to your target DB
instance
In this step, you use the procedure rdsadmin.rdsadmin_s3_tasks.download_from_s3 to download
your backups and dump file to your RDS for Oracle DB instance. Follow the steps in Step 2.3: Download
the backups from your Amazon S3 bucket to your target DB instance (p. 1966).

Step 4: Import the tablespaces on your target DB instance


Use the procedure rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces to restore
the tablespaces. For syntax and semantics of this procedure, see Importing transported tablespaces to
your DB instance (p. 1932).
Important
After you complete your final tablespace import, the next step is importing the Oracle Data
Pump metadata (p. 1968). If the import fails, it's important to return your DB instance to its
state before the failure. Thus, we recommend that you create a DB snapshot of your DB instance
by following the instructions in Creating a DB snapshot (p. 613). The snapshot will contain all
imported tablespaces, so if the import fails, you don’t need to repeat the backup and import
process.
If your target DB instance has automatic backups turned on, and Amazon RDS doesn't detect
that a valid snapshot was initiated before you import the metadata, RDS attempts to create a
snapshot. Depending on your instance activity, this snapshot might or might not succeed. If a
valid snapshot isn't detected or a snapshot can't be initiated, the metadata import exits with
errors.

Import the tablespaces on your target DB instance

1. Start an Oracle SQL client and log in to your target RDS for Oracle DB instance as the master user.


2. Run the procedure rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces,
specifying the tablespaces to import and the directory containing the backups.

The following example imports the tablespaces TBS1, TBS2, and TBS3 from the directory
DATA_PUMP_DIR.

VAR task_id CLOB
BEGIN
:task_id:=rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces('TBS1,TBS2,TBS3','DATA_PUMP_DIR');
END;
/
PRINT task_id

3. (Optional) Monitor progress by querying the table rdsadmin.rds_xtts_operation_info. The
xtts_operation_state column shows the value EXECUTING, COMPLETED, or FAILED.

SELECT * FROM rdsadmin.rds_xtts_operation_info;

Note
For long-running operations, you can also query V$SESSION_LONGOPS, V$RMAN_STATUS,
and V$RMAN_OUTPUT.
4. View the log of the completed import by using the task ID from the previous step.

SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP',
'dbtask-'||'&task_id'||'.log'));

Make sure that the import succeeded before continuing to the next step.
5. Take a manual DB snapshot by following the instructions in Creating a DB snapshot (p. 613).

Step 5: Import tablespace metadata on your target DB instance


In this step, you import the transportable tablespace metadata into your RDS for Oracle DB instance
using the procedure rdsadmin.rdsadmin_transport_util.import_xtts_metadata. For
syntax and semantics of this procedure, see Importing transportable tablespace metadata into
your DB instance (p. 1933). During the operation, the status of the import is shown in the table
rdsadmin.rds_xtts_operation_info.
Important
Before you import metadata, we strongly recommend that you confirm that a DB snapshot was
successfully created after you imported your tablespaces. If the import step fails, restore your
DB instance, address the import errors, and then attempt the import again.

Import the Data Pump metadata into your RDS for Oracle DB instance

1. Start your Oracle SQL client and log in to your target DB instance as the master user.
2. Create the users that own schemas in your transported tablespaces, if these users don't already exist.

CREATE USER tbs_owner IDENTIFIED BY password;

3. Import the metadata, specifying the name of the dump file and its directory location.

BEGIN
rdsadmin.rdsadmin_transport_util.import_xtts_metadata('xttdump.dmp','DATA_PUMP_DIR');
END;
/


4. (Optional) Query the transportable tablespace history table to see the status of the metadata
import.

SELECT * FROM rdsadmin.rds_xtts_operation_info;

When the operation completes, your tablespaces are in read-only mode.


5. (Optional) View the log file.

The following example lists the contents of the BDUMP directory and then queries the import log.

SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => 'BDUMP'));

SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file(
p_directory => 'BDUMP',
p_filename => 'rds-xtts-import_xtts_metadata-2023-05-22.01-52-35.560858000.log'));

Phase 5: Validate the transported tablespaces


In this optional step, you validate your transported tablespaces using the procedure
rdsadmin.rdsadmin_rman_util.validate_tablespace, and then place your tablespaces in read/
write mode.

To validate the transported data

1. Start SQL*Plus or SQL Developer and log in to your target DB instance as the master user.
2. Validate the tablespaces using the procedure
rdsadmin.rdsadmin_rman_util.validate_tablespace.

SET SERVEROUTPUT ON
BEGIN
rdsadmin.rdsadmin_rman_util.validate_tablespace(
p_tablespace_name => 'TBS1',
p_validation_type => 'PHYSICAL+LOGICAL',
p_rman_to_dbms_output => TRUE);
rdsadmin.rdsadmin_rman_util.validate_tablespace(
p_tablespace_name => 'TBS2',
p_validation_type => 'PHYSICAL+LOGICAL',
p_rman_to_dbms_output => TRUE);
rdsadmin.rdsadmin_rman_util.validate_tablespace(
p_tablespace_name => 'TBS3',
p_validation_type => 'PHYSICAL+LOGICAL',
p_rman_to_dbms_output => TRUE);
END;
/

3. Place your tablespaces in read/write mode.

ALTER TABLESPACE TBS1 READ WRITE;


ALTER TABLESPACE TBS2 READ WRITE;
ALTER TABLESPACE TBS3 READ WRITE;

Phase 6: Clean up leftover files


In this optional step, you remove any unneeded files. Use the
rdsadmin.rdsadmin_transport_util.list_xtts_orphan_files procedure
to list data files that were orphaned after a tablespace import, and then use the
rdsadmin.rdsadmin_transport_util.cleanup_incomplete_xtts_import procedure to delete them. For
syntax and semantics of these procedures, see Listing orphaned files after a tablespace import (p. 1934)
and Deleting orphaned data files after a tablespace import (p. 1935).

To clean up leftover files

1. Remove old backups in DATA_PUMP_DIR as follows:

a. List the backup files by running rdsadmin.rds_file_util.listdir.

SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => 'DATA_PUMP_DIR'));

b. Remove the backups one by one by calling UTL_FILE.FREMOVE.

EXEC UTL_FILE.FREMOVE ('DATA_PUMP_DIR', 'backup_filename');

2. If you imported tablespaces but didn't import metadata for these tablespaces, you can delete the
orphaned data files as follows:

a. List the orphaned data files that you need to delete. The following example runs the procedure
rdsadmin.rdsadmin_transport_util.list_xtts_orphan_files.

SQL> SELECT * FROM TABLE(rdsadmin.rdsadmin_transport_util.list_xtts_orphan_files);

FILENAME FILESIZE
-------------- ---------
datafile_7.dbf 104865792
datafile_8.dbf 104865792

b. Delete the orphaned files by running the procedure
rdsadmin.rdsadmin_transport_util.cleanup_incomplete_xtts_import.

BEGIN
rdsadmin.rdsadmin_transport_util.cleanup_incomplete_xtts_import('DATA_PUMP_DIR');
END;
/

The cleanup operation generates a log file that uses the name format rds-xtts-
delete_xtts_orphaned_files-YYYY-MM-DD.HH24-MI-SS.FF.log in the BDUMP
directory.
c. Read the log file generated in the previous step. The following example reads log rds-xtts-
delete_xtts_orphaned_files-2023-06-01.09-33-11.868894000.log.

SELECT *
FROM TABLE(rdsadmin.rds_file_util.read_text_file(
p_directory => 'BDUMP',
p_filename => 'rds-xtts-
delete_xtts_orphaned_files-2023-06-01.09-33-11.868894000.log'));

TEXT
--------------------------------------------------------------------------------
orphan transported datafile datafile_7.dbf deleted.
orphan transported datafile datafile_8.dbf deleted.

3. If you imported tablespaces and imported metadata for these tablespaces, but you encountered
compatibility errors or other Oracle Data Pump issues, clean up the partially transported data files
as follows:


a. List the tablespaces that contain partially transported data files by querying
DBA_TABLESPACES.

SQL> SELECT TABLESPACE_NAME FROM DBA_TABLESPACES WHERE PLUGGED_IN='YES';

TABLESPACE_NAME
--------------------------------------------------------------------------------
TBS_3

b. Drop the tablespaces and the partially transported data files.

DROP TABLESPACE TBS_3 INCLUDING CONTENTS AND DATAFILES;


Working with read replicas for Amazon RDS for Oracle
To configure replication between Oracle DB instances, you can create replica databases. For an overview
of Amazon RDS read replicas, see Overview of Amazon RDS read replicas (p. 439). For a summary of the
differences between Oracle replicas and other DB engines, see Differences among read replicas for DB
engines (p. 441).

Topics
• Overview of RDS for Oracle replicas (p. 1973)
• Requirements and considerations for RDS for Oracle replicas (p. 1974)
• Preparing to create an Oracle replica (p. 1977)
• Creating an RDS for Oracle replica in mounted mode (p. 1978)
• Modifying the RDS for Oracle replica mode (p. 1979)
• Working with RDS for Oracle replica backups (p. 1980)
• Performing an Oracle Data Guard switchover (p. 1982)
• Troubleshooting RDS for Oracle replicas (p. 1988)

Overview of RDS for Oracle replicas


An Oracle replica database is a physical copy of your primary database. An Oracle replica in read-only
mode is called a read replica. An Oracle replica in mounted mode is called a mounted replica. Oracle
Database doesn't permit writes in a replica, but you can promote a replica to make it writable. The
promoted read replica contains the replicated data up to the point when the promotion request was made.
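
For example, you can promote a read replica from the AWS CLI with a command like the following; myreadreplica is a placeholder instance identifier.

aws rds promote-read-replica --db-instance-identifier myreadreplica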


For more information, see the blog post Managed disaster recovery with Amazon RDS for Oracle cross-
Region automated backups - Part 1 and Managed disaster recovery with Amazon RDS for Oracle cross-
Region automated backups - Part 2.

Topics
• Read-only and mounted replicas (p. 1973)
• Multitenant read replicas (p. 1974)
• Archived redo log retention (p. 1974)
• Outages during replication (p. 1974)

Read-only and mounted replicas


When creating or modifying an Oracle replica, you can place it in either of the following modes:

Read-only

This is the default. Active Data Guard transmits and applies changes from the source database to all
read replica databases.

You can create up to five read replicas from one source DB instance. For general information about
read replicas that applies to all DB engines, see Working with DB instance read replicas (p. 438). For
information about Oracle Data Guard, see Oracle Data Guard concepts and administration in the
Oracle documentation.


Mounted

In this case, replication uses Oracle Data Guard, but the replica database doesn't accept user
connections. The primary use for mounted replicas is cross-Region disaster recovery.

A mounted replica can't serve a read-only workload. The mounted replica deletes archived redo log
files after it applies them, regardless of the archived log retention policy.

You can create a combination of mounted and read-only DB replicas for the same source DB instance.
You can change a read-only replica to mounted mode, or change a mounted replica to read-only mode.
In either case, the Oracle database preserves the archived log retention setting.

Multitenant read replicas


RDS for Oracle supports Data Guard read replicas for Oracle Database 19c and 21c CDBs. You can
create, manage, and promote read replicas in a CDB, just as you can in a non-CDB. Mounted replicas are
supported for the single-tenant configuration. You get the following benefits:

• Managed disaster recovery, high availability, and read-only access to your replicas
• The ability to create read replicas in a different AWS Region.
• Integration with the existing RDS read replica APIs: CreateDBInstanceReadReplica,
PromoteReadReplica, and SwitchoverReadReplica

To use this feature, you need an Active Data Guard license and an Oracle Database Enterprise Edition
license for both the replica and primary DB instances. There are no additional costs related to using CDB
architecture. You pay only for your DB instances.

Archived redo log retention


If a primary DB instance has no cross-Region read replicas, Amazon RDS for Oracle keeps a minimum
of two hours of archived redo logs on the source DB instance. This is true regardless of the setting for
archivelog retention hours in rdsadmin.rdsadmin_util.set_configuration.

RDS purges logs from the source DB instance after two hours or after the archive log retention hours
setting has passed, whichever is longer. RDS purges logs from the read replica after the archive log
retention hours setting has passed only if they have been successfully applied to the database.

In some cases, a primary DB instance might have one or more cross-Region read replicas. If
so, Amazon RDS for Oracle keeps the transaction logs on the source DB instance until they
have been transmitted and applied to all cross-Region read replicas. For information about
rdsadmin.rdsadmin_util.set_configuration, see Retaining archived redo logs (p. 1893).
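
For example, the following sketch sets the archived redo log retention to 24 hours on the source DB instance. The retention value is an example only; see Retaining archived redo logs (p. 1893) for the full syntax of this procedure.

BEGIN
rdsadmin.rdsadmin_util.set_configuration(
name => 'archivelog retention hours',
value => 24);
END;
/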

Outages during replication


When you create an Oracle replica, no outage occurs for the source DB instance. Amazon RDS takes a
snapshot of the source DB instance. This snapshot becomes the replica. Amazon RDS sets the necessary
parameters and permissions for the source DB and replica without service interruption. Similarly, if you
delete a replica, no outage occurs.

Requirements and considerations for RDS for Oracle


replicas
Before creating an Oracle replica, familiarize yourself with the following requirements and
considerations.


Topics
• Version and licensing requirements for RDS for Oracle replicas (p. 1975)
• Option group considerations for RDS for Oracle replicas (p. 1975)
• Backup and restore considerations for RDS for Oracle replicas (p. 1976)
• Oracle Data Guard requirements and limitations for RDS for Oracle replicas (p. 1976)
• Miscellaneous considerations for RDS for Oracle replicas (p. 1976)

Version and licensing requirements for RDS for Oracle replicas


Before you create an RDS for Oracle replica, consider the following:

• If the replica is in read-only mode, make sure that you have an Active Data Guard license. If you place
the replica in mounted mode, you don't need an Active Data Guard license. Only the Oracle DB engine
supports mounted replicas.
• Oracle replicas are supported for the Oracle Enterprise Edition (EE) engine only.
• Oracle replicas of non-CDBs are supported only for DB instances created using version Oracle Database
12c Release 1 (12.1.0.2.v10) and higher 12c releases, and for non-CDB instances of Oracle Database
19c.
• Oracle replicas of CDBs are supported only for CDB instances created using version Oracle Database
19c and higher.
• Oracle replicas are available for DB instances running only on DB instance classes with two or more
vCPUs. A source DB instance can't use the db.t3.micro or db.t3.small instance classes.
• The Oracle DB engine version of the source DB instance and all of its replicas must be the same.
Amazon RDS upgrades the replicas immediately after upgrading the source DB instance, regardless
of a replica's maintenance window. For major version upgrades of cross-Region replicas, Amazon RDS
automatically does the following:
• Generates an option group for the target version.
• Copies all options and option settings from the original option group to the new option group.
• Associates the upgraded cross-Region replica with the new option group.

For more information about upgrading the DB engine version, see Upgrading the RDS for Oracle DB
engine (p. 2103).

Option group considerations for RDS for Oracle replicas


Before you create an RDS for Oracle replica, consider the following:

• If your Oracle replica is in the same AWS Region as its source DB instance, make sure that it belongs
to the same option group as the source DB instance. Modifications to the source option group or
source option group membership propagate to replicas. These changes are applied to the replicas
immediately after they are applied to the source DB instance, regardless of the replica's maintenance
window.

For more information about option groups, see Working with option groups (p. 331).
• When you create an RDS for Oracle cross-Region replica, Amazon RDS creates a dedicated option
group for it.

You can't remove an RDS for Oracle cross-Region replica from its dedicated option group. No other DB
instances can use the dedicated option group for an RDS for Oracle cross-Region replica.

You can only add or remove the following nonreplicated options from a dedicated option group:


• NATIVE_NETWORK_ENCRYPTION
• OEM
• OEM_AGENT
• SSL

To add other options to an RDS for Oracle cross-Region replica, add them to the source DB instance's
option group. The option is also installed on all of the source DB instance's replicas. For licensed
options, make sure that there are sufficient licenses for the replicas.

When you promote an RDS for Oracle cross-Region replica, the promoted replica behaves the same
as other Oracle DB instances, including the management of its options. You can promote a replica
explicitly or implicitly by deleting its source DB instance.

For more information about option groups, see Working with option groups (p. 331).

Backup and restore considerations for RDS for Oracle replicas


Before you create an RDS for Oracle replica, consider the following:

• To create snapshots of RDS for Oracle replicas or turn on automatic backups, make sure to set the
backup retention period manually. Automatic backups aren't turned on by default.
• When you restore a replica backup, you restore to the database time, not the time that the backup was
taken. The database time refers to the latest applied transaction time of the data in the backup. The
difference is significant because a replica can lag behind the primary for minutes or hours.

To find the difference, use the describe-db-snapshots command. Compare the
SnapshotDatabaseTime field, which is the database time of the replica backup, and the
OriginalSnapshotCreateTime field, which is the latest applied transaction on the primary
database.
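
For example, the following AWS CLI command is a sketch that lists both timestamps for each snapshot of a replica; myreadreplica is a placeholder instance identifier.

aws rds describe-db-snapshots \
--db-instance-identifier myreadreplica \
--query 'DBSnapshots[*].[DBSnapshotIdentifier,SnapshotDatabaseTime,OriginalSnapshotCreateTime]' \
--output table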

Oracle Data Guard requirements and limitations for RDS for


Oracle replicas
Before you create an RDS for Oracle replica, note the following requirements and limitations:

• If your primary DB instance uses the single-tenant configuration of the multitenant architecture,
consider the following:
• You must use Oracle Database 19c or higher with the Enterprise Edition.
• Your primary CDB instance must be in an ACTIVE lifecycle.
• You can't convert a non-CDB primary instance to a CDB instance and convert its replicas in the same
operation. Instead, delete the non-CDB replicas, convert the primary DB instance to a CDB, and then
create new replicas.
• Make sure that a logon trigger on a primary DB instance permits access to the RDS_DATAGUARD user
and to any user whose AUTHENTICATED_IDENTITY value is RDS_DATAGUARD or rdsdb. Also, the
trigger must not set the current schema for the RDS_DATAGUARD user.
• To avoid blocking connections from the Data Guard broker process, don't enable restricted
sessions. For more information about restricted sessions, see Enabling and disabling restricted
sessions (p. 1858).

Miscellaneous considerations for RDS for Oracle replicas


Before you create an RDS for Oracle replica, consider the following:


• If your DB instance is a source for one or more cross-Region replicas, the source DB retains its archived
redo logs until they are applied on all cross-Region replicas. The archived redo logs might result in
increased storage consumption.
• To avoid disrupting RDS automation, system triggers must permit specific users to log on to the
primary and replica database. System triggers include DDL, logon, and database role triggers. We
recommend that you add code to your triggers to exclude the users listed in the following sample
code:

-- Determine who the user is
SELECT SYS_CONTEXT('USERENV','AUTHENTICATED_IDENTITY') INTO CURRENT_USER FROM DUAL;
-- The following users should always be able to login to either the Primary or Replica
IF CURRENT_USER IN ('master_user', 'SYS', 'SYSTEM', 'RDS_DATAGUARD', 'rdsdb') THEN
RETURN;
END IF;

• Block change tracking is supported for read-only replicas, but not for mounted replicas. You can
change a mounted replica to a read-only replica, and then enable block change tracking. For more
information, see Enabling and disabling block change tracking (p. 1903).

Preparing to create an Oracle replica


Before you can begin using your replica, perform the following tasks.

Topics
• Enabling automatic backups (p. 1977)
• Enabling force logging mode (p. 1977)
• Changing your logging configuration (p. 1977)
• Setting the MAX_STRING_SIZE parameter (p. 1978)
• Planning compute and storage resources (p. 1978)

Enabling automatic backups


Before a DB instance can serve as a source DB instance, make sure to enable automatic backups on the
source DB instance. To learn how to perform this procedure, see Enabling automated backups (p. 593).

Enabling force logging mode


We recommend that you enable force logging mode. In force logging mode, the Oracle database writes
redo records even when NOLOGGING is used with data definition language (DDL) statements.

To enable force logging mode

1. Log in to your Oracle database using a client tool such as SQL Developer.
2. Enable force logging mode by running the following procedure.

exec rdsadmin.rdsadmin_util.force_logging(p_enable => true);

For more information about this procedure, see Setting force logging (p. 1889).

Changing your logging configuration


If you want to change your logging configuration, we recommend that you complete the changes before
making a DB instance the source for replicas. Also, we recommend that you not modify the logging


configuration after you create the replicas. Modifications can cause the online redo logging configuration
to get out of sync with the standby logging configuration.

Modify the logging configuration for a DB instance by using the Amazon RDS procedures
rdsadmin.rdsadmin_util.add_logfile and rdsadmin.rdsadmin_util.drop_logfile. For
more information, see Adding online redo logs (p. 1890) and Dropping online redo logs (p. 1891).

Setting the MAX_STRING_SIZE parameter


Before you create an Oracle replica, ensure that the setting of the MAX_STRING_SIZE parameter is
the same on the source DB instance and the replica. You can do this by associating them with the same
parameter group. If you have different parameter groups for the source and the replica, you can set
MAX_STRING_SIZE to the same value. For more information about setting this parameter, see Turning
on extended data types for a new DB instance (p. 1945).
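
For example, you can confirm the current setting on both the source DB instance and the replica with a query like the following.

SELECT NAME, VALUE FROM V$PARAMETER WHERE NAME = 'max_string_size';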

Planning compute and storage resources


Ensure that the source DB instance and its replicas are sized properly, in terms of compute and storage,
to suit their operational load. If a replica reaches compute, network, or storage resource capacity, the
replica stops receiving or applying changes from its source. Amazon RDS for Oracle doesn't intervene to
mitigate high replica lag between a source DB instance and its replicas. You can modify the storage and
CPU resources of a replica independently from its source and other replicas.
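
To watch for growing lag, you can monitor the ReplicaLag CloudWatch metric for the replica. The following AWS CLI command is a sketch; the instance identifier and time window are placeholders.

aws cloudwatch get-metric-statistics \
--namespace AWS/RDS \
--metric-name ReplicaLag \
--dimensions Name=DBInstanceIdentifier,Value=myreadreplica \
--start-time 2023-06-01T00:00:00Z \
--end-time 2023-06-01T01:00:00Z \
--period 300 \
--statistics Maximum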

Creating an RDS for Oracle replica in mounted mode


By default, Oracle replicas are read-only. To create a replica in mounted mode, use the console, the AWS
CLI, or the RDS API.

Console

To create a mounted replica from a source Oracle DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the Oracle DB instance that you want to use as the source for a mounted replica.
4. For Actions, choose Create replica.
5. For Replica mode, choose Mounted.
6. Choose the settings that you want to use. For DB instance identifier, enter a name for the read
replica. Adjust other settings as needed.
7. For Regions, choose the Region where the mounted replica will be launched.
8. Choose your instance size and storage type. We recommend that you use the same DB instance class
and storage type as the source DB instance for the read replica.
9. For Multi-AZ deployment, choose Create a standby instance to create a standby of your replica
in another Availability Zone for failover support for the mounted replica. Creating your mounted
replica as a Multi-AZ DB instance is independent of whether the source database is a Multi-AZ DB
instance.
10. Choose the other settings that you want to use.
11. Choose Create replica.

In the Databases page, the mounted replica has the role Replica.


AWS CLI
To create an Oracle replica in mounted mode, set --replica-mode to mounted in the AWS CLI
command create-db-instance-read-replica.

Example

For Linux, macOS, or Unix:

aws rds create-db-instance-read-replica \
--db-instance-identifier myreadreplica \
--source-db-instance-identifier mydbinstance \
--replica-mode mounted

For Windows:

aws rds create-db-instance-read-replica ^
--db-instance-identifier myreadreplica ^
--source-db-instance-identifier mydbinstance ^
--replica-mode mounted

To change a read-only replica to a mounted state, set --replica-mode to mounted in the AWS CLI
command modify-db-instance. To place a mounted replica in read-only mode, set --replica-mode to
open-read-only.

RDS API
To create an Oracle replica in mounted mode, specify ReplicaMode=mounted in the RDS API operation
CreateDBInstanceReadReplica.

Modifying the RDS for Oracle replica mode


To change the replica mode of an existing replica, use the console, AWS CLI, or RDS API. When you
change to mounted mode, the replica disconnects all active connections. When you change to read-only
mode, Amazon RDS initializes Active Data Guard.

The change operation can take a few minutes. During the operation, the DB instance status changes
to modifying. For more information about status changes, see Viewing Amazon RDS DB instance
status (p. 684).

Console

To change the replica mode of an Oracle replica from mounted to read-only

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the mounted replica database.
4. Choose Modify.
5. For Replica mode, choose Read-only.
6. Choose the other settings that you want to change.
7. Choose Continue.


8. For Scheduling of modifications, choose Apply immediately.


9. Choose Modify DB instance.

AWS CLI
To change a read replica to mounted mode, set --replica-mode to mounted in the AWS CLI command
modify-db-instance. To change a mounted replica to read-only mode, set --replica-mode to open-
read-only.

Example

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier myreadreplica \
    --replica-mode mode

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier myreadreplica ^
    --replica-mode mode

RDS API
To change a read-only replica to mounted mode, set ReplicaMode=mounted in ModifyDBInstance. To
change a mounted replica to read-only mode, set ReplicaMode=read-only.

Working with RDS for Oracle replica backups


You can create and restore backups of an RDS for Oracle replica. Both automatic backups and manual
snapshots are supported. For more information, see Backing up and restoring (p. 590). The following
sections describe the key differences between managing backups of a primary and an RDS for Oracle
replica.
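
For example, you can take a manual snapshot of a replica with a command along the following lines. The
instance and snapshot identifiers are placeholders.

aws rds create-db-snapshot \
    --db-instance-identifier myreadreplica \
    --db-snapshot-identifier my-replica-snapshot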

Turning on RDS for Oracle replica backups


An Oracle replica doesn't have automated backups turned on by default. You turn on automated backups
by setting the backup retention period to a positive nonzero value.

Console

To enable automated backups immediately

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases, and then choose the DB instance or Multi-AZ DB cluster
that you want to modify.
3. Choose Modify.
4. For Backup retention period, choose a positive nonzero value, for example 3 days.
5. Choose Continue.


6. Choose Apply immediately.


7. Choose Modify DB instance or Modify cluster to save your changes and enable automated backups.

AWS CLI

To enable automated backups, use the AWS CLI modify-db-instance or modify-db-cluster command.

Include the following parameters:

• --db-instance-identifier (or --db-cluster-identifier for a Multi-AZ DB cluster)
• --backup-retention-period
• --apply-immediately or --no-apply-immediately

In the following example, we enable automated backups by setting the backup retention period to three
days. The changes are applied immediately.

Example

For Linux, macOS, or Unix:

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --backup-retention-period 3 \
    --apply-immediately

For Windows:

aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --backup-retention-period 3 ^
    --apply-immediately

RDS API

To enable automated backups, use the RDS API ModifyDBInstance or ModifyDBCluster operation
with the following required parameters:

• DBInstanceIdentifier or DBClusterIdentifier
• BackupRetentionPeriod

Restoring an RDS for Oracle replica backup


You can restore an Oracle replica backup just as you can restore a backup of the primary instance. For
more information, see the following:

• Restoring from a DB snapshot (p. 615)
• Restoring a DB instance to a specified time (p. 660)

The main consideration when you restore a replica backup is determining the point in time to which you
are restoring. The database time refers to the latest applied transaction time of the data in the backup.
When you restore a replica backup, you restore to the database time, not the time when the backup
completed. The difference is significant because an RDS for Oracle replica can lag behind the primary by
minutes or hours. Thus, the database time of a replica backup, and therefore the point in time to which you
restore it, might be much earlier than the backup creation time.

To find the difference between the database time and the creation time, use the describe-db-snapshots
command. Compare the SnapshotDatabaseTime field, which is the database time of the replica backup (its
latest applied transaction), with the OriginalSnapshotCreateTime field, which is the time when the
snapshot was taken. The following example shows the difference between the two times:

aws rds describe-db-snapshots \
    --db-instance-identifier my-oracle-replica \
    --db-snapshot-identifier my-replica-snapshot

{
    "DBSnapshots": [
        {
            "DBSnapshotIdentifier": "my-replica-snapshot",
            "DBInstanceIdentifier": "my-oracle-replica",
            "SnapshotDatabaseTime": "2022-07-26T17:49:44Z",
            ...
            "OriginalSnapshotCreateTime": "2022-07-26T19:49:44Z"
        }
    ]
}
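
As a hedged sketch, restoring a new DB instance from this replica snapshot might look like the following
command. The target instance name is a placeholder, and the restored database contains data only up to the
SnapshotDatabaseTime shown in the output above.

aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier my-restored-instance \
    --db-snapshot-identifier my-replica-snapshot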

Performing an Oracle Data Guard switchover


In an Oracle Data Guard environment, a primary database supports one or more standby databases.
You can perform a managed, switchover-based role transition from a primary database to a standby
database.

Topics
• Overview of Oracle Data Guard switchover (p. 1982)
• Preparing for the Oracle Data Guard switchover (p. 1986)
• Initiating the Oracle Data Guard switchover (p. 1986)
• Monitoring the Oracle Data Guard switchover (p. 1988)

Overview of Oracle Data Guard switchover


A switchover is a role reversal between a primary database and a standby database. During a switchover,
the original primary database transitions to a standby role, while the original standby database
transitions to the primary role.


Amazon RDS supports a fully managed, switchover-based role transition for Oracle Database replicas.
The replicas can reside in separate AWS Regions or in different Availability Zones (AZs) of a single Region.
You can only initiate a switchover to a standby database that is mounted or open read-only.

Topics
• Benefits of Oracle Data Guard switchover (p. 1984)
• Supported Oracle Database versions (p. 1984)
• AWS Region support (p. 1984)
• Cost of Oracle Data Guard switchover (p. 1984)
• How Oracle Data Guard switchover works (p. 1985)


Benefits of Oracle Data Guard switchover


Just as for RDS for Oracle read replicas, a managed switchover relies on Oracle Data Guard. The
operation is designed to have zero data loss. Amazon RDS automates the following aspects of the
switchover:

• Reverses the roles of your primary database and specified standby database, putting the new standby
database in the same state (mounted or read-only) as the original standby
• Ensures data consistency
• Maintains your replication configuration after the transition
• Supports repeated reversals, allowing your new standby database to return to its original primary role

Supported Oracle Database versions


Oracle Data Guard switchover is supported for the following releases:

• Oracle Database 19c
• Oracle Database 12c Release 2 (12.2)
• Oracle Database 12c Release 1 (12.1) using PSU 12.1.0.2.v10 or higher

AWS Region support


Oracle Data Guard switchover is available in the following AWS Regions:

• Asia Pacific (Mumbai)
• Asia Pacific (Osaka)
• Asia Pacific (Seoul)
• Asia Pacific (Singapore)
• Asia Pacific (Sydney)
• Asia Pacific (Tokyo)
• Canada (Central)
• Europe (Frankfurt)
• Europe (Ireland)
• Europe (London)
• Europe (Paris)
• Europe (Stockholm)
• South America (São Paulo)
• US East (N. Virginia)
• US East (Ohio)
• US West (N. California)
• US West (Oregon)
• AWS GovCloud (US-East)
• AWS GovCloud (US-West)

Cost of Oracle Data Guard switchover


The Oracle Data Guard switchover feature doesn't incur additional costs. Oracle Database Enterprise
Edition includes support for standby databases in mounted mode. To open standby databases in read-
only mode, you need the Oracle Active Data Guard option.


How Oracle Data Guard switchover works


Oracle Data Guard switchover is a fully managed operation. You initiate the switchover for a standby
database by issuing the CLI command switchover-read-replica. Then Amazon RDS modifies the
primary and standby roles in your replication configuration.

The original standby and original primary are the roles that exist before the switchover. The new standby
and new primary are the roles that exist after the switchover. A bystander replica is a replica database
that serves as a standby database in the Oracle Data Guard environment but is not switching roles.

Topics
• Stages of the Oracle Data Guard switchover (p. 1985)
• After the Oracle Data Guard switchover (p. 1985)

Stages of the Oracle Data Guard switchover

To perform the switchover, Amazon RDS must take the following steps:

1. Block new transactions on the original primary database. During the switchover, Amazon RDS
interrupts replication for all databases in your Oracle Data Guard configuration, and the original
primary database can't process write requests.
2. Ship unapplied transactions to the original standby database, and apply them.
3. Restart the new standby database in read-only or mounted mode. The mode depends on the open
state of the original standby database before the switchover.
4. Open the new primary database in read/write mode.

After the Oracle Data Guard switchover

Amazon RDS switches the roles of the primary and standby database. You are responsible for
reconnecting your application and performing any other desired configuration.

Topics
• Success criteria (p. 1985)
• Connection to the new primary database (p. 1985)
• Configuration of the new primary database (p. 1986)

Success criteria

The Oracle Data Guard switchover is successful when the original standby database does the following:

• Transitions to its role as new primary database
• Completes its reconfiguration

To limit downtime, your new primary database becomes active as soon as possible. Because Amazon
RDS configures bystander replicas asynchronously, these replicas might become active after the original
primary database.

Connection to the new primary database

Amazon RDS won't propagate your current database connections to the new primary database after the
switchover. After the Oracle Data Guard switchover completes, reconnect your application to the new
primary database.


Configuration of the new primary database


To perform a switchover to the new primary database, Amazon RDS changes the mode of the original
standby database to open. The change in role is the only change to the database. Amazon RDS doesn't
set up features such as Multi-AZ replication.

If you perform a switchover to a cross-Region replica with different options, the new primary database
keeps its own options. Amazon RDS won't migrate the options on the original primary database. If the
original primary database had options such as SSL, NNE, OEM, and OEM_AGENT, Amazon RDS doesn't
propagate them to the new primary database.

Preparing for the Oracle Data Guard switchover


Before initiating the Oracle Data Guard switchover, make sure that your replication environment meets
the following requirements:

• The original standby database is mounted or open read-only.
• Automatic backups are enabled on the original standby database.
• The original primary database and the original standby database are in an available state.
• The original primary database and the original standby database have no pending maintenance
actions.
• The original standby database is in the replicating state.
• You aren't attempting to initiate a switchover when either the primary database or standby database is
currently in a switchover lifecycle. If a replica database is reconfiguring after a switchover, Amazon RDS
prevents you from initiating another switchover.
Note
A bystander replica is a replica in the Oracle Data Guard configuration that isn't the target of
the switchover. Bystander replicas can be in any state during the switchover.
• The original standby database has a configuration that is as close as desired to the original primary
database. Assume a scenario where the original primary and original standby databases have different
options. After the switchover completes, Amazon RDS doesn't automatically reconfigure the new
primary database to have the same options as the original primary database.
• You configure your desired Multi-AZ deployment before initiating a switchover. Amazon RDS doesn't
manage Multi-AZ as part of the switchover. The Multi-AZ deployment remains as it is.

Assume that db_maz is the primary database in a Multi-AZ deployment, and db_saz is a Single-AZ
replica. You initiate a switchover from db_maz to db_saz. Afterward, db_maz is a Multi-AZ replica
database, and db_saz is a Single-AZ primary database. The new primary database is now unprotected
by a Multi-AZ deployment.
• In preparation for a cross-Region switchover, the primary database doesn't use the same option group
as a DB instance outside of the replication configuration. For a cross-Region switchover to succeed, the
current primary database and its read replicas must be the only DB instances to use the option group
of the current primary database. Otherwise, Amazon RDS prevents the switchover.
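
One hedged way to spot-check several of these requirements from the AWS CLI is to inspect the standby's
status, replica mode, backup retention period, and pending maintenance actions before you initiate the
switchover. The instance name and ARN below are placeholders.

# Check instance status, replica mode, backup retention, and replication status
aws rds describe-db-instances \
    --db-instance-identifier my-standby-instance \
    --query 'DBInstances[0].[DBInstanceStatus,ReplicaMode,BackupRetentionPeriod,StatusInfos]'

# Check for pending maintenance actions on the standby
aws rds describe-pending-maintenance-actions \
    --resource-identifier arn:aws:rds:us-east-1:123456789012:db:my-standby-instance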

Initiating the Oracle Data Guard switchover


You can switch over an RDS for Oracle read replica to the primary role, and the former primary DB
instance to a replica role.

Console

To switch over an Oracle read replica to the primary DB role

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.


2. In the Amazon RDS console, choose Databases.

The Databases pane appears. Each read replica shows Replica in the Role column.
3. Choose the read replica that you want to switch over to the primary role.
4. For Actions, choose Switch over replica.
5. Choose I acknowledge. Then choose Switch over replica.
6. On the Databases page, monitor the progress of the switchover.

When the switchover completes, the role of the switchover target changes from Replica to Source.

AWS CLI

To switch over an Oracle replica to the primary DB role, use the AWS CLI switchover-read-replica
command. The following examples make the Oracle replica named replica-to-be-made-primary
into the new primary database.

Example

For Linux, macOS, or Unix:

aws rds switchover-read-replica \
    --db-instance-identifier replica-to-be-made-primary

For Windows:

aws rds switchover-read-replica ^
    --db-instance-identifier replica-to-be-made-primary

RDS API

To switch over an Oracle replica to the primary DB role, call the Amazon RDS API
SwitchoverReadReplica operation with the required parameter DBInstanceIdentifier. This
parameter specifies the name of the Oracle replica that you want to assume the primary DB role.


Monitoring the Oracle Data Guard switchover


To check the status of your instances, use the AWS CLI command describe-db-instances. The
following command checks the status of the DB instance orcl2. This database was a standby database
before the switchover, but is the new primary database after the switchover.

aws rds describe-db-instances \
    --db-instance-identifier orcl2
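
If you only need the instance status, a --query filter such as the following can narrow the output. This is
one possible sketch, not the only approach.

aws rds describe-db-instances \
    --db-instance-identifier orcl2 \
    --query 'DBInstances[0].DBInstanceStatus' \
    --output text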

To confirm that the switchover completed successfully, query V$DATABASE.OPEN_MODE. Check that the
value for the new primary database is READ WRITE.

SELECT OPEN_MODE FROM V$DATABASE;

To look for switchover-related events, use the AWS CLI command describe-events. The following
example looks for events on the orcl2 instance.

aws rds describe-events \
    --source-identifier orcl2 \
    --source-type db-instance

Troubleshooting RDS for Oracle replicas


This section describes possible replication problems and solutions.

Topics
• Monitoring Oracle replication lag (p. 1988)
• Troubleshooting Oracle replication failure after adding or modifying triggers (p. 1989)

Monitoring Oracle replication lag


To monitor replication lag in Amazon CloudWatch, view the Amazon RDS ReplicaLag metric. For more
information about replication lag time, see Monitoring read replication (p. 449) and Amazon CloudWatch
metrics for Amazon RDS (p. 806).
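
For example, a hedged CloudWatch CLI sketch for retrieving recent ReplicaLag datapoints might look like the
following. The instance name and time window are placeholders.

aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name ReplicaLag \
    --dimensions Name=DBInstanceIdentifier,Value=myreadreplica \
    --statistics Average \
    --start-time 2023-01-01T00:00:00Z \
    --end-time 2023-01-01T01:00:00Z \
    --period 300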

For a read replica, if the lag time is too long, query the following views:

• V$ARCHIVED_LOG – Shows which commits have been applied to the read replica.
• V$DATAGUARD_STATS – Shows a detailed breakdown of the components that make up the
ReplicaLag metric.
• V$DATAGUARD_STATUS – Shows the log output from Oracle's internal replication processes.

For a mounted replica, if the lag time is too long, you can't query the V$ views. Instead, do the following:

• Check the ReplicaLag metric in CloudWatch.
• Check the alert log file for the replica in the console. Look for errors in the recovery messages. The
messages include the log sequence number, which you can compare to the primary sequence number.
For more information, see Oracle database log files (p. 924).
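
As an alternative to the console, you can list and download the replica's log files from the AWS CLI, as in
the following sketch. The alert log name shown is a typical example and might differ on your instance.

# List the available log files for the replica
aws rds describe-db-log-files \
    --db-instance-identifier myreadreplica

# Download a portion of the alert log (the log file name is illustrative)
aws rds download-db-log-file-portion \
    --db-instance-identifier myreadreplica \
    --log-file-name trace/alert_ORCL.log \
    --output text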


Troubleshooting Oracle replication failure after adding or modifying triggers

If you add or modify triggers and replication fails afterward, the triggers might be the cause. Ensure that
your triggers exclude the following user accounts, which RDS requires for replication:

• User accounts with administrator privileges
• SYS
• SYSTEM
• RDS_DATAGUARD
• rdsdb

For more information, see Miscellaneous considerations for RDS for Oracle replicas (p. 1976).


Adding options to Oracle DB instances


In Amazon RDS, an option is an additional feature. Following, you can find a description of options that
you can add to Amazon RDS instances running the Oracle DB engine.

Topics
• Overview of Oracle DB options (p. 1990)
• Amazon S3 integration (p. 1992)
• Oracle Application Express (APEX) (p. 2009)
• Amazon EFS integration (p. 2020)
• Oracle Java virtual machine (p. 2031)
• Oracle Enterprise Manager (p. 2034)
• Oracle Label Security (p. 2049)
• Oracle Locator (p. 2052)
• Oracle Multimedia (p. 2055)
• Oracle native network encryption (p. 2057)
• Oracle OLAP (p. 2065)
• Oracle Secure Sockets Layer (p. 2068)
• Oracle Spatial (p. 2075)
• Oracle SQLT (p. 2078)
• Oracle Statspack (p. 2084)
• Oracle time zone (p. 2087)
• Oracle time zone file autoupgrade (p. 2091)
• Oracle Transparent Data Encryption (p. 2097)
• Oracle UTL_MAIL (p. 2099)
• Oracle XML DB (p. 2102)

Overview of Oracle DB options


To enable options for your Oracle database, add them to an option group, and then associate the option
group with your DB instance. For more information, see Working with option groups (p. 331).

Topics
• Summary of Oracle Database options (p. 1990)
• Options supported for different editions (p. 1991)
• Memory requirements for specific options (p. 1991)

Summary of Oracle Database options


You can add the following options for Oracle DB instances.

Option Option ID

Amazon S3 integration (p. 1992) S3_INTEGRATION

Oracle Application Express (APEX) (p. 2009) APEX, APEX-DEV


Oracle Enterprise Manager (p. 2034) OEM, OEM_AGENT

Oracle Java virtual machine (p. 2031) JVM

Oracle Label Security (p. 2049) OLS

Oracle Locator (p. 2052) LOCATOR

Oracle Multimedia (p. 2055) MULTIMEDIA

Oracle native network encryption (p. 2057) NATIVE_NETWORK_ENCRYPTION

Oracle OLAP (p. 2065) OLAP

Oracle Secure Sockets Layer (p. 2068) SSL

Oracle Spatial (p. 2075) SPATIAL

Oracle SQLT (p. 2078) SQLT

Oracle Statspack (p. 2084) STATSPACK

Oracle time zone (p. 2087) TIMEZONE

Oracle time zone file autoupgrade (p. 2091) TIMEZONE_FILE_AUTOUPGRADE

Oracle Transparent Data Encryption (p. 2097) TDE

Oracle UTL_MAIL (p. 2099) UTL_MAIL

Oracle XML DB (p. 2102) XMLDB

Options supported for different editions


RDS for Oracle prevents you from adding options to an edition if they aren't supported. To find out
which RDS options are supported in different Oracle Database editions, use the command aws rds
describe-option-group-options. The following example lists supported options for Oracle
Database 19c Enterprise Edition.

aws rds describe-option-group-options \
    --engine-name oracle-ee \
    --major-engine-version 19

For more information, see describe-option-group-options in the AWS CLI Command Reference.

Memory requirements for specific options


Some options require additional memory to run on your DB instance. For example, Oracle Enterprise
Manager Database Control uses about 300 MB of RAM. If you enable this option for a small DB instance,
you might encounter performance problems due to memory constraints. You can adjust the Oracle
parameters so that the database requires less RAM. Alternatively, you can scale up to a larger DB
instance.
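
For example, scaling up to a larger DB instance class might look like the following sketch. The instance
name and target class are placeholders.

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --db-instance-class db.m5.xlarge \
    --apply-immediately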


Amazon S3 integration
You can transfer files between your RDS for Oracle DB instance and an Amazon S3 bucket. You can use
Amazon S3 integration with Oracle Database features such as Oracle Data Pump. For example, you can
download Data Pump files from Amazon S3 to your RDS for Oracle DB instance. For more information,
see Importing data into Oracle on Amazon RDS (p. 1947).
Note
Your DB instance and your Amazon S3 bucket must be in the same AWS Region.

Topics
• Configuring IAM permissions for RDS for Oracle integration with Amazon S3 (p. 1992)
• Adding the Amazon S3 integration option (p. 2000)
• Transferring files between Amazon RDS for Oracle and an Amazon S3 bucket (p. 2001)
• Troubleshooting Amazon S3 integration (p. 2007)
• Removing the Amazon S3 integration option (p. 2008)

Configuring IAM permissions for RDS for Oracle integration with Amazon S3
For RDS for Oracle to integrate with Amazon S3, your DB instance must have access to an Amazon S3
bucket. The Amazon VPC used by your DB instance doesn't need to provide access to the Amazon S3
endpoints.

RDS for Oracle supports uploading files from a DB instance in one account to an Amazon S3 bucket in a
different account. Where additional steps are required, they are noted in the following sections.

Topics
• Step 1: Create an IAM policy for your Amazon RDS role (p. 1992)
• Step 2: (Optional) Create an IAM policy for your Amazon S3 bucket (p. 1996)
• Step 3: Create an IAM role for your DB instance and attach your policy (p. 1997)
• Step 4: Associate your IAM role with your RDS for Oracle DB instance (p. 1999)

Step 1: Create an IAM policy for your Amazon RDS role


In this step, you create an AWS Identity and Access Management (IAM) policy with the permissions
required to transfer files from your Amazon S3 bucket to your RDS DB instance. This step assumes that
you have already created an S3 bucket.

Before you create the policy, note the following pieces of information:

• The Amazon Resource Name (ARN) for your bucket
• The ARN for your AWS KMS key, if your bucket uses SSE-KMS or SSE-S3 encryption
Note
An RDS for Oracle DB instance can't access Amazon S3 buckets encrypted with SSE-C.

For more information, see Protecting data using server-side encryption in the Amazon Simple Storage
Service User Guide.


Console

To create an IAM policy to allow Amazon RDS to access your Amazon S3 bucket

1. Open the IAM Management Console.


2. Under Access management, choose Policies.
3. Choose Create Policy.
4. On the Visual editor tab, choose Choose a service, and then choose S3.
5. For Actions, choose Expand all, and then choose the bucket permissions and object permissions
required to transfer files from an Amazon S3 bucket to Amazon RDS. For example, do the following:

• Expand List, and then select ListBucket.
• Expand Read, and then select GetObject.
• Expand Write, and then select PutObject and DeleteObject.
• Expand Permissions management, and then select PutObjectAcl. This permission is necessary
if you plan to upload files to a bucket owned by a different account, and this account needs full
control of the bucket contents.

Object permissions are permissions for object operations in Amazon S3. You must grant them
for objects in a bucket, not the bucket itself. For more information, see Permissions for object
operations.
6. Choose Resources, and then do the following:

a. Choose Specific.
b. For bucket, choose Add ARN. Enter your bucket ARN. The bucket name is filled in automatically.
Then choose Add.
c. If the object resource is shown, either choose Add ARN to add resources manually or choose
Any.
Note
You can set Amazon Resource Name (ARN) to a more specific ARN value to allow
Amazon RDS to access only specific files or folders in an Amazon S3 bucket. For more
information about how to define an access policy for Amazon S3, see Managing access
permissions to your Amazon S3 resources.
7. (Optional) Choose Add additional permissions to add resources to the policy. For example, do the
following:

a. If your bucket is encrypted with a custom KMS key, select KMS for the service.
b. For Manual actions, select the following:

• Encrypt
• ReEncryptFrom and ReEncryptTo
• Decrypt
• DescribeKey
• GenerateDataKey
c. For Resources, choose Specific.
d. For key, choose Add ARN. Enter the ARN of your custom key as the resource, and then choose
Add.

For more information, see Protecting Data Using Server-Side Encryption with KMS keys Stored
in AWS Key Management Service (SSE-KMS) in the Amazon Simple Storage Service User Guide.
e. If you want Amazon RDS to access other buckets, add the ARNs for these buckets.
Optionally, you can also grant access to all buckets and objects in Amazon S3.


8. Choose Next: Tags and then Next: Review.


9. For Name, enter a name for your IAM policy, for example rds-s3-integration-policy. You
use this name when you create an IAM role to associate with your DB instance. You can also add an
optional Description value.
10. Choose Create policy.

AWS CLI
Create an AWS Identity and Access Management (IAM) policy that grants Amazon RDS access to an
Amazon S3 bucket. After you create the policy, note the ARN of the policy. You need the ARN for a
subsequent step.

Include the appropriate actions in the policy based on the type of access required:

• GetObject – Required to transfer files from an Amazon S3 bucket to Amazon RDS.
• ListBucket – Required to transfer files from an Amazon S3 bucket to Amazon RDS.
• PutObject – Required to transfer files from Amazon RDS to an Amazon S3 bucket.

The following AWS CLI command creates an IAM policy named rds-s3-integration-policy with
these permissions. It grants access to the bucket identified by your-s3-bucket-arn.

Example
For Linux, macOS, or Unix:

aws iam create-policy \
    --policy-name rds-s3-integration-policy \
    --policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "s3integration",
"Action": [
"s3:GetObject",
"s3:ListBucket",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::your-s3-bucket-arn",
"arn:aws:s3:::your-s3-bucket-arn/*"
]
}
]
}'

The following example includes permissions for custom KMS keys.

aws iam create-policy \
    --policy-name rds-s3-integration-policy \
    --policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "s3integration",
"Action": [
"s3:GetObject",
"s3:ListBucket",
"s3:PutObject",

"kms:Decrypt",
"kms:Encrypt",
"kms:ReEncryptFrom",
"kms:ReEncryptTo",
"kms:GenerateDataKey",
"kms:DescribeKey"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::your-s3-bucket-arn",
"arn:aws:s3:::your-s3-bucket-arn/*",
"arn:aws:kms:::your-kms-arn"
]
}
]
}'

For Windows:

aws iam create-policy ^
    --policy-name rds-s3-integration-policy ^
    --policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "s3integration",
"Action": [
"s3:GetObject",
"s3:ListBucket",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::your-s3-bucket-arn",
"arn:aws:s3:::your-s3-bucket-arn/*"
]
}
]
}'

The following example includes permissions for custom KMS keys.

aws iam create-policy ^
    --policy-name rds-s3-integration-policy ^
    --policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "s3integration",
"Action": [
"s3:GetObject",
"s3:ListBucket",
"s3:PutObject",
"kms:Decrypt",
"kms:Encrypt",
"kms:ReEncryptFrom",
"kms:ReEncryptTo",
"kms:GenerateDataKey",
"kms:DescribeKey"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::your-s3-bucket-arn",
"arn:aws:s3:::your-s3-bucket-arn/*",
"arn:aws:kms:::your-kms-arn"
]
}
]
}'

Step 2: (Optional) Create an IAM policy for your Amazon S3 bucket


This step is necessary only in the following conditions:

• You plan to upload files to an Amazon S3 bucket from one account (account A) and access them from a
different account (account B).
• Account B owns the bucket.
• Account B needs full control of objects loaded into the bucket.

If the preceding conditions don't apply to you, skip to Step 3: Create an IAM role for your DB instance
and attach your policy (p. 1997).

To create your bucket policy, make sure you have the following:

• The account ID for account A
• The user name for account A
• The ARN value for the Amazon S3 bucket in account B

Console

To create or edit a bucket policy

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to create a bucket policy for or
whose bucket policy you want to edit.
3. Choose Permissions.
4. Under Bucket policy, choose Edit. This opens the Edit bucket policy page.
5. On the Edit bucket policy page, explore Policy examples in the Amazon S3 User Guide, choose
Policy generator to generate a policy automatically, or edit the JSON in the Policy section.

If you choose Policy generator, the AWS Policy Generator opens in a new window:

a. On the AWS Policy Generator page, in Select Type of Policy, choose S3 Bucket Policy.
b. Add a statement by entering the information in the provided fields, and then choose Add
Statement. Repeat for as many statements as you would like to add. For more information
about these fields, see the IAM JSON policy elements reference in the IAM User Guide.
Note
For convenience, the Edit bucket policy page displays the Bucket ARN (Amazon
Resource Name) of the current bucket above the Policy text field. You can copy this
ARN for use in the statements on the AWS Policy Generator page.
c. After you finish adding statements, choose Generate Policy.
d. Copy the generated policy text, choose Close, and return to the Edit bucket policy page in the
Amazon S3 console.
6. In the Policy box, edit the existing policy or paste the bucket policy from the Policy generator. Make
sure to resolve security warnings, errors, general warnings, and suggestions before you save your
policy.

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Example permissions",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::account-A-ID:user/account-A-user"
},
"Action": [
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::account-B-bucket-arn",
"arn:aws:s3:::account-B-bucket-arn/*"
]
}
]
}

7. Choose Save changes, which returns you to the Bucket Permissions page.

Step 3: Create an IAM role for your DB instance and attach your policy
This step assumes that you have created the IAM policy in Step 1: Create an IAM policy for your Amazon
RDS role (p. 1992). In this step, you create a role for your RDS for Oracle DB instance and then attach
your policy to the role.

Console

To create an IAM role to allow Amazon RDS to access an Amazon S3 bucket

1. Open the IAM Management Console.


2. In the navigation pane, choose Roles.
3. Choose Create role.
4. Choose AWS service.
5. For Use cases for other AWS services:, choose RDS and then RDS – Add Role to Database. Then
choose Next.
6. For Search under Permissions policies, enter the name of the IAM policy you created in Step 1:
Create an IAM policy for your Amazon RDS role (p. 1992), and select the policy when it appears in
the list. Then choose Next.
7. For Role name, enter a name for your IAM role, for example, rds-s3-integration-role. You can
also add an optional Description value.
8. Choose Create role.

AWS CLI

To create a role and attach your policy to it

1. Create an IAM role that Amazon RDS can assume on your behalf to access your Amazon S3 buckets.

We recommend using the aws:SourceArn and aws:SourceAccount global condition context keys
in resource-based trust relationships to limit the service's permissions to a specific resource. This is
the most effective way to protect against the confused deputy problem.

You might use both global condition context keys and have the aws:SourceArn value contain the
account ID. In this case, the aws:SourceAccount value and the account in the aws:SourceArn
value must use the same account ID when used in the same statement.


• Use aws:SourceArn if you want cross-service access for a single resource.
• Use aws:SourceAccount if you want to allow any resource in that account to be associated with
the cross-service use.

In the trust relationship, make sure to use the aws:SourceArn global condition context key with
the full Amazon Resource Name (ARN) of the resources accessing the role.

The following AWS CLI command creates the role named rds-s3-integration-role for this
purpose.

Example

For Linux, macOS, or Unix:

aws iam create-role \
    --role-name rds-s3-integration-role \
    --assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "rds.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"aws:SourceAccount": my_account_ID,
"aws:SourceArn": "arn:aws:rds:Region:my_account_ID:db:dbname"
}
}
}
]
}'

For Windows:

aws iam create-role ^
    --role-name rds-s3-integration-role ^
    --assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "rds.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"aws:SourceAccount": my_account_ID,
"aws:SourceArn": "arn:aws:rds:Region:my_account_ID:db:dbname"
}
}
}
]
}'


For more information, see Creating a role to delegate permissions to an IAM user in the IAM User
Guide.
2. After the role is created, note the ARN of the role. You need the ARN for a subsequent step.
3. Attach the policy you created to the role you created.

The following AWS CLI command attaches the policy to the role named rds-s3-integration-
role.

Example

For Linux, macOS, or Unix:

aws iam attach-role-policy \
    --policy-arn your-policy-arn \
    --role-name rds-s3-integration-role

For Windows:

aws iam attach-role-policy ^
    --policy-arn your-policy-arn ^
    --role-name rds-s3-integration-role

Replace your-policy-arn with the policy ARN that you noted in a previous step.
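
If you need to retrieve the ARNs noted in the previous steps, a sketch like the following can help. It
assumes the example role and policy names used above.

# Look up the role ARN
aws iam get-role \
    --role-name rds-s3-integration-role \
    --query 'Role.Arn' \
    --output text

# Look up the customer managed policy ARN
aws iam list-policies \
    --scope Local \
    --query 'Policies[?PolicyName==`rds-s3-integration-policy`].Arn' \
    --output text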

Step 4: Associate your IAM role with your RDS for Oracle DB instance
The last step in configuring permissions for Amazon S3 integration is associating your IAM role with your
DB instance. Note the following requirements:

• You must have access to an IAM role with the required Amazon S3 permissions policy attached to it.
• You can only associate one IAM role with your RDS for Oracle DB instance at a time.
• Your DB instance must be in the Available state.

Console

To associate your IAM role with your RDS for Oracle DB instance

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://
console.aws.amazon.com/rds/.
2. Choose Databases from the navigation pane.
3. Choose the RDS for Oracle DB instance name to display its details.
4. On the Connectivity & security tab, scroll down to the Manage IAM roles section at the bottom of
the page.
5. For Add IAM roles to this instance, choose the role that you created in Step 3: Create an IAM role
for your DB instance and attach your policy (p. 1997).
6. For Feature, choose S3_INTEGRATION.


7. Choose Add role.

AWS CLI

The following AWS CLI command adds the role to an Oracle DB instance named mydbinstance.

Example

For Linux, macOS, or Unix:

aws rds add-role-to-db-instance \
    --db-instance-identifier mydbinstance \
    --feature-name S3_INTEGRATION \
    --role-arn your-role-arn

For Windows:

aws rds add-role-to-db-instance ^
    --db-instance-identifier mydbinstance ^
    --feature-name S3_INTEGRATION ^
    --role-arn your-role-arn

Replace your-role-arn with the role ARN that you noted in a previous step. S3_INTEGRATION must
be specified for the --feature-name option.
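
To confirm that the role association succeeded, you can check the AssociatedRoles field in the instance
description, as in the following sketch. The instance name is the example used above.

aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query 'DBInstances[0].AssociatedRoles'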

Adding the Amazon S3 integration option


To integrate Amazon RDS for Oracle with Amazon S3, your DB instance must be associated with an
option group that includes the S3_INTEGRATION option.
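
As a hedged sketch, creating an option group, adding the S3_INTEGRATION option, and attaching the group to
an instance from the AWS CLI might look like the following. The group name, engine edition, engine version,
and instance name are placeholders, and the option version can vary.

# Create an option group for the target engine edition and version
aws rds create-option-group \
    --option-group-name my-s3-option-group \
    --engine-name oracle-ee \
    --major-engine-version 19 \
    --option-group-description "S3 integration for Oracle"

# Add the S3_INTEGRATION option to the group
aws rds add-option-to-option-group \
    --option-group-name my-s3-option-group \
    --options "OptionName=S3_INTEGRATION,OptionVersion=1.0" \
    --apply-immediately

# Associate the option group with the DB instance
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --option-group-name my-s3-option-group \
    --apply-immediately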

Console

To configure an option group for Amazon S3 integration

1. Create a new option group or identify an existing option group to which you can add the
S3_INTEGRATION option.

For information abo
