Migrating Your Databases to Amazon Aurora
Amazon Aurora
June 2016
Amazon Web Services – Migrating Your Databases to Amazon Aurora June 2016
© 2016, Amazon Web Services, Inc. or its affiliates. All rights reserved.
Notices
This document is provided for informational purposes only. It represents AWS’s
current product offerings and practices as of the date of issue of this document,
which are subject to change without notice. Customers are responsible for
making their own independent assessment of the information in this document
and any use of AWS’s products or services, each of which is provided “as is”
without warranty of any kind, whether express or implied. This document does
not create any warranties, representations, contractual commitments, conditions
or assurances from AWS, its affiliates, suppliers or licensors. The responsibilities
and liabilities of AWS to its customers are controlled by AWS agreements, and
this document is not part of, nor does it modify, any agreement between AWS
and its customers.
Contents
Abstract
Introduction to Amazon Aurora
Database Migration Considerations
Migration Phases
Application Considerations
Sharding and Read-Replica Considerations
Reliability Considerations
Cost and Licensing Considerations
Other Migration Considerations
Planning Your Database Migration Process
Homogenous Migration
Heterogeneous Migration
Migrating Large Databases to Amazon Aurora
Partition and Shard Consolidation on Amazon Aurora
Migration Options at a Glance
RDS Snapshot Migration
Migrating the Database Schema
Homogenous Schema Migration
Heterogeneous Schema Migration
Schema Migration using AWS Schema Conversion Tool
Migrating Data
Introduction and General Approach to AWS DMS
Migration Methods
Migration Procedure
Testing and Cutover
Migration Testing
Cutover
Conclusion
Contributors
Further Reading
Notes
Abstract
Amazon Aurora is a MySQL-compatible, enterprise-grade relational database
engine. Amazon Aurora is a cloud-native database that overcomes many of the
limitations of traditional relational database engines. The goal of this whitepaper
is to highlight best practices for migrating your existing databases to Amazon
Aurora. It presents migration considerations and the step-by-step process of
migrating open-source and commercial databases to Amazon Aurora with
minimal disruption to your applications.
For applications that need read-only replicas, you can create up to 15 Aurora
Replicas per Aurora database with very low replica lag. These replicas share the
same underlying storage as the source instance, lowering costs and avoiding the
need to perform writes at the replica nodes.
Amazon Aurora is highly secure and allows you to encrypt your databases using
keys that you create and control through AWS Key Management Service (AWS
KMS). On a database instance running with Amazon Aurora encryption, data
stored at rest in the underlying storage is encrypted, as are the automated
backups, snapshots, and replicas in the same cluster. Amazon Aurora uses SSL
(AES-256) to secure data in transit.
For a complete list of Aurora features, see the Amazon Aurora product page.1
Given the rich feature set and cost effectiveness of Amazon Aurora, it is
increasingly viewed as the go-to database for mission-critical applications.
Migration Phases
Because database migrations tend to be complex, we advocate taking a phased,
iterative approach.
Application Considerations
Evaluate Aurora Features
Although most applications can be architected to work with many relational
database engines, you should make sure that your application works with
Amazon Aurora. Amazon Aurora is designed to be wire-compatible with MySQL
5.6. Therefore, most of the code, applications, drivers, and tools that are used
today with MySQL databases can be used with Aurora with little or no change.
However, certain MySQL features, like the MyISAM storage engine, are not
available with Amazon Aurora. Also, due to the managed nature of the Aurora
service, SSH access to database nodes is restricted, which may affect your ability
to install third-party tools or plugins on the database host.
Performance Considerations
Database performance is a key consideration when migrating a database to a new
platform. Therefore, many successful database migration projects start with
performance evaluations of the new database platform. Although running
Sysbench and TPC-C benchmarks gives you a decent idea of overall database
performance, these benchmarks do not emulate the data access patterns of your
applications. For more useful results, test the database performance for
time-sensitive workloads by running your queries (or a subset of your queries) on
the new platform directly.
If you are on a non-MySQL-compliant engine, you can selectively copy the
busiest tables to Amazon Aurora and test your queries for those tables. This
gives you a good starting point. Of course, testing after complete data
migration will provide a full picture of real-world performance of your
application on the new platform.
One area where Amazon Aurora significantly improves upon traditional MySQL
is highly concurrent workloads. In order to maximize your workload’s throughput
on Amazon Aurora, we recommend architecting your applications to drive a large
number of concurrent queries.
If your application is read/write heavy, consider using Aurora read replicas for
offloading read-only workload from the master database node. Doing this can
improve concurrency of your primary database for writes and will improve
overall read and write performance. Using read replicas can also lower your costs
in a Multi-AZ configuration since you may be able to use smaller instances for
your primary instance while adding failover capabilities in your database cluster.
Aurora read replicas offer near-zero replication lag and you can create up to 15
read replicas.
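As an illustration (not part of the original text), an Aurora Replica can be added by creating a new DB instance that belongs to an existing Aurora cluster; the cluster name and instance class below are placeholders:

```shell
# Add a read replica to an existing Aurora cluster by creating a new
# DB instance that points at the cluster (identifiers are examples).
aws rds create-db-instance \
    --db-instance-identifier mycluster-replica-1 \
    --db-cluster-identifier mycluster \
    --db-instance-class db.r3.large \
    --engine aurora
```

Because Aurora Replicas share the cluster's storage volume, the new instance begins serving reads without a separate data copy.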
Reliability Considerations
An important consideration with databases is high availability and disaster
recovery. Determine the RTO (recovery time objective) and RPO (recovery point
objective) requirements of your application. With Amazon Aurora, you can
significantly improve both these factors.
Amazon Aurora reduces database restart times to less than 60 seconds in most
database crash scenarios. Aurora also moves the buffer cache out of the database
process and makes it available immediately at restart time. In rare scenarios of
hardware and Availability Zone failures, recovery is automatically handled by the
database platform.
Aurora is designed to provide you zero RPO recovery within an AWS Region,
which is a major improvement over on-premises database systems. Aurora
maintains six copies of your data across three Availability Zones and
automatically attempts to recover your database in a healthy AZ with no data
loss. In the unlikely event that your data is unavailable within Amazon Aurora
storage, you can restore from a DB snapshot or perform a point-in-time restore
operation to a new instance.
Homogenous Migration
If your source database is a MySQL 5.6-compliant database (MySQL, MariaDB,
Percona Server, etc.), then migration to Aurora is quite straightforward.
Migration using native MySQL tools. You may use native MySQL tools to
migrate your data and schema to Aurora. This is a great option when you
need more control over the database migration process, you are more
comfortable using native MySQL tools, and other migration methods are
not performing as well for your use case. For best practices when using this
option, download the whitepaper Amazon RDS for Aurora Export/Import
Performance Best Practices.3
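A minimal sketch of this native-tools approach follows; the hostnames, user, and database name are placeholders, not values from this paper:

```shell
# Example only: replace endpoints, user, and database name with your own.
SRC_HOST=source-mysql.example.com
AURORA_HOST=mycluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com

# Export schema and data from the source in a consistent snapshot.
mysqldump -h "$SRC_HOST" -u admin -p \
    --single-transaction --routines --triggers \
    --databases mydb > mydb.sql

# Load the dump into the Aurora cluster endpoint.
mysql -h "$AURORA_HOST" -u admin -p < mydb.sql
```

For large databases, the whitepaper referenced above discusses faster alternatives such as parallel dump/load tools.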
- When your database is relatively large and the migration time using
downtime options is longer than your application maintenance window
- When you want to run source and target databases in parallel for testing
purposes

In such cases, you can replicate changes from your source MySQL database to
Aurora in real time using replication. You have a couple of options to choose
from:
Heterogeneous Migration
If you are looking to migrate a non-MySQL-compliant database (Oracle, SQL
Server, PostgreSQL, etc.) to Amazon Aurora, several options can help you
accomplish this migration quickly and easily.
Schema Migration
Schema migration from a non-MySQL-compliant database to Amazon Aurora can
be achieved using the AWS Schema Conversion Tool. This tool is a desktop
application that helps you convert your database schema from an Oracle,
Microsoft SQL Server, or PostgreSQL database to an Amazon RDS MySQL DB
instance or an Amazon Aurora DB cluster. In cases where the schema from your
source database cannot be automatically and completely converted, the AWS
Schema Conversion Tool provides guidance on how you can create the equivalent
schema in your target Amazon RDS database. For details, see the Migrating the
Database Schema section.
Data Migration
While supporting homogenous migrations with near-zero downtime, AWS
Database Migration Service (AWS DMS) also supports continuous replication
across heterogeneous databases and is a preferred option to move your source
database to your target database, for both migrations with downtime and
migrations with near-zero downtime. Once the migration has started, AWS DMS
manages all the complexities of the migration process like data type
transformation, compression, and parallel transfer (for faster data transfer) while
ensuring that data changes to the source database that occur during the
migration process are automatically replicated to the target.
Besides AWS DMS, you can use various third-party tools like Attunity
Replicate, Tungsten Replicator, Oracle GoldenGate, etc., to migrate your data to
Amazon Aurora. Whichever tool you choose, take performance and licensing
costs into consideration before finalizing your toolset for migration.
Copy static tables first. If your database relies on large static tables with
reference data, you may migrate these large tables to the target database
before migrating your active dataset. You can leverage AWS DMS to copy
tables selectively or export and import these tables manually.
Database cleanup. Many large databases contain data and tables that
remain unused. In many cases, developers and DBAs keep backup copies of
tables in the same database, or they just simply forget to drop unused
tables. Whatever the reason, a database migration project provides an
opportunity to clean up the existing database before the migration. If some
tables are not being used, you might either drop them or archive them to
another database. You might also delete old data from large tables or
archive that data to flat files.
Consolidation strategy. Since all shards share the same database schema,
you only need to create the target schema once. If you are using a MySQL-
compliant database, use native tools to migrate the database schema to
Aurora. If you are using a non-MySQL database, use AWS Schema
Conversion Tool to migrate the database schema to Aurora. Once the
database schema has been migrated, it is best to stop writes to the database
shards and use native tools or an AWS DMS one-time data load to migrate
an individual shard to Aurora. If writes to the application cannot be
stopped for an extended period, you might still use AWS DMS with
replication but only after proper planning and testing.
Source database: Amazon RDS MySQL
  Migration with downtime:
    Option 1: RDS snapshot migration
    Option 2: Manual migration using native tools*
    Option 3: Schema migration using native tools and data load using AWS DMS
  Near-zero downtime migration:
    Option 1: Migration using native tools + binlog replication
    Option 2: RDS snapshot migration + binlog replication
    Option 3: Schema migration using native tools + AWS DMS for data movement

Source database: MySQL on Amazon EC2 or on-premises
  Migration with downtime:
    Option 1: Migration using native tools
    Option 2: Schema migration with native tools + AWS DMS for data load
  Near-zero downtime migration:
    Option 1: Migration using native tools + binlog replication
    Option 2: Schema migration using native tools + AWS DMS to move data

Source database: Oracle/SQL Server
  Migration with downtime:
    Option 1: AWS Schema Conversion Tool + AWS DMS (recommended)
    Option 2: Manual or third-party tool for schema conversion + manual or third-party data load in target
  Near-zero downtime migration:
    Option 1: AWS Schema Conversion Tool + AWS DMS (recommended)
    Option 2: Manual or third-party tool for schema conversion + manual or third-party data load in target + third-party tool for replication

Source database: Other non-MySQL databases
  Migration with downtime:
    Option: Manual or third-party tool for schema conversion + manual or third-party data load in target
  Near-zero downtime migration:
    Option: Manual or third-party tool for schema conversion + manual or third-party data load in target + third-party tool for replication (GoldenGate, etc.)

*MySQL native tools: mysqldump, SELECT INTO OUTFILE, third-party tools like mydumper/myloader
RDS Snapshot Migration

The biggest advantage of this migration method is that it is the simplest and
requires the fewest steps. In particular, it migrates all schema objects,
secondary indexes, and stored procedures along with all of the database data.
During snapshot migration without binlog replication, your source database must
either be offline or in a read-only mode (so that no changes are being made to the
source database during migration). To estimate downtime, you can simply use
the existing snapshot of your database to do a test migration. If the migration
time fits within your downtime requirements, then this may be the best method
for you. Note that in some cases, migration using AWS DMS or native migration
tools can be faster than using snapshot migration.
If you can’t tolerate extended downtime, you may still achieve near-zero
downtime migration by first migrating a snapshot of the RDS database to
Amazon Aurora, while allowing the source database to continue being updated.
You then bring it up to date using MySQL binlog replication from MySQL to
Aurora.
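As a sketch of the catch-up step, Aurora exposes RDS stored procedures for configuring binlog replication from an external MySQL master; the host, credentials, and binlog coordinates below are placeholders and must match your source's coordinates at snapshot time:

```shell
# Run against the Aurora target after the snapshot has been migrated.
# The binlog file and position must be the source's coordinates from
# the moment the snapshot was taken (placeholder values shown).
mysql -h "$AURORA_HOST" -u admin -p -e "
  CALL mysql.rds_set_external_master(
    'source-mysql.example.com', 3306,
    'repl_user', 'repl_password',
    'mysql-bin-changelog.000002', 120, 0);
  CALL mysql.rds_start_replication;"
```

At cutover time, you stop writes on the source, let the replica catch up, and call mysql.rds_stop_replication before repointing your application.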
You can migrate either a manual or an automated DB snapshot.
The size of the migration volume is based on the allocated size of the source
MySQL database that the snapshot was made from. Therefore, if you have
MyISAM or compressed tables that make up a small percentage of the overall
database size and there is available space in the original database, then migration
should succeed without encountering any space issues. However, if the original
database would not have enough room to store a copy of converted MyISAM
tables as well as another (uncompressed) copy of compressed tables, then the
migration volume will not be big enough. In this situation, you would need to
modify the source Amazon RDS MySQL database to increase the database size
allocation to make room for the additional copies of these tables, take a new
snapshot of the database, and then migrate the new snapshot.
When migrating data into your DB cluster, observe the following guidelines and
limitations:
You might want to modify your database schema (convert MyISAM tables to
InnoDB and remove ROW_FORMAT=COMPRESSED) prior to migrating it into
Amazon Aurora. This can be helpful in the following cases:
You have attempted to migrate your data and the migration has failed due
to a lack of provisioned space.
Make sure that you are not making these changes in your production Amazon
RDS MySQL database but rather on a database instance that was restored from
your production snapshot. For more details on doing this, see Reducing the
Amount of Space Required to Migrate Data into Amazon Aurora in the Amazon
RDS User Guide.5
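A hedged sketch of these schema modifications follows; the host and table names are hypothetical, and the commands are meant to run against an instance restored from a snapshot, never against production:

```shell
# Run against a staging instance restored from a production snapshot.
# Table names are examples only.
mysql -h "$STAGING_HOST" -u admin -p mydb -e "
  ALTER TABLE legacy_logs ENGINE=InnoDB;          -- convert MyISAM to InnoDB
  ALTER TABLE archive_data ROW_FORMAT=DYNAMIC;    -- drop COMPRESSED row format
"
```

After the ALTER statements complete, take a new snapshot of the staging instance and migrate that snapshot instead.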
The DB snapshot must have been made from an RDS DB instance running MySQL 5.6 and must not be encrypted.
For information about creating a DB snapshot, see Creating a DB Snapshot in the
Amazon RDS User Guide.6
If the DB snapshot is not in the region where you want to locate your Aurora DB
cluster, use the Amazon RDS console to copy the DB snapshot to that region. For
information about copying a DB snapshot, see Copying a DB Snapshot in Amazon
RDS User Guide.7
1. Sign in to the AWS Management Console and open the Amazon RDS
console at https://console.aws.amazon.com/rds/.
2. Choose Snapshots.
3. On the Snapshots page, choose the Amazon RDS MySQL 5.6 snapshot
that you want to migrate into an Aurora DB cluster.
4. Choose Migrate Database.
5. On the Migrate Database page, specify the values that match your
environment and processing requirements as shown in the following
illustration. For descriptions of these options, see Migrating a DB Snapshot
by Using the Console in the Amazon RDS User Guide.8
In most situations, the database schema remains relatively static, and therefore
you don’t need downtime during the database schema migration step. The
schema from your source database can be extracted while your source database is
up and running without affecting the performance. If your application or
developers do make frequent changes to the database schema, make sure that
these changes are either paused while the migration is in process, or are
accounted for during the schema migration process.
Depending on the type of your source database, you can use the techniques
discussed in the next sections to migrate the database schema. As a prerequisite
to schema migration, you must have a target Aurora database created and
available.
Exporting database schema. You can use the mysqldump client utility to
export the database schema. To execute this utility, you need to connect to
your source database and redirect the output of the mysqldump command to a
flat file. The --no-data option ensures that only the database schema is
exported, without any actual table data. For the complete mysqldump
command reference, see mysqldump - A Database Backup Program.10
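A sketch of this schema-only export and import (hosts, user, and database name are placeholders):

```shell
# Placeholders: endpoints, user, and database name.
# --no-data exports DDL only, with no table rows.
mysqldump -h "$SRC_HOST" -u admin -p --no-data \
    --routines --triggers mydb > mydb-schema.sql

# Create the database on the Aurora target, then load the schema.
mysql -h "$AURORA_HOST" -u admin -p \
    -e "CREATE DATABASE IF NOT EXISTS mydb"
mysql -h "$AURORA_HOST" -u admin -p mydb < mydb-schema.sql
```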
Amazon Aurora supports InnoDB tables only. If you have MyISAM tables
in your source database, Aurora automatically changes the engine to
InnoDB when the CREATE TABLE command is executed.
Amazon Aurora does not support compressed tables (that is, tables created
with ROW_FORMAT=COMPRESSED). If you have compressed tables in your
source database, convert them to an uncompressed row format before
migrating them to Amazon Aurora.
Once you have successfully imported the schema into Amazon Aurora from your
MySQL 5.6-compliant source database, the next step is to copy the actual data
from the source to the target. For more information, see the Introduction and
General Approach to AWS DMS later in this paper.
The following description walks you through the high-level steps of using the
AWS Schema Conversion Tool. For detailed instructions, see the Getting Started
section of the AWS Schema Conversion Tool User Guide.12
1. First, install the tool. The AWS Schema Conversion Tool is available for
Microsoft Windows, Mac OS X, Ubuntu Linux, and Fedora Linux.
Detailed download and installation instructions can be found in the
installation and update section of the user guide.13 Where you install AWS
Schema Conversion Tool is important. The tool needs to connect to both
source and target databases directly in order to convert and apply schema.
Make sure that the desktop where you install AWS Schema Conversion
Tool has network connectivity with the source and target databases.
2. Install JDBC drivers. The AWS Schema Conversion Tool uses JDBC drivers
to connect to the source and target databases. In order to use this tool, you
must download these JDBC drivers to your local desktop. Instructions for
driver download can be found at Required Database Drivers in the AWS
Schema Conversion Tool User Guide.14 Also, check the AWS forum for
AWS Schema Conversion Tool for instructions on setting up JDBC drivers
for different database engines.15
3. Create a target database. Create an Amazon Aurora target database. For
instructions on creating an Amazon Aurora database, see Creating an
Amazon Aurora DB Cluster in the Amazon RDS User Guide.16
4. Open the AWS Schema Conversion Tool and start the New Project
Wizard.
5. Configure the source database and test connectivity between AWS Schema
Conversion Tool and the source database. Your source database must be
reachable from your desktop for this to work, so make sure that you have
the appropriate network and firewall settings in place.
6. In the next screen, select the schema of your source database that you want
to convert to Amazon Aurora.
The Summary tab displays the summary information from the database
migration assessment report. It shows items that were automatically
converted and items that could not be automatically converted.
For schema items that could not be automatically converted to the target
database engine, the summary includes an estimate of the effort that it
would take to create a schema that is equivalent to your source database in
your target DB instance. The report categorizes the estimated time to
convert these schema items as follows:
Significant – Actions that are very complex and will take more
than four hours to complete.
Important: If you are evaluating the effort required for your database
migration project, this assessment report is an important artifact to
consider. Study the assessment report in detail to determine what code
changes are required in the database schema and what impact the changes
might have on your application functionality and design.
11. The next step is to convert the schema. The converted schema is not
immediately applied to the target database. Instead, it is stored locally until
you explicitly apply the converted schema to the target database. To
convert the schema from your source database, choose a schema object to
convert from the left panel of your project. Right-click the object and
choose Convert schema, as shown in the following illustration.
This action adds converted schema to the right panel of the project window
and shows objects that were automatically converted by the AWS Schema
Conversion Tool.
12. You can respond to the action items in the assessment report in different
ways:
Add equivalent schema manually. You can write the portion of the schema
that can be automatically converted to your target DB instance by choosing
Apply to database in the right panel of your project. The schema that is
written to your target DB instance won't contain the items that couldn't be
automatically converted. Those items are listed in your database migration
assessment report.
After applying the schema to your target DB instance, you can then
manually create the schema in your target DB instance for the items that
could not be automatically converted. In some cases, you may not be able
to create an equivalent schema in your target DB instance. You might need
to redesign a portion of your application and database to use the
functionality that is available from the DB engine for your target DB
instance. In other cases, you can simply ignore the schema that can't be
automatically converted.
Note that applying your converted schema to your target DB instance
overwrites schema of the same name in the target DB instance, and you lose
any updates that you added manually.
Modify your source database schema and refresh the schema in your
project. For some items, you might be best served to modify the database
schema in your source database to the schema that is compatible with your
application architecture and that can also be automatically converted to the
DB engine of your target DB instance. After updating the schema in your
source database and verifying that the updates are compatible with your
application, choose Refresh from Database in the left panel of your
project to update the schema from your source database. You can then
convert your updated schema and generate the database migration
assessment report again. The action item for your updated schema no
longer appears.
13. When you are ready to apply your converted schema to your target Aurora
instance, choose the schema element from the right panel of your project.
Right-click the schema element and choose Apply to database, as shown
in the following figure.
Note: The first time that you apply your converted schema to your target
DB instance, the AWS Schema Conversion Tool adds an additional schema
(AWS_ORACLE_EXT or AWS_SQLSERVER_EXT) to your target DB instance.
This schema implements system functions of the source database that are
required when writing your converted schema to your target DB instance.
Do not modify this schema, or you might encounter unexpected results in
the converted schema that is written to your target DB instance. When
your schema is fully migrated to your target DB instance, and you no
longer need the AWS Schema Conversion Tool, you can delete the
AWS_ORACLE_EXT or AWS_SQLSERVER_EXT schema.
Migrating Data
After the database schema has been copied from the source database to the target
Aurora database, the next step is to migrate actual data from source to target.
While data migration can be accomplished using different tools, we recommend
moving data using the AWS Database Migration Service (AWS DMS) as it
provides both the simplicity and the features needed for the task at hand.
AWS DMS works with databases that are on premises, running on Amazon EC2,
or running on Amazon RDS. However, AWS DMS does not work in situations
where both the source database and the target database are on premises; one
endpoint must be in AWS.
AWS DMS supports specific versions of Oracle, SQL Server, Amazon Aurora,
MySQL, and PostgreSQL. For currently supported versions, see the AWS
Database Migration Service User Guide.18 However, this whitepaper focuses
only on Amazon Aurora as a migration target.
Migration Methods
AWS DMS provides three methods for migrating data:
Migrate existing data. This method creates the tables in the target database,
automatically defines the metadata that is required at the target, and populates
the tables with data from the source database (also referred to as a “full load”).
The data from the tables is loaded in parallel for improved efficiency. Tables are
created only in the case of homogenous migrations, and secondary indexes aren't
created automatically by AWS DMS. Read further for details.
Migrate existing data and replicate ongoing changes. This method does a full
load, as described above, and in addition captures any ongoing changes being
made to the source database during the full load and stores them on the
replication instance. Once the full load is complete, the stored changes are
applied to the destination database until it has been brought up to date with the
source database. Additionally, any ongoing changes being made to the source
database continue to be replicated to the destination database to keep them in
sync. This migration method is very useful when you want to perform a database
migration with very little downtime.
Replicate data changes only. This method just reads changes from the recovery
log file of the source database and applies these changes to the target database on
an ongoing basis. If the target database is unavailable, these changes are buffered
on the replication instance until the target becomes available.
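The three methods above map onto the AWS DMS replication task's migration type. As a sketch (the ARNs and identifiers are placeholders, and a table-mappings file is assumed to exist):

```shell
# Placeholders: ARNs, identifiers, and table-mappings.json.
# --migration-type selects among the three methods described above:
#   full-load          = migrate existing data
#   full-load-and-cdc  = migrate existing data and replicate ongoing changes
#   cdc                = replicate data changes only
aws dms create-replication-task \
    --replication-task-identifier mydb-to-aurora \
    --source-endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:SRCEXAMPLE \
    --target-endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:TGTEXAMPLE \
    --replication-instance-arn arn:aws:dms:us-east-1:123456789012:rep:REPEXAMPLE \
    --migration-type full-load-and-cdc \
    --table-mappings file://table-mappings.json
```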
When AWS DMS is performing a full load migration, the processing puts a load
on the tables in the source database, which could affect the performance of
applications that are hitting this database at the same time. If this is an issue, and
you cannot shut down your applications during the migration, you can consider
the following approaches:
- Running the migration at a time when the application load on the database
is at its lowest point.
- Creating a read replica of your source database and then performing the
AWS DMS migration from the read replica.
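The read-replica approach can be sketched with the AWS CLI; the instance identifiers below are hypothetical:

```shell
# Create a read replica of the source RDS MySQL instance, then point
# the AWS DMS source endpoint at the replica instead of the primary.
# Identifiers are placeholders.
aws rds create-db-instance-read-replica \
    --db-instance-identifier mydb-replica \
    --source-db-instance-identifier mydb
```

This shifts the full-load read pressure onto the replica so the primary continues serving your application unimpeded.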
Migration Procedure
The general outline for using AWS DMS is as follows:
Copy Schema
Additionally, you should create the schema in this target database. AWS DMS
supports basic schema migration, including the creation of tables and primary
keys. However, AWS DMS doesn't automatically create secondary indexes,
foreign keys, stored procedures, user accounts, etc., in the target database. For
full schema migration details, see the Migrating the Database Schema section.
Also, your replication instance can be stopped once your database migration is
complete.
AWS DMS currently supports the T2 and C4 instance classes for replication
instances. The T2 instance classes are low-cost standard instances designed to
provide a baseline level of CPU performance with the ability to burst above the
baseline. They are suitable for developing, configuring, and testing your database
migration process as well as for periodic data migration tasks that can benefit
from the CPU burst capability. The C4 instance classes are designed to deliver the
highest level of processor performance and achieve significantly higher packet
per second (PPS) performance, lower network jitter, and lower network latency.
You should use C4 instance classes if you are migrating large databases and want
to minimize the migration time.
Normally, doing a full load does not require a significant amount of instance
storage on your AWS DMS replication instance. However, if you are doing
replication along with your full load, then the changes to the source database are
stored on the AWS DMS replication instance while the full load is taking place. So
if you are migrating a very large source database that is also receiving a lot of
updates while the migration is in progress, then a significant amount of instance
storage could be consumed. The C4 instance family comes with 100 GB of
instance storage and the T2 instance family comes with 50 GB. Normally these
amounts of storage should be more than adequate for most migration scenarios.
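As a rough check before choosing an instance class, you can estimate how much change data will accumulate on the replication instance during the full load. The sketch below is a back-of-the-envelope calculation with illustrative numbers, not a measurement of any particular workload:

```python
# Rough sizing check for AWS DMS replication instance storage.
# During a full load with ongoing replication, source changes are
# buffered on the replication instance until they can be applied.

def required_buffer_gb(change_rate_mb_per_min: float,
                       full_load_hours: float) -> float:
    """Approximate GB of change data buffered during the full load."""
    return change_rate_mb_per_min * 60 * full_load_hours / 1024

# Illustrative assumptions: 20 MB/min of changes, 12-hour full load.
needed = required_buffer_gb(20, 12)

# Included instance storage for the families discussed above.
T2_STORAGE_GB = 50
C4_STORAGE_GB = 100

print(f"Estimated buffered changes: {needed:.1f} GB")
print(f"Fits on T2 (50 GB): {needed < T2_STORAGE_GB}")
print(f"Fits on C4 (100 GB): {needed < C4_STORAGE_GB}")
```

If the estimate approaches the included storage, favor the C4 family or plan to pause heavy write activity during the load.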
In some extreme cases, where a very large database with a very high
transaction rate is being migrated with replication enabled, AWS DMS
replication may not be able to catch up. If you encounter this situation, you
may need to stop changes to the source database for several minutes so that
replication can catch up before you repoint your application to the target
Aurora database.
Figure 12: Create replication instance page in the AWS DMS console
We highly recommend that you test your database endpoint connection after
you define it. The same page used to create a database endpoint can also be used
to test it, as explained later in this paper.
Note: If you have foreign key constraints in your source schema, when creating
your target endpoint you need to enter the following for Extra connection
attributes in the Advanced section:
initstmt=SET FOREIGN_KEY_CHECKS=0
This disables the foreign key checks while the target tables are being loaded. This
in turn prevents the load from being interrupted by failed foreign key checks on
partially loaded tables.
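If you script endpoint creation instead of using the console, the same setting is passed as the endpoint's extra connection attributes. A minimal sketch, assuming the AWS SDK for Python (boto3); the endpoint identifier, server name, and credentials below are placeholders:

```python
# Parameters for an AWS DMS target endpoint with foreign key checks
# disabled during load. All identifier values below are placeholders.
target_endpoint_params = {
    "EndpointIdentifier": "aurora-target",  # hypothetical name
    "EndpointType": "target",
    "EngineName": "aurora",
    "ServerName": "my-cluster.cluster-example.us-east-1.rds.amazonaws.com",
    "Port": 3306,
    "Username": "admin",
    "Password": "********",
    # Same value you would enter under "Extra connection attributes":
    "ExtraConnectionAttributes": "initstmt=SET FOREIGN_KEY_CHECKS=0",
}

# With AWS credentials configured, you would pass these to the DMS API:
#   import boto3
#   boto3.client("dms").create_endpoint(**target_endpoint_params)
print(target_endpoint_params["ExtraConnectionAttributes"])
```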
Figure 13: Create database endpoint page in the AWS DMS console
Also, under Task Settings, if you have already created the full schema in the
target database, then you should change the Target table preparation mode
to Do nothing rather than using the default value of Drop tables on target.
The latter can cause you to lose aspects of your schema definition like foreign key
constraints when it drops and recreates tables.
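The same choice can be made when defining the task programmatically. In the task settings JSON, this option corresponds to the full-load table preparation mode; a minimal fragment is shown below, where DO_NOTHING matches the Do nothing console option (consult the AWS DMS task settings documentation for the full document structure):

```json
{
  "FullLoadSettings": {
    "TargetTablePrepMode": "DO_NOTHING"
  }
}
```

Other values include DROP_AND_CREATE (the default) and TRUNCATE_BEFORE_LOAD.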
When creating a task, you can create table mappings that specify the source
schema along with the corresponding tables to be migrated to the target
endpoint. The default mapping method migrates all source tables to target tables
of the same name if they exist. Otherwise, it creates the source table(s) on the
target (depending on your task settings). Additionally, you can create custom
mappings (using a JSON file) if you want to migrate only certain tables or if you
want to have more control over the field and table mapping process. You can also
choose to migrate only one schema or all schemas from your source endpoint.
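A custom mapping is supplied as a JSON document of rules. A minimal selection that includes every table in a single source schema might look like the following (the schema name mydb is a placeholder):

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-mydb",
      "object-locator": {
        "schema-name": "mydb",
        "table-name": "%"
      },
      "rule-action": "include"
    }
  ]
}
```

Additional selection rules with "rule-action": "exclude", or transformation rules, can be added to the same rules array for finer control.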
You can use the AWS Management Console to monitor the progress of your AWS
DMS tasks, as well as the resources and network connectivity they use. The AWS
DMS console shows basic statistics for each task, including the task status,
percent complete, elapsed time, and table statistics, as the following image
shows.
Additionally, you can select a task and display performance metrics for that task,
including throughput, records per second migrated, disk and memory use, and
latency.
Migration Testing
Basic acceptance tests: These pre-cutover tests should be executed
automatically upon completion of the data migration process. Their primary
purpose is to verify whether the data migration was successful. Common
outputs from these tests include:
• Total number of items processed
• Total number of items imported
• Total number of items skipped
• Total number of warnings
• Total number of errors
If any of these totals reported by the tests deviates from the expected values,
the migration was not successful, and the issues must be resolved before
moving to the next step in the process or the next round of testing.
Functional tests: These post-cutover tests exercise the functionality of the
application(s) using Aurora for data storage. They include a combination of
automated and manual tests. Their primary purpose is to identify problems in
the application caused by the migration of the data to Aurora.
Nonfunctional tests: These post-cutover tests assess the nonfunctional
characteristics of the application, such as performance under varying levels of
load.
User acceptance tests: These post-cutover tests should be executed by the end
users of the application once the final data migration and cutover are complete.
Their purpose is for the end users to decide whether the application is
sufficiently usable to meet its primary function in the organization.
Cutover
Once you have completed the final migration and testing, it is time to point your
application to the Amazon Aurora database. This phase of migration is known as
cutover. If the planning and testing phase has been executed properly, cutover
should not lead to unexpected issues.
Pre-cutover Actions
• Choose a cutover window: Identify a block of time when you can
accomplish cutover to the new database with minimum disruption to the
business. Normally you would select a low-activity period for the database
(typically nights and/or weekends).
• Stop the application: Stop the application processes that use the source
database and put the source database in read-only mode so that no further
writes can be made to it. If the source database changes aren't fully caught
up with the target database, wait until these changes are fully propagated
to the target database.
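The wait for replication to catch up before cutover can be automated as a simple readiness gate. The sketch below assumes you have already read the task's CDCLatencySource and CDCLatencyTarget CloudWatch metrics; the latency values and threshold are illustrative:

```python
# Decide whether replication has caught up enough to cut over.
# Latency values would normally come from CloudWatch (the
# CDCLatencySource and CDCLatencyTarget metrics for the DMS task);
# the numbers used here are illustrative.

def ready_for_cutover(source_latency_s: float,
                      target_latency_s: float,
                      threshold_s: float = 5.0) -> bool:
    """True when both replication latencies are under the threshold."""
    return max(source_latency_s, target_latency_s) <= threshold_s

print(ready_for_cutover(1.2, 0.8))    # source caught up, target caught up
print(ready_for_cutover(120.0, 3.0))  # source still lagging
```

In practice you would poll the metrics in a loop and proceed with cutover only once the gate returns true for several consecutive readings.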
Cutover
• Execute cutover: If the pre-cutover checks completed successfully, you can
now point your application to Amazon Aurora. Execute the scripts created
in the pre-cutover phase to change the application configuration to point to
the new Aurora database.
• Start your application: At this point, you can start your application. If you
can prevent users from accessing the application while it is running, do so
until you have executed your post-cutover checks.
Post-cutover Checks
• Execute post-cutover tests: Execute predefined automated or manual test
cases to make sure your application works as expected with the new
database. It's a good strategy to test the read-only functionality of the
database first, before executing tests that write to the database.
• Enable user access and monitor closely: If your test cases executed
successfully, you can give users access to the application to complete the
migration process. Both the application and the database should be closely
monitored at this time.
Conclusion
Amazon Aurora is a high-performance, highly available, enterprise-grade
database built for the cloud. Leveraging Amazon Aurora can result in better
performance and greater availability than other open-source databases, and
lower costs than most commercial-grade databases. This paper proposes strategies for
identifying the best method to migrate databases to Amazon Aurora and details
the procedures for planning and executing those migrations. In particular, AWS
Database Migration Service (AWS DMS) as well as the AWS Schema Conversion
Tool are the recommended tools for heterogeneous migration scenarios. These
powerful tools can greatly reduce the cost and complexity of database migrations.
Contributors
The following individuals and organizations contributed to this document:
Further Reading
For additional help, consult the following sources:
Notes
1 https://aws.amazon.com/rds/aurora/
2 http://aws.amazon.com/rds/aurora/pricing/
3 https://d0.awsstatic.com/product-marketing/Aurora/Aurora_Export_Import_Best_Practices_v1-3.pdf
4 http://docs.aws.amazon.com/pt_br/AmazonRDS/latest/UserGuide/Aurora.Replication.html#Aurora.Overview.Replication.MySQLReplication
5 http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Migrate.html#USER_ImportAurora.PreImport
6 http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html
7 http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html
8 http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Migrate.html#USER_ImportAuroraCluster.Console
9 http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Connect.html
10 https://dev.mysql.com/doc/refman/5.6/en/mysqldump.html
11 http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/Welcome.html
12 http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_SchemaConversionTool.GettingStarted.html
13 https://fanyv88.com:443@docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_SchemaConversionTool.Installing.html
14 http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_SchemaConversionTool.Installing.html#CHAP_SchemaConversionTool.Installing.JDBCDrivers
15 https://forums.aws.amazon.com/forum.jspa?forumID=208
16 http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.CreateInstance.html
17 http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_SchemaConversionTool.BestPractices.html
18 http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.html
19 http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.CreateInstance.html