Migrating to Amazon RDS
via XtraBackup
Santa Clara, California | April 23rd – 25th, 2018
Who are we?
● Agustín Gallego
○ Support Engineer - Percona
● Alexander Rubin
○ Principal Consultant - Percona
Agenda
● XtraBackup
● Amazon Relational Database Service (RDS)
● Pros and Cons of Migrating to RDS
● Migrating to RDS via XtraBackup
● Limitations
XtraBackup
Introduction to XtraBackup
● XtraBackup is a free and open source hot backup tool for MySQL
○ Percona Server and MariaDB are also supported
● Implements the functionality of MySQL Enterprise Backup (formerly InnoDB Hot Backup)
○ and more!
Introduction to XtraBackup
● Supports hot (lockless) backups for InnoDB and XtraDB
● Locking backups for MyISAM
● Packaged for Linux operating systems only
○ DEB and RPM packages available
○ Generic Linux tarball and source code
● Main features:
○ Incremental and compressed backups
○ Backup locks (as an alternative to FTWRL)
○ Encrypted backups
○ Streaming
○ Ability to export individual tables and partitions
How does it work?
● There are three main phases
● Backup
○ Copies all files needed
● Apply logs
○ Performs crash recovery, to leave the data files in a consistent state
● Copy back
○ Moves the files to their final destination
● Sufficient permissions at both the OS and MySQL levels are required
○ we'll use root accounts here, but the documentation covers this in more depth
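● A minimal sketch of the last two phases with the xtrabackup binary (2.4 syntax), using paths from the examples that follow; note that --copy-back expects an empty datadir:
shell> xtrabackup \
--prepare \
--target-dir=/backups/full/
shell> xtrabackup \
--defaults-file=/etc/my.cnf \
--copy-back \
--target-dir=/backups/full/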
Full backup - example
shell> cd /var/lib/mysql/
shell> du -h --max-depth=1 .
636K ./performance_schema
1.8M ./mysql
31G ./test
32G .
shell> xtrabackup \
--defaults-file=/etc/my.cnf \
--backup \
--target-dir=/backups/full/
Compressed backup - example #1
shell> cd /var/lib/mysql/
shell> du -h --max-depth=1 .
636K ./performance_schema
1.8M ./mysql
31G ./test
32G .
shell> xtrabackup \
--defaults-file=/etc/my.cnf \
--backup \
--compress \
--target-dir=/backups/compressed/
Compressed backup - example #2
shell> cd /var/lib/mysql/
shell> du -h --max-depth=1 .
636K ./performance_schema
1.8M ./mysql
31G ./test
32G .
shell> xtrabackup \
--defaults-file=/etc/my.cnf \
--backup \
--compress \
--stream=xbstream \
> /backups/compressed/backup.xbstream
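● To restore from the stream, extract and then decompress (a sketch; --decompress requires the qpress utility to be installed):
shell> xbstream -x \
-C /backups/restore/ \
< /backups/compressed/backup.xbstream
shell> xtrabackup \
--decompress \
--target-dir=/backups/restore/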
Incremental backup - example
shell> cd /var/lib/mysql/
shell> du -h --max-depth=1 .
636K ./performance_schema
1.8M ./mysql
37G ./test
38G .
shell> xtrabackup \
--defaults-file=/etc/my.cnf \
--backup \
--incremental-basedir=/backups/full/ \
--target-dir=/backups/incremental/
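● To restore, the incremental is applied on top of the base backup during the prepare phase (a sketch; --apply-log-only keeps the base ready to receive further increments):
shell> xtrabackup \
--prepare --apply-log-only \
--target-dir=/backups/full/
shell> xtrabackup \
--prepare \
--target-dir=/backups/full/ \
--incremental-dir=/backups/incremental/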
Parallel compressed backup - example
shell> xtrabackup \
--defaults-file=/etc/my.cnf \
--backup \
--parallel=8 \
--compress \
--compress-threads=8 \
--stream=xbstream \
> /backups/compressed_parallel_backup.xbstream
Differences between them
● Full
○ took: 9 min 20 sec
○ resulting size: 32 GB
● Compressed
○ took: 7 min 40 sec
○ resulting size: 8.4 GB (original 32 GB)
● Incremental
○ took: 5 min 40 sec
○ resulting size: 6.2 GB
● Parallel + compressed
○ took: 3 min 40 sec
○ resulting size: 8.4 GB (original 32 GB)
Yet another compressed backup example
shell> xtrabackup \
--defaults-file=/etc/my.cnf \
--backup \
--stream=tar \
| gzip -c \
| split -d --bytes=10GB - /backups/compressed/compressed_backup.tar.gz
● `split` will come in handy later, when we discuss limitations
● `gzip` is also very slow, since it compresses on a single CPU core
○ use pigz instead of gzip (see the sketch below)
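● For example, the same pipeline with pigz, and reassembling the pieces later (a sketch; note that tar needs -i to read XtraBackup's tar stream):
shell> xtrabackup \
--defaults-file=/etc/my.cnf \
--backup \
--stream=tar \
| pigz -c \
| split -d --bytes=10GB - /backups/compressed/compressed_backup.tar.gz
shell> cat /backups/compressed/compressed_backup.tar.gz* \
| pigz -dc | tar -xif -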
Yet another compressed backup example
[Charts comparing the gzip and pigz backup runs]
Logical vs Physical Backups
● Logical backup: mysqldump, a text file with the commands needed to restore
● Physical backup: a copy of the data files themselves
● Main difference: restore time
○ Much faster with a physical backup, i.e. xtrabackup
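● For comparison, a minimal logical backup and restore with mysqldump (a sketch; --single-transaction gives a consistent snapshot for InnoDB):
shell> mysqldump \
--single-transaction \
--all-databases > /backups/dump.sql
shell> mysql < /backups/dump.sql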
Amazon Relational Database
Service (RDS)
What is RDS / Aurora?
● Web service designed to make it easy to:
○ setup
○ operate
○ scale
● Features:
○ rapid provisioning
○ scalable resources
○ high availability
○ automatic admin tasks
AWS Aurora Features
● Storage Auto-Scaling (up to 64 TB)
● Replication:
○ Amazon Aurora replicas share the same underlying volume as the primary instance
■ up to 15 replicas
○ MySQL based replicas
● Scalability for reads:
○ can autoscale and add more read replicas
● High Availability
○ Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)
○ Automatically attempts to recover from failures
Read Scaling with RDS / Aurora
● RDS MySQL: same as MySQL
○ adding MySQL replication slaves
● Aurora MySQL:
○ Read replicas (Aurora-specific) - not based on MySQL replication
○ MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
Pros of Migrating to RDS
● Easy to manage
○ Minor upgrades automatically handled
○ Backups automatically handled
○ Less DBA work
○ Fewer things to worry about (OS config, replication setup, etc.)
RDS Aurora for MySQL
● Aurora / MySQL: additional features
○ Low latency read replicas
○ Load balancer for reads built-in
○ Instant add column, faster GIS, etc
● Aurora / MySQL: in preview
○ Aurora Multi-Master adds the ability to scale out write performance across multiple Availability Zones
○ Aurora Serverless automatically scales database capacity up and down to match your application's needs
○ Amazon Aurora Parallel Query improves the performance of large analytic queries by pushing processing down to the Aurora storage layer, spreading it across hundreds of nodes
Cons of Migrating to RDS
● Instance type limits
○ Instances offer up to 32 vCPUs and 244 GiB Memory
● Less control over the server
● More expensive than running MySQL yourself on EC2 (can be 3x the cost)
● Aurora / MySQL - single-threaded workloads can be much slower
Migrating to RDS via XtraBackup
Announcement
https://aws.amazon.com/about-aws/whats-new/2017/11/easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup/
General Steps to Migrate
● Take XtraBackup backup from instance
● Upload to S3 bucket
● Create new RDS instance using the backup
○ The IAM role used must have access to the S3 bucket
General Steps to Migrate
● Take XtraBackup backup from instance
● Upload to S3 bucket
● Create new RDS instance using the backup
● It is also possible to take the backup and upload it in one step!
○ but if the stream fails, you will have to restart from scratch
shell> xtrabackup \
--defaults-file=/etc/my.cnf \
--backup --stream=tar \
| aws s3 cp - s3://rds.migration.perconalive18/backup.tar
Taking the Backup
● Use XtraBackup 2.3
○ latest patch version, if possible
● The innobackupex script is deprecated
● Choose timing wisely
○ even though it is a hot backup tool, it still locks for a short time
● Use the options from the documentation
○ parallel
○ compressed
○ compress-threads
Taking the Backup
shell> xtrabackup \
--defaults-file=/etc/my.cnf \
--backup \
--target-dir=/backups/full/2018_04_23/
<...output trimmed...>
180423 16:17:25 [00] ...done
xtrabackup: Transaction log of lsn (181862656397) to (181862656397) was
copied.
180423 16:17:25 completed OK!
Uploading to S3
shell> time aws s3 cp /backups/full/2018_04_23/ \
s3://rds.migration.perconalive18/full/2018_04_23/ \
--recursive
<...output trimmed...>
upload: backups/full/2018_04_23/xtrabackup_checkpoints to s3://
rds.migration.perconalive18/full/2018_04_23/xtrabackup_checkpoints
upload: backups/full/2018_04_23/xtrabackup_info to s3://
rds.migration.perconalive18/full/2018_04_23/xtrabackup_info
upload: backups/full/2018_04_23/xtrabackup_logfile to s3://
rds.migration.perconalive18/full/2018_04_23/xtrabackup_logfile
real 6m15.868s
...
Uploading to S3
● Can also be done via web GUI
Uploading backups to S3
● Full
○ 6 min 38 sec
○ 32 GB
● Incremental
○ 1 min 32 sec
○ 6.2 GB
● Compressed
○ 1 min 50 sec
○ 8.4 GB
Creating the new RDS instance
[Series of AWS console screenshots walking through creating the instance from the S3 backup]
How much time will it take to restore?
[Screenshots showing restore timings in the AWS console]
Using an incremental backup
[Screenshots: restoring from the incremental backup in the AWS console]
● The S3 folder path prefix is left empty, because we will use all of the bucket's contents
Using a compressed backup
[Screenshot: restoring from the compressed backup in the AWS console]
Using a split backup
● Use a command like the following (seen earlier, in "Yet another compressed backup example")
● Upload all of the generated files to one folder
● Use that folder as the "S3 folder path prefix"
shell> xtrabackup \
--defaults-file=/etc/my.cnf \
--backup \
--stream=tar \
| gzip -c \
| split -d --bytes=10GB - /backups/compressed/compressed_backup.tar.gz
Using the aws CLI command
shell> aws rds restore-db-instance-from-s3 \
--db-instance-identifier rdsmigrationpl18cli \
--db-instance-class db.t2.large \
--engine mysql \
--source-engine mysql \
--source-engine-version 5.6.39 \
--s3-bucket-name rds.migration.perconalive18 \
--s3-ingestion-role-arn "arn:aws:iam::123456789012:role/rolename" \
--allocated-storage 100 \
--master-username rdspl18usercli \
--master-user-password rdspl18usercli \
--s3-prefix compressed_split_backup
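● The restore can then be monitored from the CLI as well (a sketch, using the same instance identifier as above):
shell> aws rds describe-db-instances \
--db-instance-identifier rdsmigrationpl18cli \
--query 'DBInstances[0].DBInstanceStatus'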
Limitations
Limitations
● Only Percona's XtraBackup is supported
○ it may work with forks, but...
● Source databases should all be contained within the datadir
● Only MySQL 5.6 source versions are allowed
● There is a 6 TB size limit
● Encryption is only partially supported
○ only restoring to an encrypted RDS instance is allowed
○ neither the source backup nor the S3 bucket can be encrypted
● The S3 bucket has to be in the same region as the target instance
Limitations
● Importing to a db.t2.micro instance class is not supported
○ it can be changed later
● S3 limits file size to 5 TB
○ the backup can be split into smaller files
○ files are applied in alphabetical and natural-number order
● RDS limits the number of files in the S3 bucket to 1 million
○ multiple files can be bundled into .tar.gz archives
● The following are not imported automatically
○ Users
○ Functions
○ Stored Procedures
○ Time zone information
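● One way to carry the first three over afterwards (a sketch using pt-show-grants from Percona Toolkit; the RDS endpoint is a placeholder, and the grants file should be reviewed first since RDS restricts privileges like SUPER):
shell> pt-show-grants > /backups/grants.sql
shell> mysqldump \
--no-data --no-create-info --skip-triggers --routines \
--databases test > /backups/routines.sql
shell> mysql -h <rds-endpoint> -u rdspl18usercli -p < /backups/grants.sql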
Limitations
● Migrating to previous versions is not supported
● Partial restores are not supported
● Import is only available for new DB instances
● No partial backups supported
○ --databases / --tables / --databases-file / --tables-file
● Corruption on the source server, if any, is not detected, since this is a physical copy
Questions?
Rate Our Session
Thank You!