Backing up Linux and other Unix(-like) systems

www.halfgaar.net /backing-up-unix

There are two kinds of people: those who do regular backups and those who never had a hard drive failure — Unknown.

1. Introduction
The topic of doing backups of a (live) Un*x (mostly Linux) system regularly comes up on Linux mailing lists and forums and invariably the advice to
simply do "tar cvfz backup.tgz /bin /boot /etc ... " is given. Unfortunately, a good backup takes more effort than that. In this article I
will outline a great deal (but not necessarily all) of the pitfalls and details you will have to be watchful of when making backups.

Note that this is not an application how-to, so you should not use the given examples verbatim, nor does it give an exhaustive list of backup
programs and examples. It also doesn't give step-by-step instructions. It is meant to create awareness for people who already have a general
understanding of Un*x systems. Reading all of the documentation of the tool itself is and remains important, for it may make you think of things you
wouldn't otherwise have considered.

Also note that this article mostly describes the process of making backups to an external device or location. If data protection is important to you, I
also highly recommend using RAID. While RAID offers no protection against fires, earthquakes, data corruption or humans, it does offer protection
against failing disks. It has saved me more than once. Additionally, I'd advise you to consider using a UPS.

Although my personal experience is limited to Linux, the issues I'll discuss should apply to all or most Un*x systems as well.

2. A backup is more than data


A proper backup contains far more than just your data. It also contains the data about your data: the meta data. It will also contain all the specific file system attributes and special devices needed to make your operating system work. It is vital that the target medium and backup software support all of these. As an extreme example, you shouldn't back up an Ext3 file system (the standard file system on Linux machines) onto FAT32/FAT16 (ancient Microsoft file systems, still used on USB sticks and similar devices, even though devices like USB sticks can of course be formatted with any file system of your choice). This chapter discusses this meta data and these special files.

2.1. File meta data

On an Ext3 partition, the meta data of a file consists of: the file modification time (mtime), the inode change time (ctime), the last access time (atime), user and group IDs, and permissions. When you have extended attributes, this can be a whole lot more, most notably Access Control List information. You need to back up as much of this as you can. Obviously, when you don't store and restore proper permissions, you can end up with a buggered installation. This is even true for something as simple as the mtime. The Gentoo Linux distribution, for example, uses mtimes to determine if files belong to the installation of a certain package, or if they have been replaced with something new. If you don't restore proper mtimes, the package management will be completely shot.

What you should do to include all this information depends very much on the software used. You need to make sure that you're backing it up and restoring it. When using tar, this only goes right by default when you're root (although I haven't tested how it deals with extended attributes and ACLs).

Ownership information can be stored in two ways: numerically or textually. A lot of backup programs find it user-friendly to use textual names, but for making backups of an entire system, this is very undesirable. It's very likely that you will restore the backup using some kind of live CD, while the original backup was made on the system itself. On restoring the backup, files belonging to user bin will be given the ID found in the /etc/passwd file of the live CD. If this ID is 2, for example, but ID 2 is user daemon on the system you are restoring, all the files that used to belong to bin now belong to daemon. Therefore, always store owner information numerically. Tar has the --numeric-owner option for that. Rdiff-backup has the --preserve-numerical-ids option, added in version 1.1.0 per my request. Dar will never support textual names; I discussed the issue with the author, and he agreed with my reasoning.
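
As a hedged illustration of the tar case (the archive name and paths below are placeholders, not taken from the article):

# create the archive as root, storing numeric UIDs/GIDs instead of names
tar --numeric-owner -cpzf /mnt/backup/root.tgz -C / .

# restore from a live CD, again forcing numeric IDs
tar --numeric-owner -xpzf /mnt/backup/root.tgz -C /mnt/restore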

Certain backup programs have the ability to set back atimes after files are read during a backup (dar and tar, for example). The purpose of this is to leave everything behind exactly as it was. One should be very careful with this behavior, because setting back the atime changes the ctime. There is no way around this, because ctimes cannot be set artificially. According to the dar man page, the Leafnode NNTP caching software relies on the preservation of atimes, but normally the necessity for setting back the atime is very rare. I would like to add that, in my opinion, any program which relies on the preservation of atimes is flawed. Atimes can change very arbitrarily, even through users who have no write permission on the file. Also, automatic indexing software like Beagle can cause atimes to change. Furthermore, a change in ctime can trip certain security software. As I said, ctimes cannot be set artificially, meaning that if a file has a new ctime but an identical mtime since it was last checked, it has possibly been replaced by a different file, usually one that is part of a rootkit. Therefore, don't preserve atimes unless you know what you are doing. Dar preserves the atime by default. This behavior has been changed in the CVS repository, and is likely to be released in version 2.4.0. Until the default has changed, use the --alter=atime option.
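
To see the three timestamps this is about, you can inspect a file with GNU stat; a minimal sketch (the demo file is just an example):

touch /tmp/atime-demo
stat -c 'mtime: %y  atime: %x  ctime: %z' /tmp/atime-demo

# set the atime back; the atime changes, but notice that the ctime changes too,
# which is exactly the effect described above
touch -a -d '2005-01-01 00:00' /tmp/atime-demo
stat -c 'mtime: %y  atime: %x  ctime: %z' /tmp/atime-demo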

2.2. Special files

2.2.1. Links

Links come in two forms: symbolic links and hard links. A symbolic link is simply a reference to another path. A hard link is an additional reference to
an inode.

For preserving symbolic links, all you have to do is make sure the backup application stores the link information instead of the file it links to. This is not always the default, so be careful.

Hard links require a bit more attention. As I said, a hard link is basically a second (or third, or fourth...) name for a file. When you have a file A and hard link it to B, you have what appears to be two files. If these two files are 1 GB in size, they will only consume 1 GB of space, even though applications may claim they take up 2 GB. Because the file B is not just a link to A, but a second name for it, you can safely delete A. The file B will still exist after the deletion of A.

Most backup applications have support for hard links, but only when the linked files are all in the same source tree. This means that if you copy /bin, /etc, /usr... with separate cp -a commands, hard link information is not detected and copied. Because hard links cannot span across file systems, supplying one backup and restore command per partition will work fine. For example, if you have your /home on a separate partition, you could make a separate archive for / with /home excluded, and another archive of /home alone. If you choose to make one archive with all mount points included, you may have to take special actions to make sure that upon restoration, the data is restored to the proper partitions. If the program in question doesn't complain about existing directories, creating the mount points in the new file system with the same names as before should do it. Otherwise, restoring to one partition first and copying parts to another partition with cp -a later will most likely work for you. Don't use mv to move the data; you can imagine what will happen if the command fails half-way...

Linux, and all Unix machines, use hard links extensively, so make 100% sure you maintain link integrity. Rsync, for instance, needs the special flag --hard-links, even when you've also specified --archive (as the man page says, --archive still lacks --hard-links, --acls and --xattrs).
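
As a sketch of the one-command-per-partition approach mentioned above (archive names, mount points and options are my assumptions, not taken from the article):

# one archive per file system, so hard links never have to span partitions;
# --one-file-system keeps tar from descending into other mounts
tar --numeric-owner --one-file-system -cpzf /mnt/backup/root.tgz -C / .
tar --numeric-owner --one-file-system -cpzf /mnt/backup/home.tgz -C /home .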

2.2.2. Sparse files

A sparse file is a file in which blocks of zeros aren't actually stored on disk, but are simply left unallocated. Therefore, it's possible that a 1 GB file with a lot of empty space takes up only 1 MB, for example. A program which uses sparse files is Azureus, a BitTorrent download client.

Support for sparse files varies widely in backup software. When you use a program which doesn't support sparse files, the file is read in the regular
way. The data of the file remains the same, but it can take up a lot more space. You therefore have to be careful, because it's possible a backup
won't fit on the disk anymore when you restore it, since the sparse files are created as normal ones.

For BitTorrent download files, it's not really an issue that they're restored as normal files, because they will be filled up with data as the download progresses anyway. But if you have a lot of sparse files which should remain sparse, selecting a backup program that supports sparse files is essential. Note, however, that when a file is determined to be sparse, the copy is not sparse in the exact same way and places as the original, because this information can't be retrieved. Instead, it's created as a new sparse file, where unallocated space is used as the backup tool sees fit. This shouldn't be a problem, however; I can't think of a situation where this matters.
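
For example, GNU tar and rsync both have a flag for this; a minimal sketch (paths are placeholders):

# tar: detect sparse files and recreate them as sparse on extraction
tar --sparse --numeric-owner -cpzf /mnt/backup/images.tgz -C /var/lib images

# rsync: leave holes on the destination instead of writing out the zeros
rsync --archive --sparse /var/lib/images/ /mnt/backup/images/

# compare the apparent size with the actually allocated space
ls -lh /var/lib/images/disk.img ; du -h /var/lib/images/disk.img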

2.2.3. Others

There are some other special files, like FIFOs (named pipes), block devices, character devices, etc. These are pretty unremarkable, and most applications know how to deal with them. However, you do have to supply the correct option(s). Using cp without -a on a named pipe, for example, will try to copy the data coming out of the pipe instead of recreating the pipe.

There is also a special kind of directory: lost+found (used on Ext2/3/4). This is not an ordinary directory, because it has disk space preallocated for recovery purposes, and it should not be made with mkdir. Instead, use mklost+found. In case you were wondering, lost+found is used to store files "recovered" with e2fsck when the file system is damaged.

3. What to exclude
To save space on your backup medium, you can choose not to back up certain locations. For my Gentoo Linux system, these include /usr/portage/
and /var/tmp/portage.

There are also special file systems mounted within the root file system, which are created dynamically upon boot, and shouldn't be backed up. For my system, these are /sys, /proc, /lost+found, /media (which contains only dynamically created directories for removable media) and /dev (because I use udev). I also exclude /mnt, but the need to back up /mnt can vary from system to system.

4. Application data
When making a backup of a live system, you have to be mindful of programs which can change their data files during the backup. A good example is a database, such as MySQL or PostgreSQL, but the data of e-mail programs is also sensitive (mbox files more so than maildirs). The data files (usually stored somewhere in /var) can undergo change on a live system. This can be because of normal transactions, or automatic database clean-up. Never trust these data files of a running database, LDAP server, Subversion repository, or whatever similar software you may use.

If shutting down such software before the backup is not an option, schedule jobs which periodically dump the data of the database (using pg_dump for PostgreSQL, slapcat for OpenLDAP, "svnadmin dump" or svn-backup-dumps for Subversion, etc.) into (date-stamped) files. These files are then backed up and you should be safe. Use the software's native dump utility whenever possible, as pg_dump and slapcat are for PostgreSQL and OpenLDAP respectively.

Doing this (scheduled dumps) is always a good idea, regardless of the situation. Should the data suddenly get corrupted, you still have dumps of past situations, so not everything is lost. And when you dump them somewhere in the local file system, you don't have the hassle of searching through your backups when the need arises to restore the database (or other application data).
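
A hedged example of such scheduled, date-stamped dumps (database name, paths and times are assumptions):

# /etc/cron.d/db-dumps -- nightly dumps kept in the local file system
30 3 * * *  postgres  pg_dump mydb | gzip > /var/backups/pgsql/mydb-$(date +\%F).sql.gz
45 3 * * *  root      slapcat -l /var/backups/ldap/slapd-$(date +\%F).ldif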

5. General warnings
This chapter contains some general warnings and guidelines you should keep in mind.

5.1. Incremental backup and mtimes


Some backup utilities support incremental backups, meaning they only back up the data that has changed since the last backup. Rdiff-backup is a very good example, as it supports nothing else. My advice is to be careful with incremental backups and investigate how they detect change. The best way is if they support hash checking, but this can be slow. Second best, and very reliable and fast, is ctime checking. The file system has to support ctimes, though, but most do. Only the ones you shouldn't be using anyway, like FAT32, don't.

Some utilities only use the mtime, or the mtime+size combination, to detect changes in files. This is somewhat unreliable. For example, disk images attached to loop devices with losetup do not change their mtime when you mount the loop device and write to it. A real-world example I encountered was this: I had just read a disk image with ddrescue, which needed heavy file system correction. I decided to run my daily backup routine first. It occurred to me to check if the mtime of the image file actually changed when writing to the mounted loop device, because I suspected the file was not accessed through regular file-open routines. And indeed, I was right. To make sure the file would be backed up after I changed it, I needed to touch it first.

This could also be an issue with virtual machine disk images. I always use logical volumes as storage backends, so I can't check (easily), but you may want to check whether the mtime of your virtual machine images actually changes when they are written to. Or set up a backup routine in the virtual machine itself, of course (which is what I would do).

Another (small) real-world example is editing ID3 tags with Easytag. Easytag has an option to preserve the mtime of a file when changing the tag. Should the size of the altered tag be the same, when changing one character for example, the mtime and size will be identical, and the change will not be detected.

Rsync has an option to actually check if the file is different. The problem with this, however, is that it's very slow. For example, scanning my /home
for changed files takes longer than doing a complete backup with dar without compression. See the detailed rsync info below for more information.

You can of course decide that the risk of this failure in change detection is not a problem for you (since the chance is small), and benefit from the speed increase. Personally, I don't really like it when I can formulate a scenario in which the program is known to fail to do what it should, but I still accept it and use rdiff-backup on several locations.

5.2. Backing up into a file system


It may not be the smartest choice to simply copy your data to another file system. Using cp -a may do a pretty good job of preserving everything you need (but only when copying into a file system which supports everything the source file system supports, and in the case of cp, when not using extended attributes), but my concerns are of a different nature. It's all too easy to accidentally make a change to a file, or its meta data, by opening and saving it. It is more robust to have the data in archive files, as tar or dar produces.

Because rsync also simply stores its meta data in the file system, this warning also applies to rsync.

With the information provided in this article, you should be able to decide for yourself if this is an issue or not. It also has definite advantages, such
as rapidly being able to find and copy one single file out of the backup.

5.3. Archive size


A lot of file systems (and network protocols) have a very limited maximum file size. When making backups with software that creates archives, the size of these archives has to be considered. A maximum of 2 GB (or a little less, to be on the safe side) is usually a good idea. Files this size can be stored on ISO DVDs and FAT32 file systems. My preferred choice would be 650 MB, so that they can be burned onto (74-minute) CDs.

Even if you're not planning to store the backup on such a file system, it's still a good idea to split the archive files, so that you can still burn them to CD or DVD when you have to.

5.4. Restoring
Just as important as making the backup is restoring it. Just as you would read the man page to figure out the correct options for backing up, you should do so for restoring. In my opinion, a good backup program either has well-chosen defaults, or stores the options used during backup in the archive or meta data files, to be used again upon restoration.

Dar is good in this respect, as you should not need to specify any special options when restoring. When using tar you have to be somewhat more careful.

6. Software recommendations, disrecommendations and examples


This chapter describes some details and examples of a number of programs. Please note that there are far more backup programs in existence than the ones I mention here. The reason I mention these is that they are the ones I have a lot of experience with, and they show how to apply or consider the concepts described above in practice.

Sometimes I specifically refer to the GNU version of an application, the standard version on Linux installations, which can be fundamentally different from the "classic" version, so keep that in mind.

6.1. Dar
First a word of caution. It's highly recommended that you use version 2.3.3 (most recent stable release at the time of writing) or newer because it
contains a major bugfix. Read the announcement on Dar's news list for more info.

Dar is very well thought through and has solutions for classic pitfalls. For example, it comes with a statically compiled binary which you can copy onto (the first disk of) your backup. It supports automatic archive slicing, and has an option to set the size of the first slice separately, so you have some space left on the first disk for making a boot CD, for example. It also lets you run a command between each slice, so you can burn it to CD, or calculate parity information on it, etc. And, very importantly, its default options are well chosen. Well, that is, except for the preservation of atimes (see the section on atime preservation above).

I use the following command to backup my system on an external USB drive about once a week, using dar 2.2.6 (most site specific options
removed, and abstracted a bit):

dar --execute "par2 c -r5 \"%p/%b.%n.par2\" \"%p/%b.%n.%e\"" --alter=atime --empty-dir \
    --fs-root / --noconf --create ARCHIVE_NAME --slice 620M --first-slice 600M -z6 \
    -an -Z "*.ogg" -Z "*.avi" -Z "*.mp?" -Z "*.pk3" -Z "*.flac" -Z "*.zip" -Z "*.tgz" \
    -Z "*.gz" -Z "*.gzip" -Z "*.bz2" -Z "*.bzip2" -Z "*.mov" -Z "*.rar" -Z "*.jar" \
    --prune lost+found --prune usr/portage/ --prune var/tmp/portage --prune media \
    --prune proc --prune mnt --prune sys

With the --execute statement, I calculate parity information with par2. The mystery strings passed to par2 translate into the par2 file(s) to be generated and the archive slice name. The --alter=atime option is mentioned above. The --empty-dir option stores every excluded directory as an empty directory in the archive. The -an and subsequent -Z options specify what to exclude from compression, with case-insensitive masks. The compression level is specified with -z6. The --prune option is used to exclude paths. The rest should be clear.

I also used to run a daily backup with dar on /home, without compression. This was about 6 GB and took about 10 minutes; very feasible, I would say. However, when my home directory grew in size, I switched over to rsync and later to rdiff-backup, and accepted the change detection flaw.

Restoring a dar archive should be safe with the defaults (a very important aspect in my opinion), but read the man page to be sure.
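
As a hedged sketch, restoring such an archive from a live CD might look like this (the archive base name and target path are placeholders):

# point dar at the slice base name; it will find and read the slices itself
dar --extract /mnt/usb/ARCHIVE_NAME --fs-root /mnt/restore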

6.2. GNU Tar


GNU Tar supports everything needed to make a reliable backup (although I must say I don't know how well, if at all, it supports extended attributes). One has to be careful, though, to supply the --numeric-owner option. One might think --same-owner is also required, but a quick test and a glance at the manual show that the --preserve-permissions option (which is enabled by default for user root) implies it. Should you have forgotten the --numeric-owner option for the backup command, it can also be given at restore time. Giving it at backup time should negate the necessity of giving it at restore time, because tar won't store the textual names in the archive when you give this option.
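
Putting that together, a hedged example of a full-system backup and restore with GNU tar (the archive location and excludes are assumptions; add --acls and --xattrs if your tar version supports them and you use them):

# backup, run as root so ownership and permissions are stored
tar --numeric-owner --one-file-system \
    --exclude=./proc --exclude=./sys --exclude=./mnt --exclude=./media \
    -cpzf /mnt/backup/root.tgz -C / .

# restore from a live CD into the freshly prepared root file system
tar --numeric-owner -xpzf /mnt/backup/root.tgz -C /mnt/restore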

Tar also has no decent splitting ability. I've seen people recommend using split, but that is not very convenient. With split, you first have to create the archive somewhere it fits, and then split it up to be stored somewhere else. At restore time, this is even more annoying, because you first have to concatenate the segments with cat before you can extract them.
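
For reference, that workflow looks roughly like this (sizes and paths are placeholders), which illustrates the inconvenience:

# backup: create the full archive first, then split it into CD-sized pieces
tar --numeric-owner -cpzf /tmp/backup.tgz -C / .
split -b 650M /tmp/backup.tgz /mnt/backup/backup.tgz.part-

# restore: concatenate the pieces before tar can read them
cat /mnt/backup/backup.tgz.part-* | tar --numeric-owner -xpzf - -C /mnt/restore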

An additional problem with Tar was reported to me by a reader. With version 1.15.1 he got corrupted sparse files which were bigger than 4 GB.
Current releases (1.20+) of Tar don't seem to have this problem anymore, but you might want to check for yourself.

I recommend not using tar unless you can live with the limitations and are careful to supply the correct options.

6.3. Rdiff-backup

You have to consider its method of change detection to determine if rdiff-backup will be reliable enough for you. If you deal with disk images a lot, read the example above. I once discussed alternative change detection methods with the author, but because of a lack of time on his part, the discussion was never really concluded. It may very well be that in the future it will include a more reliable change detection system, based on hashes or ctimes. Note that rdiff-backup does store a hash in its meta data since version 1.1.1 (2005/11/05), but it still doesn't use it for detecting changes in files (as of version 1.2.1, 2008/08/24).

Also, when restoring a full system backup from a different OS installation, like a live CD, with rdiff-backup, be sure to use the --preserve-numerical-ids option, otherwise you will end up with files with wrong owners. This is very easy to forget (I've done it myself).
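
A hedged sketch of both directions (paths are placeholders):

# incremental backup of /home to an external drive
rdiff-backup --preserve-numerical-ids /home /mnt/backup/home

# restore the most recent state from a live CD
rdiff-backup --preserve-numerical-ids --restore-as-of now /mnt/backup/home /mnt/restore/home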

In the meantime, if you are confident enough that you won't suffer from the mtime problem described above, in your /home for example, it can safely be used. You may also find my concerns exaggerated and use it for your entire file system after all. I decided these concerns weren't big enough to stop me from using it. Rdiff-backup is very reliable and one of the best incremental backup programs, in my opinion.

6.4. Rsync
My main problem with rsync is that it stores its meta data as new meta data in the target file system. Not only does this restrict the choice of target file system, but it is also somewhat flaky, as described above. Also, rsync is one of the tools which uses mtime+size for detecting changes in files. And because its options for reliably checking for file changes (--ignore-times or especially --checksum) are too slow, dar without compression can be a better choice. That is, only when backing up locally to a fast medium, of course.

An important thing to note here is that because rsync doesn't store meta data files, it compares the mtime+size with the target. This means that only when you use the --times option, to preserve the mtime of the files, does this change detection fail in the scenario described above.

Rsync has a special switch, --archive, specifically meant for preserving meta data. Using this flag is not enough, however. For instance, it doesn't preserve hard links, because that would be too slow, nor does it preserve extended attributes or access control lists. So you need to supply --hard-links as well, and --acls and --xattrs should you use them (rsync supports extended attributes since version 3, I think). The options --sparse and --numeric-ids are also recommended, as outlined above. Additionally, I would supply "--delete --delete-excluded --delete-after" as well, so as not to get stale files in my backup. The --delete-after is necessary because otherwise, should the backup fail half-way, files that have been renamed since the last backup (meaning the old file is deleted and a new file is created) are deleted before the new one is transferred. It's best to first transfer the new file, then delete the old one.
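
Put together, a hedged example of what such a command could look like (source, destination and excludes are placeholders; --acls and --xattrs require rsync 3 on both ends):

rsync --archive --hard-links --sparse --numeric-ids --acls --xattrs \
      --delete --delete-excluded --delete-after \
      --exclude=/proc --exclude=/sys --exclude=/mnt --exclude=/media \
      / /mnt/backup/root/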

It is also important to construct a proper restore command. Because the destination should be an exact mirror of the source, using the same options as you used for making the backup is probably enough.

Some time ago, rsync 3 was released. It contains interesting new features, such as support for access control lists and extended attributes. It may still have the change detection issue, but especially since version 3, you might want to consider using it (perhaps only for a part of your backup plan), because it's very efficient and thorough.

6.5. GNU cp
I always thought that cp -a would preserve everything you need, but it appears it doesn't copy access control list information, and probably not extended attributes in general. I don't use access control lists, so I can't do any tests with them; you will want to test this yourself.

6.6. Clonezilla

Clonezilla is an elaborate live-CD for making images of several types of file systems, including NTFS. Besides some interface quirks, it's a
beautifully engineered piece of software. It does all the extra things one expects, like using dd to backup the MBR and the space between the MBR
and the first partition. It even calls sync when it's done. It's almost as if they read this article :). And, perhaps the most important: it allows you to
remove the CD when halting or rebooting!

6.7. Partition cloners in general


Partition cloners, like g4u, partimage, Clonezilla (or dd...), can be very convenient, but they (mostly) have one major annoyance: they (often) require that the backup is restored onto an identical disk and/or partition layout. If your disk explodes and you need to get a new one, this can be difficult.

However, this is not always the case. I recently dd'ed an entire disk over to another one (that was 1 GB bigger), with dd if=/dev/sda of=/dev/sdb. The new one now has unpartitioned space, but it still works fine. The Windows XP installation that it contained still boots on the new drive, even though Windows (XP) is very picky about that. In any case, restoring to a smaller partition can be a problem. Sometimes you can improvise, but this is something best avoided. If you are going to use this method of backing up, it's a good idea to make sure replacement disks are always big enough. You can do this by assuming disks will always get bigger in the future, or by keeping your partitions small (but not so small that you inhibit fragmentation prevention).

The less intelligent cloners, like g4u or dd, take an enormous amount of time, because they copy every file system block, including the ones that
are not used. And, when these unused blocks are not zeroed out first, the resulting image is very large.

Another aspect most people will find annoying, is that you have to take your machine down to make the image. To my knowledge, there are no
partition imagers which can do this on a live system. In any case, you certainly shouldn't use dd on a live partition...

7. Automation
When automating backups, there is one important thing you have to keep in mind: sync the disk buffers as often as possible, and preferably wait a
second or two after the sync, for the drive to write its own cache to the disk itself. Some disks lie about having written their cache to disk, so it's not
always safe to assume that all the volatile cache data has been written after the sync command returns, hence the wait.

To illustrate why this is important, consider a tar backup routine which first makes the backup and then removes the old one. If the cache isn't synced and the power fails while the old backup is being removed, you may end up with both the new and the old backup corrupted. It has been suggested to me that this is pointless, because cache data is serialized, so any delete after the initial backup command first writes the volatile cache. This is not the case, especially if you have a disk with NCQ/TCQ, which all modern disks have. The whole point of the write cache is to be able to write out of order.

My suggestion is to run the backup like this:

sync ; sleep 2
mount target
[BACKUP-COMMAND]
sync ; sleep 2
[CLEAN-COMMAND]
sync ; sleep 2
umount

The sync at the end is also important, so that when the unmount fails, the data is still safe when the drive is unplugged or whatever.

8. File system snapshotting


With tools like LVM and Btrfs you have the ability to take snapshots of the file system. This allows for some extra flexibility when making backups.
For instance, you can put your root file system on logical volumes (especially easy, and recommended, for virtual machines), giving you the ability to
make a snapshot before dangerous operations, allowing for easy reverting in case something goes wrong. Snapshots are also convenient for
making consistent full file system backups; you make the snapshot, use partclone or similar tools, and delete it again.

Remember that snapshots are usually copy-on-write. That means that when you (over)write something on the source, the state as it was is copied
to the snapshot. This means that having snapshots around decreases performance.
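
A hedged sketch of the snapshot-then-backup approach with LVM (volume group, names and sizes are assumptions):

# create a snapshot of the root logical volume, with 2 GB of copy-on-write
# space to absorb changes made while the backup runs
lvcreate --size 2G --snapshot --name rootsnap /dev/vg0/root

# back up the frozen state of the file system
mount -o ro /dev/vg0/rootsnap /mnt/snapshot
tar --numeric-owner -cpzf /mnt/backup/root.tgz -C /mnt/snapshot .
umount /mnt/snapshot

# remove the snapshot again, so it stops costing write performance
lvremove -f /dev/vg0/rootsnap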

9. Choice of file system


Although it is somewhat out of the scope of this article, I would like to say a few words about the choice of file system. People often choose Reiser FS over Ext3 because it's new or just different. The default file system on most Linux systems is Ext3. My recommendation is to stay with that, unless you have a specific reason to use something else. Reiser FS, for example, does logical journaling, which can reportedly be dangerous in the event of a power failure. Also, Hans Reiser himself has said (or so it's stated) that Reiser FS is optimized for speed, not correctness.

Even the new Ext4 file system has potential problems, because of delayed allocation. Ext3 has it as an option (data=writeback), but for Ext4, it's the default. Delayed allocation means, in the case of Ext4, that the data is written to disk on the order of once a minute, whereas the meta data is written much more often, on the order of every few seconds. Linus Torvalds says: "It literally does everything the wrong way around -- writing data later than the metadata that points to it. Whoever came up with that solution was a moron. No ifs, buts, or maybes about it. At least with ext3 it's not the default mode."

Anyway, my point is to investigate any file system you may want to use. And, Ext3 may be "ordinary", but it is not that bad. The references chapter
has some links to more information about the subject.
