Backups are taken to protect data from human errors, hardware failures, power failures, and software errors. There are two main types of backups: physical backups and logical backups (exports). Physical backups involve copying datafiles, redo logs, and control files, while exports create a logical copy of the data and DDL statements in a dump file. Exports allow recovery of individual items and moving data between databases. The main use of backups is to restore lost or corrupted data.
Backups
We take backups to protect against:

o Human errors
o Hardware failures
o Power failures
o Software errors

The main use of a backup is to restore the data. There are two kinds of backup:

o Physical backups
A physical backup is a copy of the control files, redo log files, and data files. Every datafile contains some free space in its blocks, so a physical backup copies that free space along with the data. This is a manual process, and it protects against global failure. Physical backups come in two types:
Cold backup: the database must be shut down while the backup is taken.
Hot backup: the database keeps running while the backup is taken.

o Logical backups (or "exports")
In a logical backup the data is read and written to another disk, so the free space in the datafile is not copied. Logical backups take a large amount of time, protect against accidental deletion of tables, and can be run by the user himself.

Logical backups

o Oracle Export / Import are the utilities used for logical backups. They generate a file containing a logical copy of the data and the application objects.
o Export is the utility that backs up data from the database; Import is the utility that restores data into the database.
o These utilities are useful for recovering specific items lost due to user errors (this is what they are used for most of the time).
o Export writes the object type definitions and all associated data to the dump file; Import then re-creates those objects from the dump file.
o The database must be up and running while an export is performed.
o Export reads the database using SQL (like a SELECT statement); the export file contains CREATE and INSERT statements.
o Export provides a read-consistent view of the database: only committed transactions are read, and data committed after the export starts is not backed up.

Benefits of export
o Easy to recover individual items.
o Portable: can be used to move data from one machine to another.
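The difference between the two kinds of backup can be sketched with mock files. This is an illustration only, not real Oracle files: a physical copy carries the free space inside the "datafile", while a logical export extracts only the rows.

```shell
# Mock illustration (stand-in files, illustrative sizes):
printf 'row1\nrow2\n' > /tmp/datafile.dbf        # the actual rows
truncate -s 1M /tmp/datafile.dbf                 # pad with 1 MB of "free space"
cp /tmp/datafile.dbf /tmp/physical.bkp           # physical backup: full-size copy
grep -a '^row' /tmp/datafile.dbf > /tmp/logical.dmp  # logical: only the rows
ls -l /tmp/physical.bkp /tmp/logical.dmp
```

The physical copy is the full 1 MB including free space; the logical dump holds only the two rows, which is also why logical backups read more slowly but produce smaller files.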
Export can be done in different modes:

o Full database mode: all database objects except those owned by the SYS schema are exported and written to the export dump file. The dump file includes the business data and the Data Definition Language (DDL) statements needed to recreate the full database.
o User mode: all objects owned by a given schema are exported and written to the export dump file. Grants and indexes created by users other than the owner are not exported. Privileged database users, including the DBA, can export all objects owned by one or more schemas.
o Table mode: specified tables owned by the user's schema are exported and written to the export dump file. This mode also lets the user export specified partitions of a table. Privileged database users, including the DBA, can export specified tables owned by other database users.

Examples

o Full database EXPORT/IMPORT as the DBA:

~]$ exp system/manager file=fullexp.dmp log=fullexp.log full=y statistics=none
SQL> conn scott/tiger          (user name / password)
SQL> drop table emp purge;
~]$ imp system/manager file=fullexp.dmp log=fullimp.log full=y ignore=y
SQL> conn scott/tiger
SQL> select * from tab;

In "drop table emp purge;", the PURGE keyword works like Shift+Delete: the table is removed permanently. Without PURGE, a dropped table goes to the recycle bin (introduced in 10g); after "drop table t1;", a "select * from tab;" still shows the dropped table under another (recycle-bin) name.
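The same full export can be driven by a parameter file instead of command-line arguments. PARFILE is a real exp option; the file name /tmp/full.par below is illustrative.

```shell
# Write an exp parameter file equivalent to the command-line example above.
cat > /tmp/full.par <<'EOF'
FILE=fullexp.dmp
LOG=fullexp.log
FULL=y
STATISTICS=none
EOF
cat /tmp/full.par
# then run: exp system/manager parfile=/tmp/full.par
```

A parameter file keeps long option lists out of the shell history and makes repeated exports reproducible.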
o USER-level EXPORT/IMPORT:

~]$ exp system/manager file=u1_exp.dmp log=u1_exp.log owner=u1
SQL> conn / as sysdba
SQL> create user u2 identified by u2;
SQL> grant connect, resource to u2;
~]$ imp system/manager file=u1_exp.dmp log=u1_imp.log fromuser=u1 touser=u2

o Table-level EXPORT/IMPORT:

~]$ exp u1/u1 file=emp.dmp log=emp_exp.log tables=emp
SQL> conn u1/u1
SQL> drop table emp purge;
~]$ imp u1/u1 file=emp.dmp log=emp_imp.log tables=emp

o Query-level export:

~]$ exp u1/u1 file=emp_rows.dmp log=emp_rows.log tables=emp query=\'where deptno=10\'
SQL> conn u1/u1
SQL> delete from emp where deptno=10;
SQL> commit;
~]$ imp u1/u1 file=emp_rows.dmp log=imp_rows.log

An Oracle block is a combination of one or more OS blocks.

~]$ exp help=y

When do we use export/import?
o To copy one table's data into another table.
o To copy one user's objects to another user.
o To copy one user's table to another user's table.

How can you make your export faster?
o Use the parameter direct=y. A direct-path export bypasses the conventional path through the SQL layer and the buffer cache and writes the data straight to the dump file.
o Use the parameter buffer. An export/import operation occupies some space in the SGA; buffer sets the size (in bytes) of the array used to fetch rows. With a higher value, more rows are read into memory at a time before being written to the dump file.

~]$ exp system/manager file=u1_exp.dmp log=u1_exp.log owner=u1,u2 buffer=123457890
~]$ imp help=y

How to make import faster?
o By using the buffer parameter.

Parameter ignore: when we import data into a table that already exists, import throws an error because the CREATE statement fails. With ignore=y the creation error is ignored and the rows from the dump file are inserted into the existing table (rows that violate a unique constraint are rejected).
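Rather than picking an arbitrary huge number for buffer, it can be sized from the fetch array. A minimal sketch, assuming the common sizing rule buffer = rows-per-fetch times maximum row size in bytes; the numbers below are illustrative assumptions, not measured values.

```shell
# Hypothetical sizing: 1000 rows per fetch, rows up to 120 bytes each.
rows_per_fetch=1000
max_row_bytes=120
echo $((rows_per_fetch * max_row_bytes)) > /tmp/exp_buffer
echo "buffer=$(cat /tmp/exp_buffer)"
# then: exp u1/u1 file=u1_exp.dmp log=u1_exp.log owner=u1 buffer=$(cat /tmp/exp_buffer)
```

This yields buffer=120000 for the assumed values; larger rows (LONGs, big VARCHAR2s) need a proportionally larger buffer.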
By default ignore=n.

~]$ imp system/manager file=full_db.dmp log=imp_u1.log fromuser=u1 touser=u2 ignore=y

How can I find out the dump file content?
o The parameter show is used to list the contents of the dump file without actually importing it.

~]$ imp system/manager file=u1_exp.dmp log=show.log show=y full=y

o With show=y, import does not load the actual content; it only writes the SQL contained in the dump file to the log file.

Physical backups

o A physical backup is a copy of the control files, redo log files, and data files.

o Cold backup (offline backup)
Do a clean shutdown, i.e. SHUTDOWN IMMEDIATE or SHUTDOWN NORMAL, but not SHUTDOWN ABORT.
Go to the oradata directory and copy the control files, redo log files, and datafiles to a backup directory:

disk1]$ mkdir coldbkp
disk1]$ cd oradata/orcl
orcl]$ cp *.dbf ../../coldbkp/
orcl]$ cp *.log ../../coldbkp/
orcl]$ cp *.ctl ../../coldbkp/
orcl]$ cd ../../coldbkp/

Cold backups are rarely used, because a database that runs 24/7 cannot be shut down. A cold backup is a consistent backup: we can simply restore it to bring the database up and running.

o Hot backup (online backup)
A hot backup is an online backup; no downtime is needed.
We must put the tablespace in begin-backup mode. This freezes the datafile header (the header contains the SCN and details about the data). While the header is frozen the SCN in it is not updated, but transactions continue to be written to the datafile.
If we simply copy the datafiles and control files while the database is up and running, the copies are inconsistent and of no use; begin-backup mode is what makes an OS-level copy usable.
While performing a hot backup we copy the datafiles at the OS level; once a datafile is copied, we issue END BACKUP.
What is the state of the redo log files while the tablespace is in begin-backup mode? When a redo log file is full, its contents are written to the archive log files; after END BACKUP, the copied datafile is brought up to date during recovery using those archived logs.
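Because a cold backup is only useful if the copies are exact, it is worth verifying each copied file against its source. A mock check with stand-in files (not a real database); paths are illustrative:

```shell
# Create a mock source "datafile", copy it, and confirm the copy is byte-identical.
mkdir -p /tmp/orcl /tmp/coldbkp_demo
echo "mock datafile contents" > /tmp/orcl/system01.dbf
cp /tmp/orcl/system01.dbf /tmp/coldbkp_demo/
cmp -s /tmp/orcl/system01.dbf /tmp/coldbkp_demo/system01.dbf && echo "backup verified"
```

The same cmp (or a checksum tool such as md5sum) can be run over every .dbf, .log, and .ctl file after the copy step above.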
While a tablespace is in begin-backup mode, excessive redo log data is generated. It is mandatory that archive log mode is enabled before a hot backup:

SQL> shutdown immediate
SQL> startup mount
SQL> alter system set db_recovery_file_dest='/disk1';
(this is where the archive logs are stored; in 9i, log_archive_dest was used)
SQL> alter system set db_recovery_file_dest_size=2G;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list
SQL> alter tablespace system begin backup;
SQL> select * from v$backup;
SQL> alter tablespace system end backup;

ALTER DATABASE ARCHIVELOG must be issued while the database is mounted but not yet open, which is why the sequence restarts with STARTUP MOUNT.

How is database consistency checked?
o Database consistency is checked by comparing the SCNs recorded in the datafiles and redo log files with the SCN in the control file. This check by SCN (System Change Number) is called a sanity check.

What is a checkpoint?
o A checkpoint performs the sanity check: it synchronizes the SCN recorded in the datafile headers and the control file.
o The checkpoint process also writes a heartbeat to the control file every 3 seconds.

What is the SCN?
o The System Change Number is a number that is incremented for every committed transaction. It is recorded in the control files, data files, and redo log files.
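The sanity check can be pictured with mock files. This is an illustrative sketch only: the real comparison happens inside Oracle at startup, and the file names and SCN value below are invented stand-ins for the control file and two datafile headers.

```shell
# Mock "headers": one SCN from the control file, two from datafile headers.
echo 1042 > /tmp/control_scn
echo 1042 > /tmp/system01_scn
echo 1042 > /tmp/users01_scn
ok=yes
for f in /tmp/system01_scn /tmp/users01_scn; do
  # Any header SCN that disagrees with the control file marks the DB inconsistent.
  [ "$(cat "$f")" = "$(cat /tmp/control_scn)" ] || ok=no
done
echo "consistent=$ok" > /tmp/sanity_result
cat /tmp/sanity_result
```

When all SCNs match, the database is consistent and opens cleanly; a mismatch is what signals that recovery (applying redo or archived logs) is required.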