Oracle Redo Log
All You Wanted To Know About Redo Log But Were Afraid To Ask
Contents

Introduction
Target Audience
Significance
What Is Commit and What It Has To Do with Redo
    What Has Been Done till Line Number 6 before Crash
    What Is Still To Be Done, but System Crashed
Online Redo Log Files
    Circular Use of Redo Log Files
    The Use of Online Redo Logs
Just To Note How Much Redo Am I Generating
Let's Get More into Block Size Fundamentals
What If I Don't Want Redo?
    NOLOGGING Option
Oracle Utilities - LogMiner
    Step 1: Creating the Data Dictionary
Options and Usage
    DBMS_LOGMNR_D
    DBMS_LOGMNR
    Limits of LogMiner
Why Redo Log Space Requests Always Increase?
How To Estimate the Size of Redo Logs
Cannot Allocate New Log and Checkpoint Not Complete Errors
Performance Issues Associated
    More Log Switches
    What Should Be the Size of the Buffer for Optimum Performance?
    Slower LGWR
    High Frequency of Checkpoints
Summary on Tuning
Reference(s)
Author(s)
Introduction
This paper provides detailed information to understand how the database redo log buffer works and how to detect and resolve tuning problems related to the Oracle memory subsystem. It also explains, in brief, how to use the Oracle utility LogMiner. Most of the concepts brought to light here apply to Oracle8i only, and the scripts should be used at the reader's discretion.
Target Audience
Database administrators and Oracle application developers who have knowledge of the Oracle8i server and its memory subsystems.
Excerpts

Are you safe and comfortable with a database that does not have a recovery system? I guess not! One would expect the database to behave like a stenographer who logs everything somewhere, so that when the boss says "go and draft it!", the writing comes out on paper. This is just what the redo log buffer does: it logs every change in memory, within the System Global Area (SGA), the Oracle memory subsystem, and when a commit is issued the log writer (LGWR) writes those entries permanently to the online redo log files.

The redo log buffer is a circular buffer in the SGA that holds information about changes made to the database. This information is stored in redo entries. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER, or DROP operations, and they are used for database recovery if necessary. Redo entries are copied by Oracle server processes from the user's memory space to the redo log buffer in the SGA, where they take up continuous, sequential space. The background process LGWR writes the redo log buffer to the active online redo log file (or group of files) on disk.
Significance
Redo log buffers are extremely crucial to the Oracle database. The redo logs are, in effect, the transaction logs of the database, used for recovery purposes; their only purpose is to be used in the event of an instance or a media failure. If power goes off on your database machine, causing an instance failure, Oracle will use the online redo logs to restore the system to exactly the point it was at immediately prior to the power outage. In the case of a disk failure, the DBA's nightmare, Oracle will use the archived redo logs as well as the online redo logs to recover from the accident.
What is commit and what it has to do with redo

Consider a simple timing test that inserts 1000 rows into a table named test and prints the elapsed time in hsecs (hundredths of a second, as reported by dbms_utility.get_time); line numbers are shown because the discussion refers to them:

 1  declare
 2      time_stat number := dbms_utility.get_time;
 3  begin
 4      for i in 1 .. 1000 loop
 5          insert into test values ( 1 );
 6      end loop;
 7      commit;
 8      dbms_output.put_line( dbms_utility.get_time - time_stat || ' hsecs' );
 9  end;
10  /
The output on my machine is 21 hsecs if the commit is done at the end of the complete transaction (that is, after doing all 1000 inserts). If the commit is shifted from line 7 up to line 6, i.e., a commit is done after every insert statement, then the time taken is around 54 hsecs. These timings are from a slow machine, but the rule holds true in general: if you want your jobs to run faster, the first case might seem correct to you, but your mileage may vary, as it increases the transaction size. But what if the disk fails or the instance crashes while the loop at line 6 is still running? If any such thing happens, you can tabulate the following for your reference:
As I said, commit doesn't do much beyond calling LGWR. It is LGWR that performs the lengthiest operation, since it has to do the real physical I/O. We will see more about LGWR later; let's first see what a redo record has to offer.
Each atomic change in a redo record is called a change vector. A redo record is composed of all the change vectors involved in a change. For example, if you update all the values in a multi-column row in a table, Oracle generates a redo record containing change vectors for all the changed blocks corresponding to each updated value. If you then commit the update, Oracle generates another redo record and assigns the change an SCN (system change number). In this way, the system maintains a careful watch over everything that occurs in the database.
The system change number (SCN) is an ever-increasing internal timestamp that uniquely identifies a committed version of the database. Every time a user commits a transaction, Oracle records a new SCN. You can obtain SCNs in a number of ways, for example, from the alert log. You can then use the SCN as an identifier for purposes of recovery; for example, you can perform an incomplete recovery of a database up to SCN 1030. Oracle uses SCNs in control files, datafile headers, and redo records. Every redo log file has a log sequence number and both a low and a high SCN: the low SCN is the lowest SCN recorded in the log file, while the high SCN is the highest.
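The sequence number and the low and high SCNs of past log files can be inspected directly; below is a minimal sketch against the V$LOG_HISTORY view, where FIRST_CHANGE# is the low SCN of a given log and NEXT_CHANGE# is the SCN at which the next log begins:

SQL> select sequence#, first_change# low_scn, next_change# high_scn
  2    from v$log_history
  3   where rownum <= 5;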
SQL> create table t
  2  as
  3  select * from all_objects;

Table created.

SQL> insert into t select * from t;

1800 rows created.

SQL> insert into t select * from t;

3600 rows created.

SQL> insert into t select * from t where rownum < 12000;

7200 rows created.

SQL> commit;

Commit complete.

SQL> create or replace procedure do_commit( p_rows in number )
  2  as
  3      l_start number;
  4      l_after_redo number;
  5      l_before_redo number;
  6  begin
  7      select v$mystat.value into l_before_redo
  8        from v$mystat, v$statname
  9       where v$mystat.statistic# = v$statname.statistic#
 10         and v$statname.name = 'redo size';
 11
 12      l_start := dbms_utility.get_time;
 13      insert into t select * from t where rownum < p_rows;
 14      dbms_output.put_line
 15      ( sql%rowcount || ' rows created' );
 16      dbms_output.put_line
 17      ( 'Time to INSERT: ' ||
 18        to_char( round( (dbms_utility.get_time-l_start)/100, 5 ),
 19                 '999.99') ||
 20        ' seconds' );
 21
 22      l_start := dbms_utility.get_time;
 23      commit;
 24      dbms_output.put_line
 25      ( 'Time to COMMIT: ' ||
 26        to_char( round( (dbms_utility.get_time-l_start)/100, 5 ),
 27                 '999.99') ||
 28        ' seconds' );
 29
 30      select v$mystat.value into l_after_redo
 31        from v$mystat, v$statname
 32       where v$mystat.statistic# = v$statname.statistic#
 33         and v$statname.name = 'redo size';
 34
 35      dbms_output.put_line
 36      ( 'Generated ' ||
 37        to_char( l_after_redo - l_before_redo,
 38                 '999,999,999,999' ) ||
 39        ' bytes of redo' );
 40
 41  end;
 42  /
Procedure created.

Now, committing transactions of various sizes changes the amount of redo generated, and here we are going to note exactly that:

SQL> set serveroutput on format wrapped
SQL> begin
  2      for i in 1 .. 5
  3      loop
  4          do_commit( power(10,i) );
  5      end loop;
  6  end;
  7  /
9 rows created
Time to INSERT:     .00 seconds
Time to COMMIT:     .00 seconds
Generated 1,344 bytes of redo
99 rows created
Time to INSERT:     .00 seconds
Time to COMMIT:     .00 seconds
Generated 10,536 bytes of redo
999 rows created
Time to INSERT:     .01 seconds
Time to COMMIT:     .00 seconds
Generated 106,764 bytes of redo
9999 rows created
Time to INSERT:    2.42 seconds
Time to COMMIT:     .00 seconds
Generated 1,086,604 bytes of redo
25506 rows created
Time to INSERT:    9.64 seconds
Time to COMMIT:     .00 seconds
Generated 2,763,004 bytes of redo

PL/SQL procedure successfully completed.

SQL> show parameter log_buffer;

NAME                        TYPE        VALUE
--------------------------- ----------- ------------------
log_buffer                  integer     163840
It can be seen from the above that as the transaction size grows, the redo generated also increases; although it may not be exactly proportional to the transaction size, the redo generated clearly grows with it and with the commit frequency. The figures above reflect the statistics of my session only.

Now let us investigate how we might determine the amount of redo a given transaction will generate. It is straightforward to estimate how much redo will be generated if you know how much data you will be modifying. Let's create a table whose row size is about 2010 bytes. Since the CHAR data type always consumes the maximum amount of storage, this row is 2000 bytes for the CHAR, 7 bytes for the DATE, and up to 3 bytes for the INT, so the total comes to about 2010 bytes, plus some row overhead:

SQL> create table t (x int, y char(2000), z date);

Table created.

Our aim here is to look at scenarios where we can find out how much redo is generated when an INSERT, UPDATE, or DELETE is performed on one, ten, and lots of rows. We already know that updating a table row by row takes more time than doing it in a single statement. To measure the redo generated, we will use the AUTOTRACE facility, or a direct query against a view over V$MYSTAT/V$STATNAME that shows our session's redo size:

SQL> create or replace view redo_size
  2  as
  3  select value
  4    from v$mystat, v$statname
  5   where v$mystat.statistic# = v$statname.statistic#
  6     and v$statname.name = 'redo size';

View created.
SQL> set serveroutput on
SQL> set autotrace traceonly statistics

Before doing any DML operation, set AUTOTRACE to "traceonly statistics"; this sets up the environment so that the statistics of each statement are reported after it runs. Below, a series of DML operations is performed so that one can interpret from the results how much redo is generated when a DML operation is done in different ways and in different quantities.
The operations done are as below:

- Inserting/Deleting/Updating the values in table t with 1 record in a single statement
- Inserting/Deleting/Updating the values in table t with 10 records in a single statement
- Inserting/Deleting/Updating the values in table t with 200 records in a single statement
- Inserting/Deleting/Updating the values in table t with 200 records, record by record
SQL> insert into t values ( 1, user, sysdate );

1 row created.

Statistics
----------------------------------------------------------
          2  recursive calls
          8  db block gets
          2  consistent gets
          0  physical reads
       2668  redo size
        765  bytes sent via SQL*Net to client
        677  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> insert into t
  2  select object_id, object_name, created
  3    from all_objects
  4   where rownum <= 10;

10 rows created.

Statistics
----------------------------------------------------------
          7  recursive calls
         66  db block gets
         57  consistent gets
          0  physical reads
      25584  redo size
        765  bytes sent via SQL*Net to client
        727  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
         10  rows processed

SQL> insert into t
  2  select object_id, object_name, created
  3    from all_objects
  4   where rownum <= 200
  5  /

200 rows created.

Statistics
----------------------------------------------------------
        474  recursive calls
       1031  db block gets
       1465  consistent gets
          0  physical reads
     506848  redo size
             bytes sent via SQL*Net to client
             bytes received via SQL*Net from client
             SQL*Net roundtrips to/from client
             sorts (memory)
             sorts (disk)
        200  rows processed
SQL> declare
  2      l_redo_size number;
  3      l_cnt number := 0;
  4  begin
  5      select value into l_redo_size from redo_size;
  6      for x in ( select * from all_objects where rownum <= 200 ) loop
  7          insert into t values
  8          ( x.object_id, x.object_name, x.created );
  9          l_cnt := l_cnt + 1;
 10      end loop;
 11      select value - l_redo_size into l_redo_size from redo_size;
 12      dbms_output.put_line( 'redo size = ' || l_redo_size ||
 13                            ' rows = ' || l_cnt );
 14  end;
 15  /
redo size = 503888 rows = 200
PL/SQL procedure successfully completed.

SQL> update t set y=lower(y) where rownum = 1;

1 row updated.

Statistics
----------------------------------------------------------
          0  recursive calls
          5  db block gets
          1  consistent gets
          0  physical reads
       4236  redo size
        767  bytes sent via SQL*Net to client
        676  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> update t set y=lower(y) where rownum <= 10;

10 rows updated.

Statistics
----------------------------------------------------------
          0  recursive calls
         34  db block gets
         10  consistent gets
          0  physical reads
      43000  redo size
             bytes sent via SQL*Net to client
             bytes received via SQL*Net from client
             SQL*Net roundtrips to/from client
             sorts (memory)
             sorts (disk)
         10  rows processed
SQL> update t set y=lower(y) where rownum <= 200;

200 rows updated.

Statistics
----------------------------------------------------------
          0  recursive calls
        604  db block gets
        200  consistent gets
          0  physical reads
     867832  redo size
        767  bytes sent via SQL*Net to client
        679  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
        200  rows processed

SQL> declare
  2      l_redo_size number;
  3      l_cnt number := 0;
  4  begin
  5      select value into l_redo_size from redo_size;
  6      for x in ( select rowid r from t where rownum <= 200 ) loop
  7          update t set y=lower(y) where rowid = x.r;
  8          l_cnt := l_cnt + 1;
  9      end loop;
 10      select value - l_redo_size into l_redo_size from redo_size;
 11      dbms_output.put_line( 'redo size = ' || l_redo_size ||
 12                            ' rows = ' || l_cnt );
 13  end;
 14  /
redo size = 867872 rows = 200
PL/SQL procedure successfully completed.

SQL> delete from t where rownum = 1;

1 row deleted.

Statistics
----------------------------------------------------------
          0  recursive calls
          8  db block gets
          1  consistent gets
          0  physical reads
       2584  redo size
             bytes sent via SQL*Net to client
             bytes received via SQL*Net from client
             SQL*Net roundtrips to/from client
             sorts (memory)
             sorts (disk)
          1  rows processed
SQL> delete from t where rownum <= 10;

10 rows deleted.

Statistics
----------------------------------------------------------
          0  recursive calls
         44  db block gets
         11  consistent gets
          0  physical reads
      25840  redo size
        768  bytes sent via SQL*Net to client
        668  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
         10  rows processed

SQL> delete from t where rownum <= 200;

200 rows deleted.

Statistics
----------------------------------------------------------
          0  recursive calls
        804  db block gets
        211  consistent gets
          0  physical reads
     519144  redo size
        768  bytes sent via SQL*Net to client
        669  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
        200  rows processed

SQL> declare
  2      l_redo_size number;
  3      l_cnt number := 0;
  4  begin
  5      select value into l_redo_size from redo_size;
  6      for x in ( select rowid r from t where rownum <= 200 ) loop
  7          delete from t where rowid = x.r;
  8          l_cnt := l_cnt + 1;
  9      end loop;
 10      select value - l_redo_size into l_redo_size from redo_size;
 11      dbms_output.put_line( 'redo size = ' || l_redo_size ||
 12                            ' rows = ' || l_cnt );
 13  end;
 14  /
redo size = 518904 rows = 200

PL/SQL procedure successfully completed.
The PL/SQL blocks used in between to capture redo sizes perform the 200 individual (row-by-row) inserts, deletes, and updates. The results are tabulated in Table 1 below:
Tab No. 1: Redo generated for various operations

Operation   Type of operation*   Rows Affected   Total Redo   Redo per Row
---------   ------------------   -------------   ----------   ------------
INSERT      ALL                              1         2668        2668
INSERT      ALL                             10        25584        2558
INSERT      ALL                            200       506848        2534.24
INSERT      ROW BY ROW                     200       503888        2519.44
UPDATE      ALL                              1         4236        4236
UPDATE      ALL                             10        43000        4300
UPDATE      ALL                            200       867832        4339.16
UPDATE      ROW BY ROW                     200       867872        4339.36
DELETE      ALL                              1         2584        2584
DELETE      ALL                             10        25840        2584
DELETE      ALL                            200       519144        2595.72
DELETE      ROW BY ROW                     200       518904        2594.52
* "Type of operation" refers to the manner in which the rows are inserted/updated/deleted: ALL means all rows are modified/inserted using one statement; ROW BY ROW means the rows are inserted/modified one at a time. It has been noted that updating a table row by row is slower than updating it in a single statement, hence this exercise.

Let's get more into block size fundamentals

The redo log block size is operating-system dependent; on Solaris, AIX, NT, Linux, SGI Irix, DG/UX, OpenVMS, NetWare, and UnixWare it is 512 bytes.
The log block size is the unit for the settings of the log_checkpoint_interval, _log_io_size, and max_dump_file_size parameters specified in init.ora, so it is an important constant to know. If your operating system is not listed above, you can find your log block size using the following query:

select max(lebsz) from sys.x$kccle;

There is a 16-byte header for each log block, and the size of the data area is approximately the number of bytes of redo generated ('redo size') plus the number of bytes of redo space left unused ('redo wastage'), divided by the number of log blocks written ('redo blocks written'). Thus the approximate formula is:

    log block size = 16 + (redo size + redo wastage) / redo blocks written

This formula commonly understates the log block size by a few bytes, because it does not allow for redo that has been generated but not yet written, and the 'redo size' statistic is itself commonly a little inaccurate.
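The formula can be evaluated straight from V$SYSSTAT; a minimal sketch, using the instance-wide statistic names given above:

SQL> select 16 +
  2         ( sum( decode( name, 'redo size',    value, 0 ) ) +
  3           sum( decode( name, 'redo wastage', value, 0 ) ) ) /
  4         sum( decode( name, 'redo blocks written', value, 0 ) )
  5             as approx_log_block_size
  6    from v$sysstat
  7   where name in ( 'redo size', 'redo wastage', 'redo blocks written' );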
NOLOGGING option
NOLOGGING is an option that can be specified when one creates a table or performs certain DDL operations that the user does not want fully logged; as was said earlier, redo generation cannot be stopped altogether, but it can be reduced. Note that the LOGGING/NOLOGGING attribute of a table is independent of that of its indexes.

For a table or table partition, if you omit this clause, the logging attribute defaults to the logging attribute of the tablespace in which it resides. For LOBs, if you omit this clause: if you specify CACHE, then LOGGING is used (because you cannot have CACHE NOLOGGING); if you specify NOCACHE or CACHE READS, the logging attribute defaults to that of the tablespace in which the LOB resides. NOLOGGING does not apply to LOBs that are stored inline with row data. That is, if you specify NOLOGGING for LOBs with values less than 4000 bytes and you have not disabled STORAGE IN ROW, Oracle ignores the NOLOGGING specification and treats the LOB data the same as other table data.

For nonpartitioned tables, the value specified for LOGGING is the actual physical attribute of the segment associated with the table. For partitioned tables, the logging attribute value specified is the default physical attribute of the segments associated with all partitions specified in the CREATE statement (and in
subsequent ALTER TABLE ... ADD PARTITION statements), unless you specify LOGGING|NOLOGGING in the PARTITION description.

In NOLOGGING mode, data is modified with minimal logging (to mark new extents INVALID and to record dictionary changes). When applied during media recovery, the extent invalidation records mark a range of blocks as logically corrupt, because the redo data is not fully logged. Therefore, if you cannot afford to lose this table, you should take a backup after the NOLOGGING operation. The redo generated for an operation in NOLOGGING mode is significantly smaller than the redo generated with the LOGGING attribute set. If the database is run in ARCHIVELOG mode, media recovery from a backup taken before a LOGGING operation restores the table; however, media recovery from a backup taken before a NOLOGGING operation does not restore the table.

Let's see what happens when the NOLOGGING keyword is used:

SQL> column value new_value old_value
SQL> select value from redo_size;

     VALUE
----------
         0

SQL> create table t as select * from ALL_OBJECTS
  2  /

Table created.

SQL> select value-&old_value from redo_size;
old   1: select value-&old_value from redo_size
new   1: select value-0 from redo_size

   VALUE-0
----------
     45680

SQL> drop table t;

Table dropped.

SQL> select value from redo_size;

     VALUE
----------
     75548

SQL> create table t NOLOGGING as select * from ALL_OBJECTS;

Table created.

SQL> select value-&old_value from redo_size;
old   1: select value-&old_value from redo_size
new   1: select value-75548 from redo_size
VALUE-75548
-----------
      44904

I tried to get the maximum possible difference between the readings, but I have little data, so the difference here is small; in your case it may be much larger.
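The logging attribute can also be switched on an existing table. The sketch below assumes the same table t; note that for conventional INSERTs the NOLOGGING attribute changes little, since ordinary DML is always logged, and the big redo savings come from direct-path operations such as a direct-load insert:

SQL> alter table t nologging;

Table altered.

SQL> insert /*+ APPEND */ into t select * from all_objects;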
Oracle Utilities - LogMiner

Note: LogMiner's memory allocation comes from the PGA, so LogMiner cannot be used on MTS (Multi-Threaded Server) systems: under MTS, each request may be serviced by a different server process, and the data associated with one process is not available to the others. Also, LogMiner output is visible to a single session only; if you want it to be available to you again later, you are left with only two options: either load the data again in the next run, or make it permanent by storing it in some other table.
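Persisting the mined data can be as simple as a CREATE TABLE ... AS SELECT off the view while the LogMiner session is still active; a minimal sketch (the table name is illustrative):

SQL> create table my_logmnr_contents as
  2  select * from v$logmnr_contents;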
SQL> grant select any table to krs;

Grant succeeded.

SQL> grant execute any procedure to krs;

Grant succeeded.

SQL> connect krs/krs@TEST
Connected.

SQL> begin
  2      sys.dbms_logmnr.add_logfile(
  3          '/tmp/KRS/redo01.log',
  4          sys.dbms_logmnr.NEW);
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL> begin
  2      sys.dbms_logmnr.start_logmnr;
  3  end;
  4  /
(This is not mandatory, but LogMiner gives errors when the redo log file being analyzed is still being actively written, so it is best to analyze an inactive or archived log.)
SQL> column sql_undo format a30
SQL> select scn, sql_redo, sql_undo from v$logmnr_contents;

SCN         SQL_REDO                       SQL_UNDO
----------  -----------------------------  -----------------------------
2.1999E+12  commit;
2.1999E+12  delete from "UNKNOWN"          insert into "UNKNOWN"
            ."Objn:58" where "Col[1]" =    ."Objn:58"("Col[1]","Col[2]",
            HEXTORAW('c3024b46') and       "Col[3]","Col[4]") values
            "Col[2]" IS NULL and           (HEXTORAW('c3024b46'),NULL,
            "Col[3]" = HEXTORAW(           HEXTORAW('54445251444154'),
            '54445251444154') and          HEXTORAW('424153494e5354'));
            "Col[4]" = HEXTORAW(
            '424153494e5354') and ROWID
            = 'AAAAA6AABAAAC/CADB';
As can be seen, this output is fairly unreadable, but one can trace out what V$LOGMNR_CONTENTS is trying to say once "Objn:58" is mapped to the actual object name. This can be done manually with the following query:

select OBJECT_NAME from ALL_OBJECTS where DATA_OBJECT_ID = 58;
There is another way of doing this: create a data dictionary file for LogMiner. The prerequisite for creating the data dictionary is at least one directory into which LogMiner can write, so UTL_FILE has to be configured in your init.ora, and the user/role that wants to use LogMiner needs execute privilege on SYS.DBMS_LOGMNR_D. Let's start: add a UTL_FILE parameter to init.ora, namely

    utl_file_dir = (/home/t104css/rnd,/home/t104css/sudhakar)

grant write permission on the specified directories to the Oracle software owner on your system (oracle8i in my case), and run the following PL/SQL block:

SQL> begin
  2      sys.dbms_logmnr_d.build('miner_dictionary.dat',
  3                              '/home/t104css/rnd');
  4  end;
  5  /
LogMnr Dictionary Procedure started
LogMnr Dictionary File Opened
TABLE: OBJ$ recorded in LogMnr Dictionary File
TABLE: TAB$ recorded in LogMnr Dictionary File
TABLE: COL$ recorded in LogMnr Dictionary File
TABLE: SEG$ recorded in LogMnr Dictionary File
TABLE: UNDO$ recorded in LogMnr Dictionary File
TABLE: UGROUP$ recorded in LogMnr Dictionary File
TABLE: TS$ recorded in LogMnr Dictionary File
TABLE: CLU$ recorded in LogMnr Dictionary File
TABLE: IND$ recorded in LogMnr Dictionary File
TABLE: ICOL$ recorded in LogMnr Dictionary File
TABLE: LOB$ recorded in LogMnr Dictionary File
TABLE: USER$ recorded in LogMnr Dictionary File
TABLE: FILE$ recorded in LogMnr Dictionary File
TABLE: PARTOBJ$ recorded in LogMnr Dictionary File
TABLE: PARTCOL$ recorded in LogMnr Dictionary File
TABLE: TABPART$ recorded in LogMnr Dictionary File
TABLE: INDPART$ recorded in LogMnr Dictionary File
TABLE: SUBPARTCOL$ recorded in LogMnr Dictionary File
TABLE: TABSUBPART$ recorded in LogMnr Dictionary File
TABLE: INDSUBPART$ recorded in LogMnr Dictionary File
TABLE: TABCOMPART$ recorded in LogMnr Dictionary File
ERROR -6532 ORA-06532: Subscript outside of limit
begin
*
ERROR at line 1:
ORA-06532: Subscript outside of limit
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 793
ORA-06512: at line 2
NOTE: If you get an error such as the above, there is no need to worry; this is a bug in Oracle 8.1.7 (refer to bug description Doc ID 142354.996 on Oracle Metalink, http://support.oracle.com). To fix it, edit the script at $ORACLE_HOME/rdbms/admin/dbmslmd.sql, changing the line

    TYPE col_desc_array IS VARRAY(38) OF col_description;

to

    TYPE col_desc_array IS VARRAY(700) OF col_description;

and rerun the script.
Now rerun the PL/SQL block above; it creates a dictionary file in the directory specified by your UTL_FILE_DIR parameter and named in your PL/SQL block:

SQL> begin
  2      sys.dbms_logmnr.add_logfile('/tmp/KRS/redo01.log',
  3                                  sys.dbms_logmnr.NEW);
  4  end;
  5  /

PL/SQL procedure successfully completed.

SQL> begin
  2      sys.dbms_logmnr.start_logmnr(
  3          dictFileName => '/home/t104css/rnd/miner_dictionary.dat');
  4  end;
  5  /
PL/SQL procedure successfully completed.

SQL> column sql_redo format a30
SQL> column sql_undo format a30
SQL> select scn, sql_redo, sql_undo from v$logmnr_contents
  2  /

SCN      SQL_REDO                       SQL_UNDO
-------  -----------------------------  -----------------------------
1430518  delete from "SYS"."SOURCE$"    insert into "SYS"."SOURCE$"
         where "OBJ#" = 2794 and        ("OBJ#","LINE","SOURCE")
         "LINE" = 230 and "SOURCE" =    values (2794,230,' ||
         ' || '''''''' ||               '''''''' || db_version_time
         db_version_time || ''''''''    || '''''''' || '','' ');
         || '','' ' and ROWID =
         'AAAABAAABAAAC0hAAC';

1430518  delete from "SYS"."SOURCE$"    insert into "SYS"."SOURCE$"
         where "OBJ#" = 2794 and        ("OBJ#","LINE","SOURCE")
         "LINE" = 231 and "SOURCE" =    values (2794,231,' ||
         ' ||                           ''''''''
Options and Usage

DBMS_LOGMNR_D
The package DBMS_LOGMNR_D contains exactly one procedure, BUILD, which builds the data dictionary file for LogMiner. Using DBMS_LOGMNR_D is very straightforward; BUILD takes two parameters:

- DICTIONARY_FILENAME: the file name under which the ASCII text dictionary is created
- DICTIONARY_LOCATION: the directory in which the file specified by DICTIONARY_FILENAME is created
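Called with named parameters, a BUILD invocation looks as below; this is a minimal sketch assuming the two parameter names match those described above:

SQL> begin
  2      sys.dbms_logmnr_d.build(
  3          dictionary_filename => 'miner_dictionary.dat',
  4          dictionary_location => '/home/t104css/rnd' );
  5  end;
  6  /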
DBMS_LOGMNR
The package DBMS_LOGMNR itself has three routines:

- ADD_LOGFILE: registers the set of log files to be analyzed
- START_LOGMNR: populates the V$LOGMNR_CONTENTS view
- END_LOGMNR: releases all resources allocated by LogMiner processing; it should be called before you exit the session in which you analyzed the log files
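END_LOGMNR takes no arguments; a minimal cleanup call at the end of an analysis session looks like this:

SQL> begin
  2      sys.dbms_logmnr.end_logmnr;
  3  end;
  4  /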
ADD_LOGFILE itself takes two parameters:

- LOGFILENAME: the fully qualified filename of the (archived) redo log file you want to analyze
- OPTIONS: specifies how to add (or remove) this file, using the DBMS_LOGMNR constants DBMS_LOGMNR.NEW, DBMS_LOGMNR.ADDFILE, and DBMS_LOGMNR.REMOVEFILE

Note that every time you wish to analyze a redo log, LogMiner must first be started with SYS.DBMS_LOGMNR.START_LOGMNR.
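To analyze several logs in one pass, register the first with NEW and each subsequent one with ADDFILE; a minimal sketch (the second file name is illustrative):

SQL> begin
  2      sys.dbms_logmnr.add_logfile( '/tmp/KRS/redo01.log',
  3                                   sys.dbms_logmnr.NEW );
  4      sys.dbms_logmnr.add_logfile( '/tmp/KRS/redo02.log',
  5                                   sys.dbms_logmnr.ADDFILE );
  6  end;
  7  /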
LIMITS of LogMiner
LogMiner has some limits one should be aware of: it cannot support all object types. This can be illustrated by a small example, as shown below:

create or replace type myScalarType as object
( x int, y date, z varchar2(25) )
/
create or replace type myArrayType
as varray(25) of myScalarType
/
create or replace type myTableType
as table of myScalarType
/

drop table t;

create table t ( a int, b myArrayType, c myTableType )
nested table c store as c_tbl
/

begin
    sys.dbms_logmnr_d.build( 'miner_dictionary.dat',
                             '/home/t104css/rnd' );
end;
/

alter system switch logfile;

insert into t values ( 1,
    myArrayType( myScalarType( 2, sysdate, 'hello' ) ),
    myTableType( myScalarType( 3, sysdate+1, 'GoodBye' ) ) );

alter system switch logfile;

begin
    sys.dbms_logmnr.add_logfile( '/tmp/KRS/redo01.log',
                                 sys.dbms_logmnr.NEW );
end;
/

begin
    sys.dbms_logmnr.start_logmnr(
        dictFileName => '/home/t104css/rnd/miner_dictionary.dat' );
end;
/
select scn, sql_redo from v$logmnr_contents;

This results in the following output:

SCN      SQL_REDO
-------  --------------------------------------------------
824288   insert into KRS.C_TBL(NESTED_TABLE_ID,X,Y,Z) values
         (HEXTORAW('252cb5fad8784d2ca93eb432c2d357c'),3,
         TO_DATE('23-JAN-2001 16:21:44','DD-MON-YYYY
         HH24:MI:SS'),'GoodBye');
824288   insert into KRS.T(A,B,SYS_NC00300004$) values
         (1,Unsupported Type,
         HEXTORAW('252cb5fad8784e2ca93eb432c2d357b7c'));
As can be seen, the single insert statement we issued was transformed into two different insert statements: one for the nested table and one for the parent table. The results also contain an "Unsupported Type" marker, which is in fact our VARRAY; LogMiner is not capable of reproducing this particular construct. This does not remove all of the usefulness of LogMiner, but it does prevent us from assuming that we can regenerate the exact statements from V$LOGMNR_CONTENTS. Still, LogMiner is a very good tool for analyzing historical trends and queries.
How to estimate the size of redo logs

Redo logs should switch approximately every 30 minutes. If they switch sooner, look at the time between switches (the query after this list helps you do that). For example:

a) If the time between switches is 10 minutes, increase the size of the redo log files at least three times.
b) If the time between switches is 15 minutes, increase the size twice.
c) If the switch time is more than 30 minutes, there is not much activity on the database and the size is fine.
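The switch frequency can be read off V$LOG_HISTORY; a minimal sketch that counts log switches per hour:

SQL> select to_char( first_time, 'YYYY-MM-DD HH24' ) hr,
  2         count(*) switches
  3    from v$log_history
  4   group by to_char( first_time, 'YYYY-MM-DD HH24' )
  5   order by 1;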
NOTE: A checkpoint occurs at every log switch. If a previous checkpoint is already in progress, the checkpoint forced by the log switch overrides it. This necessitates well-sized redo logs, to avoid the unnecessary checkpoints that result from frequent log switches.
Recording of checkpoints can also impact the redo log files. LGWR is responsible for recording checkpoints in the datafile headers; if checkpoints occur frequently and you have many datafiles, system performance can degrade as LGWR becomes overloaded. Oracle suggests enabling the checkpoint process (CKPT) in init.ora to separate the work of recording checkpoints from the other work performed by LGWR:

    CHECKPOINT_PROCESS = true

If you have a very stable environment where the machine rarely crashes, increasing the size of the redo logs will make your checkpoints occur less often (remember, a checkpoint causes the dirty buffers in the SGA to be flushed to disk). After the redo logs are recreated with a larger size, set LOG_CHECKPOINT_INTERVAL greater than your redo log size.
Cannot allocate new log and checkpoint not complete errors

This situation can also be checked by tracking two statistics: 'background checkpoints started' and 'background checkpoints completed'. These two statistics should not differ by more than one; if they do, your database is hanging on checkpoints, and LGWR is unable to continue writing the next transactions until the checkpoints complete. The difference can result from a high frequency of checkpoints, from checkpoints starting but not completing, or from a very slow DBWR (a query for the two statistics is shown after this list). To resolve the problem, the following can be done:

a) Increase the size of the redo logs.
b) Reduce the frequency of checkpoints: increase the LOG_CHECKPOINT_INTERVAL parameter value, set CHECKPOINT_PROCESS=TRUE, and set LOG_CHECKPOINT_TIMEOUT = 0, as this disables checkpointing based on a time interval.
c) Increase the value of DB_BLOCK_CHECKPOINT_BATCH so that DBWR can write buffers to disk at a faster rate.
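A minimal check of the two statistics, using the names as they appear in V$SYSSTAT:

SQL> select name, value
  2    from v$sysstat
  3   where name in ( 'background checkpoints started',
  4                   'background checkpoints completed' );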
[Figure: SGA layout - shared pool, large pool, and block buffers]
Performance issues associated

The main performance catches related to the redo log buffer are listed below:
a) More log switches
b) Slower LGWR
c) High frequency of checkpoints
d) Contention on the redo log buffer
What should be the size of the buffer for optimum performance?
The log buffer is sized based on your transaction volume and redo write size. There is no such thing as a universally optimal value, but for most systems a value between 240K and 512K works fine. Values of 1MB and above do work well for large systems, but note that people arrive at such values only after tuning; they are usually not starting values.
Optimizing the log buffer is not a one-time, single-step process but a continual one. You need to collect statistics (UTLBSTAT/UTLESTAT) at regular intervals and then attempt to tune the buffer. Use a large online redo log file size (e.g., 100MB), although if your application is small and your disks are slow, a 100MB log file would not be a good idea. You could therefore start with a 160K-240K log buffer and then check the statistics for the average redo write size. One reliable school of thought suggests that the log buffer size should be three times the average redo I/O size; you can use this benchmark, which has worked for many installations.
Slower LGWR
Imagine a situation where a slow receiver is bombarded with a large amount of data it is incapable of handling. The receiver will queue everything up and work through it one item at a time, which can take a long time and make your application run slower. Note also that redo logs are written to synchronously and sequentially, and there is only one LGWR process. If your application commits frequently, it keeps LGWR working furiously at writing logs, so the best remedy is to stop the application from overwhelming LGWR, by reducing the commit rate.
There is one more case: when LGWR flushes the buffer, it writes to all member files of the current log group, causing 'log file parallel write' waits if the files are improperly placed. Finally, the init.ora parameter log_buffer can be set to a fairly low value, 160K-260K, so that the buffer is cleaned out in a timely fashion and there are no large dumps to be written at once.
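The LGWR-related waits can be observed in V$SYSTEM_EVENT; a minimal sketch, using the event names as exposed in 8i:

SQL> select event, total_waits, time_waited
  2    from v$system_event
  3   where event in ( 'log file sync', 'log file parallel write' );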
Summary on tuning
A good way to start tuning is to find the average redo write size from V$SYSSTAT: divide 'redo blocks written' by 'redo writes', as in the query below. Use a small log_buffer size to begin with (start with 240K); this ensures that the buffer is cleaned regularly and the writes do not have large chunks to dump. A combination of well-placed and well-sized redo logs, a small log_buffer, and a tuned commit rate can rid you of most problems with redo generation.
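A minimal sketch of that calculation; the result is in log blocks, so multiply by the log block size discussed earlier to get bytes:

SQL> select round( bw.value / w.value, 2 ) avg_redo_write_blocks
  2    from v$sysstat bw, v$sysstat w
  3   where bw.name = 'redo blocks written'
  4     and w.name  = 'redo writes';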
Reference(s)
1. Oracle 8i documentation
2. Oracle Metalink 2.01 (http://support.oracle.com.sg)
3. Oracle Technology Network (http://technet.oracle.com)
4. Ixora web site (http://www.ixora.com.au)
5. Expert One-on-One Oracle - Thomas Kyte (Wrox Publications)
6. Practical Oracle 8i - Jonathan Lewis (Addison Wesley)
Author(s)
Kambhampati Ravi Sudhakar (mailto:[email protected]) is currently working in the technical team of Tata Consultancy Services. He has experience on various fronts of software development gained over the 2.5 years of his software career, and has written this paper primarily from his DBA and technical-team experience in Oracle performance tuning.