Oracle Scene 63 | N. Chandler | Getting Started With Oracle GoldenGate

This document provides an overview of how to set up basic uni-directional data replication between Oracle databases using Oracle GoldenGate. It discusses the key GoldenGate processes involved in replication including the EXTRACT, DATAPUMP, COLLECTOR, and REPLICAT processes. It then provides step-by-step instructions for installing GoldenGate and performing the initial configuration on the source and target databases to get a simple replication process running between the two databases.


OracleScene

SPRING 17
Technology

Getting Started With Oracle GoldenGate
Replicating data between databases in a timely fashion can be a surprisingly tricky
thing to do. There are many ways to replicate data, from home-grown code and
database trigger-based solutions, to Oracle Streams, Materialized Views over Database
Links and several third-party replication products, such as Dell SharePlex and Dbvisit
Replicate. The more timely and resilient you want your solution to be, the harder it
becomes to implement.
Neil Chandler, Chandler Systems

Whatever your reasons for moving data; migrating to the Cloud or to a new system/platform, feeding a Data Warehouse, performing an upgrade with minimal or zero downtime, or even implementing a bespoke Business Continuity system where only a fraction of the data is required for DR, GoldenGate will allow you to implement data movement across platforms and different storage engines easily and quickly, with built-in resilience.

GoldenGate is a platform-independent data extraction, transformation and load (ETL) tool. I have used it to reliably replicate data from Mainframe SQL/MX databases, transform it into Oracle, and modify and transform it again into SQL Server, as well as designing and implementing one-to-one and one-to-many data migrations and feeds on Oracle-centric systems. I have also implemented a multi-master ACTIVE/ACTIVE solution.

Basic uni-directional replication, which will encompass most GoldenGate implementations, is straightforward to set up. The best way to understand GoldenGate is to install it and use it in a test environment. For the purposes of this introduction, I will concentrate on showing how we can set up a simple Oracle-to-Oracle Master-to-Slave replication of an entire schema. This example replication will be done on the Oracle VM VirtualBox "Developer Days" servers, downloadable here:

http://www.oracle.com/technetwork/database/enterprise-edition/databaseappdev-vm-161299.html

Architectural Overview
GoldenGate consists of several processes.

The EXTRACT process connects to the database, captures transactions and writes the transactions to a TRAIL FILE. The TRAIL FILE can either be local to the EXTRACT to be used by a DATAPUMP, or directly sent to a remote destination server.

The DATAPUMP process is used to pick up transactions which have been written to a local TRAIL FILE and sends them to a remote destination server.

46 www.ukoug.org
Technology: Neil Chandler

$ cd /home/oracle/install
$ unzip fbo_ggs_Linux_x64_shiphome.zip
$ cd /home/oracle/install/fbo_ggs_Linux_x64_shiphome/Disk1

$ cat ogg.rsp
#-------------------------------------------------------------------------------
# Do not change the following system generated value.
#-------------------------------------------------------------------------------
oracle.install.responseFileVersion=/oracle/install/rspfmt_ogginstall_response_schema_v12_1_2

# Specify a release and location to install Oracle GoldenGate


INSTALL_OPTION=ORA12c
SOFTWARE_LOCATION=/home/oracle/app/goldengate
INVENTORY_LOCATION=/home/oracle/app/oraInventory
UNIX_GROUP_NAME=oracle


$./runInstaller -silent -responseFile /home/oracle/install/fbo_ggs_Linux_x64_shiphome/Disk1/ogg.rsp
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB. Actual 23701 MB Passed
Checking swap space: must be greater than 150 MB. Actual 2063 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-01-07_01-30-23PM. Please wait ...
You can find the log of this install session at:
/home/oracle/app/oraInventory/logs/installActions2017-01-07_01-30-23PM.log

The installation of Oracle GoldenGate Core was successful.


Please check ‘/home/oracle/app/oraInventory/logs/silentInstall2017-01-07_01-30-23PM.log’ for more details.
Successfully Setup Software.

FIGURE 1

NOTE: The DATAPUMP is an optional step, but it is best practice to use a DATAPUMP. This is to protect against running out of memory should the remote destination have any availability issues. It is technically a specialised EXTRACT process, and runs as an extract.

The COLLECTOR process on the remote destination server writes the transactions to a TRAIL FILE on the destination server. The COLLECTOR process is spawned automatically by the MANAGER when the EXTRACT or DATAPUMP connects to the remote server. It requires no other configuration.

The REPLICAT process reads the TRAIL FILE on the destination server and applies the change records to the destination database.

The MANAGER process looks after all of the other processes, and can start and restart them automatically. It also cleans up old TRAIL FILES and listens on a TCP port (7809) for incoming connections from source EXTRACT and DATAPUMP processes.

The TRAIL FILE is a series of binary files in a canonical format which contains all of the transactions we have captured in the EXTRACT. The TRAIL FILE format is identical, regardless of the source or destination system type. It is written-to by EXTRACT processes, and read by DATAPUMP and REPLICAT processes. You can only name the file using 2 characters, so there's no real opportunity for a meaningful naming standard. The TRAIL FILE is defined with a maximum size which should relate to the number of transactions you are putting through the system. Once the maximum size is reached, or if you stop and start the extract process, a new TRAIL FILE will be started. The file format of the TRAIL FILE is XXnnnnnnnnn, where XX is your 2-character name and nnnnnnnnn is the incrementing sequence number (note: this is restricted to 6 characters pre v12.2 of GoldenGate). The old filename format - using the FORMAT keyword in the EXTRACT - may be required depending upon the platform to which you are replicating data.

Installing GoldenGate
GoldenGate is a straightforward install and is easily done via a response file as there are few parameters to supply. Unzip the downloaded installation file to an appropriate installation directory on each server, set up a response file [ogg.rsp] to identify the SOFTWARE_LOCATION of the GoldenGate install, and perform a silent install. You will need to install GoldenGate on both the source and destination database servers.

Example GoldenGate Install on Target Server (See Figure 1).

Initial Configuration
First of all we need to configure some global settings, directories and the GoldenGate Manager on the source and target servers. We should create a GLOBALS file in the GoldenGate Home installation directory /home/oracle/app/goldengate. The GLOBALS file is read each time we use the GoldenGate Command Interpreter "ggsci" and contains parameters which apply to the entire GoldenGate instance.

-- /home/oracle/app/goldengate/GLOBALS
ggschema goldengate
checkpointtable goldengate.checkpoint_table

We need to ensure all appropriate GoldenGate sub-directories have been created underneath the GoldenGate Home. We can use the "create subdirs" command within "ggsci" to do this. The 3 key subdirectories are:

dirprm – this contains all of the parameter files for the extract, datapump and replicat groups, as well as the parameter file for the manager and any include files

dirrpt – this contains all of the report log files from each group, showing information relating to group and manager processing


dirdat – this is the default directory for all of the trail files produced by the extract and datapump/collector processes. The files in this directory will contain all of the data for every transaction which is replicated, and so we need to ensure it has sufficient size and performance resources.

To complete the basic setup, we need to create the Manager parameter file and start the Manager.

$ ggsci
Oracle GoldenGate Command Interpreter for Oracle

GGSCI 1> create subdirs
Creating subdirectories under current directory /home/oracle/app/goldengate

Parameter files             /home/oracle/app/goldengate/dirprm: created
Report files                /home/oracle/app/goldengate/dirrpt: created
Checkpoint files            /home/oracle/app/goldengate/dirchk: created
Process status files        /home/oracle/app/goldengate/dirpcs: created
SQL script files            /home/oracle/app/goldengate/dirsql: created
Database definitions files  /home/oracle/app/goldengate/dirdef: created
Extract data files          /home/oracle/app/goldengate/dirdat: created
Temporary files             /home/oracle/app/goldengate/dirtmp: created
Credential store files      /home/oracle/app/goldengate/dircrd: created
Masterkey wallet files      /home/oracle/app/goldengate/dirwlt: created
Dump files                  /home/oracle/app/goldengate/dirdmp: created

GGSCI 2> edit param mgr

GGSCI 3> start mgr
Manager started.

GGSCI 4> info mgr
Manager is running (IP port DevDaysSourceGG.7809, Process ID 11713).

The Manager Parameter File looks like this:

--/home/oracle/app/goldengate/dirprm/mgr.prm
PORT 7809 -- listener port
DYNAMICPORTLIST 7810-7830 -- port range for spawned server "collector" processes

--Uncomment the below once everything is configured and running smoothly
--PURGEOLDEXTRACTS /u01/app/gg12/dirprm/AA, USECHECKPOINTS
--AUTOSTART ER *
--AUTORESTART ER *, RETRIES 5, WAITMINUTES 1, RESETMINUTES 60

-- Interval at which problems and errors should be written to the ggserr.log file
DOWNREPORTMINUTES 15
LAGREPORTMINUTES 30 -- Interval at which lag is checked
LAGINFOMINUTES 5 -- Threshold at which lag is reported
LAGCRITICALMINUTES 15 -- Critical threshold reporting value

In the Database – Check the schema to be replicated
There is one key requirement for replicating data: it must be possible to uniquely identify each row of data. There are 3 ways to do this; a Primary Key (PK), a Unique Key (UK), or a combination of up to 38 columns which can be concatenated together to form a unique value. By default this will be the first 38 columns of any given table with no PK or UK, but you can define any 38 columns for this using the KEYCOLS parameter. If one of these conditions cannot be met, you cannot successfully replicate the data. I would recommend that if a table does not contain a unique identifier, and you are able to modify the schema, that a surrogate key column be added and populated. It can be a simple population using a DEFAULT sequence next-value to capture new values automatically (e.g. create sequence <table_seq>; alter table <table> add surrogate_unique_col default <table_seq>.nextval).

There are a few other schema-based problems which may need to be overcome, such as deferred constraints. There is a script within MOS article 1296168.1 which performs a check of your schema for replication compliance and provides some advice and metrics too.

If you need to replicate sequences, you must run the sequence.sql script in the target database. This script is located in the GoldenGate Home directory.

In the Database – Initialization parameters and database settings
There are a few recommended and mandatory settings for the source and target databases:

alter database force logging;
    To ensure that you do not miss any NOLOGGING operations. You may wish to do this at a more granular level, such as tablespace.

alter database add supplemental log data;
    This adds required supplemental logging at a database level. Whilst this is low impact to the redo logs, you may wish to do this at a more granular level within ggsci using add trandata <schema>.<table>.

alter database add supplemental log data (primary key) columns;
alter database add supplemental log data (unique) columns;
    This always includes the primary and/or unique key data in the redo stream to ensure we can identify each row, even if those columns have not been referenced by the transaction.

alter system set undo_retention=28801 scope=both sid='*';
    Keep undo for as long as it may be needed for the longest database transaction (28801 seconds is 8 hours plus 1 second). This should be balanced with Bounded Recovery, which defaults to 4 hours, so at least 8 hours of UNDO is needed. Oracle recommends keeping 1 day of undo if possible.

alter system set enable_goldengate_replication=TRUE scope=both sid='*';
    From Oracle 11.2.0.4 / 12.1.0.2 onwards this is mandatory. It enables access to certain internals for GoldenGate in relation to TDE, LOGREADER access, trigger suppression, deferred constraints and other integration points.

alter system set streams_pool_size=200M scope=spfile sid='*';
    Integrated EXTRACT and REPLICAT use Streams technology. The streams pool should be at least 200M. Use the advisors to determine the optimal size for throughput for your system.

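Collected into one place, the settings above could be applied as a single SQL*Plus script, run as SYSDBA on source and target as appropriate. This is a sketch using exactly the values discussed; streams_pool_size is set in the spfile only, so it takes effect after a restart:

```sql
-- Consolidated GoldenGate database settings (run as SYSDBA)
alter database force logging;
alter database add supplemental log data;
alter database add supplemental log data (primary key) columns;
alter database add supplemental log data (unique) columns;
alter system set undo_retention=28801 scope=both sid='*';
alter system set enable_goldengate_replication=TRUE scope=both sid='*';
alter system set streams_pool_size=200M scope=spfile sid='*';  -- needs a restart
```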

In the Database – GoldenGate accounts
There needs to be a GoldenGate account in both the source and the target databases. The source EXTRACT will be mainly reading from the in-memory REDO stream to capture transactions. The target REPLICAT will be playing those transactions into the database.

The source database consists of a container database called "cdb1" and a pluggable database called "orcl" (this pre-exists in the downloaded VM image).

The target database consists of a container database called "cdb1" and a pluggable database called "orcltarget" (this is a newly created PDB in the downloaded VM image).

In the source container database, we need a common user for the EXTRACT, with GoldenGate-specific and PDB-level privileges:

SYS@cdb1 > create user c##goldengate identified by goldengate;
User created.

SYS@cdb1 > exec dbms_goldengate_auth.grant_admin_privilege('C##GOLDENGATE',container=>'ALL');
PL/SQL procedure successfully completed.

SYS@cdb1 > grant dba to c##goldengate container=all;
Grant succeeded.

In the target database, we need a GoldenGate user within the PDB itself:

SYS@orcltarget > create user goldengate identified by goldengate;
User created.

SYS@orcltarget > exec dbms_goldengate_auth.grant_admin_privilege('GOLDENGATE');
PL/SQL procedure successfully completed.

SYS@orcltarget > grant DBA to goldengate;
Grant succeeded.

NOTE: If you are not using Pluggable Databases, set up both GoldenGate users the same as the target GoldenGate user in the PDB.

Setup the EXTRACT and DATAPUMP
Before we synchronise the data between source and target, we should configure and start the Extract process. This will ensure data overlap and means we will not miss any transactions. We need to create it, configure TRAIL FILES, register and start it:

$ ggsci
Oracle GoldenGate Command Interpreter for Oracle

GGSCI 1> edit param e_hr

GGSCI 2> dblogin userid c##goldengate, password goldengate
Successfully logged into database CDB$ROOT.

GGSCI 3> add extract e_hr, integrated tranlog, begin now
EXTRACT (Integrated) added.

GGSCI 4> add exttrail ./dirdat/AA, extract E_HR, megabytes 20
EXTTRAIL added.

GGSCI 5> register extract e_hr database container (orcl)
2017-01-08 15:34:05 INFO OGG-02003 Extract E_HR successfully registered with database at SCN 6252571.

We have ensured that all transactions after SCN 6252571 will be captured to the TRAIL FILES.

The EXTRACT parameter file looks like this:

-- /home/oracle/app/goldengate/dirprm/e_hr.prm

-- Setup Environment Variables so we login to the database correctly
SETENV (ORACLE_HOME='/home/oracle/app/oracle/product/12.1.0/dbhome_1')
SETENV (ORACLE_SID='cdb1')

-- Name the extract
EXTRACT e_hr

-- Login Details. These can be encrypted.
USERID c##goldengate, PASSWORD goldengate

-- Add our standard reporting options for every extract and replicat
include /home/oracle/app/goldengate/dirprm/i_report.prm

-- Name the Trail File. We only get 2 characters!
EXTTRAIL ./dirdat/AA

-- Makes trail files smaller and helps with primary key updates
UPDATERECORDFORMAT COMPACT

-- We want to replicate DDL too
DDL INCLUDE MAPPED

-- And report all DDL operations in full in the report logs.
DDLOPTIONS REPORT

-- Finally list the objects we are replicating: pdb.schema.object
-- Wildcard matching is OK as long as we aren't doing any data
-- transformation in this step.
SEQUENCE orcl.hr.*;
TABLE orcl.hr.*;

For the DATAPUMP, we need to create it, configure the remote TRAIL FILE, and start it:

GGSCI 1> info all
Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     E_HR        00:00:10      00:00:06

GGSCI 2> edit param p_hr

GGSCI 3> add extract p_hr, exttrailsource /home/oracle/app/goldengate/dirdat/AA
EXTRACT added.

GGSCI 4> add rmttrail /home/oracle/app/goldengate/dirdat/AA, extract p_hr, megabytes 20
RMTTRAIL added.

GGSCI 5> start p_hr
Sending START request to MANAGER ...
EXTRACT P_HR starting

GGSCI 6> info all
Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     E_HR        00:00:02      00:00:01
EXTRACT     RUNNING     P_HR        00:00:00      00:00:08
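As a side note, GGSCI steps like the DATAPUMP creation above can be kept in an obey file and replayed with GGSCI's OBEY command, which makes the setup repeatable. A sketch (the file name dirprm/add_p_hr.oby is my own choice, not from this article):

```
-- dirprm/add_p_hr.oby: replayable DATAPUMP setup
-- run from within GGSCI with: obey dirprm/add_p_hr.oby
add extract p_hr, exttrailsource /home/oracle/app/goldengate/dirdat/AA
add rmttrail /home/oracle/app/goldengate/dirdat/AA, extract p_hr, megabytes 20
start p_hr
```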


The DATAPUMP parameter file looks like this:

-- /home/oracle/app/goldengate/dirprm/p_hr.prm

-- Setup Environment Variables so we login to the database correctly
SETENV (ORACLE_HOME='/home/oracle/app/oracle/product/12.1.0/dbhome_1')
SETENV (ORACLE_SID='cdb1')

-- Name the datapump. Note that it's really a special type of extract
EXTRACT p_hr

-- Add our standard reporting options for every extract and replicat
include /home/oracle/app/goldengate/dirprm/i_report.prm

-- Specify where the trail file is being transmitted-to
RMTHOST DevDaysTargetGG, MGRPORT 7809
RMTTRAIL /home/oracle/app/goldengate/dirdat/AA

-- If you are not doing any transformation in the datapump
-- this parameter increases performance by up to 30%
PASSTHRU

-- This is needed to capture any data issues.
-- It is useful when debugging problems.
DISCARDFILE ./dirrpt/p_hr.dsc, PURGE

-- And capture the relevant objects.
SEQUENCE orcl.hr.*;
TABLE orcl.hr.*;

Initial Loading and Data Synchronisation
This can be the hardest part of any replication. Seeding the target to match the source can be difficult, especially across DB formats. This is possible within GoldenGate using a "Special Replicat", which is beyond the scope of this introduction. Special Replicats can be slow to run with large data volumes but may be your best option if replicating between different database types. With Oracle-to-Oracle, my two preferred initialisation methods are to either:

• Create a Physical Standby, start the extract, stop Data Guard, and force open the standby R/W, noting the V$DATABASE.STANDBY_BECAME_PRIMARY_SCN
• Use Export Datapump to extract the source data as of a particular SCN

I will be using the Export Datapump method here as I wish to rename the schema in the target system. Note the use of FLASHBACK_SCN in the Export Datapump to fix the point in time of the data extraction. We will use this SCN when starting the playback of transactions in the target later.

SYS @ cdb1 > select current_scn from v$database;
CURRENT_SCN
-----------
    6302652

$ expdp c##goldengate/goldengate@orcl directory=gg dumpfile=gg.dmp logfile=gg_exp.log schemas=hr flashback_scn=6302652

Copy the .dmp file to the target and import, with relevant re-mappings:

$ impdp goldengate/goldengate directory=gg dumpfile=gg.dmp logfile=gg_imp.log remap_schema=hr:hr_target remap_tablespace=users:hr

If you are unable to ensure definitive extract points, such as when you are using flat file extract and load to perform an initial population, it may be necessary to use the HANDLECOLLISIONS parameter in the REPLICAT. This temporary parameter may be used to get your REPLICAT started should you have transactions in your TRAIL FILE which are already in the database, and it endeavours to align your source and target using some sensible rules to handle clashes, e.g. if a record to be deleted does not exist, just ignore the fact it is not there as it will have the same outcome as if we deleted it. However, HANDLECOLLISIONS does not cope with all potential scenarios and it should be switched off [NOHANDLECOLLISIONS] as soon as possible after initial REPLICAT synchronisation, otherwise it may slowly corrupt the target dataset.

Setup the REPLICAT
First of all we need to check that the GoldenGate DATAPUMP is transmitting changes to the target server by looking for the TRAIL FILE. Running multiple ls -l commands will show if transactions are being transmitted as the TRAIL FILE grows:

$ ls -l /home/oracle/app/goldengate/dirdat
total 4
-rw-r-----. 1 oracle oracle 2227 Jan 11 14:22 AA000000000

$ ls -l /home/oracle/app/goldengate/dirdat
total 4
-rw-r-----. 1 oracle oracle 2690 Jan 11 14:23 AA000000000

Excellent, the TRAIL FILE exists and is growing! We can now use this to start at the correct SCN, playing transactions into the target database. In Oracle, we refer to the "SCN" or System Change Number to keep track of transactional changes. GoldenGate refers to the "CSN" or Commit Sequence Number as it needs to cope with multiple formats of CSN from different source databases. These terms can be used interchangeably.

For the REPLICAT we need to create a checkpoint table (used by all replicats to keep track of where they are), register the replicat and start it after the SCN we used for the Export Datapump:

GGSCI 1> info all
Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING

GGSCI 2> edit param r_hr

GGSCI 3> dblogin userid goldengate password goldengate
Successfully logged into database ORCLTARGET.

GGSCI 4> add checkpointtable
No checkpoint table specified. Using GLOBALS specification (goldengate.checkpoint_table)...
Logon catalog name ORCLTARGET will be used for table specification ORCLTARGET.goldengate.checkpoint_table.
Successfully created checkpoint table ORCLTARGET.goldengate.checkpoint_table.

GGSCI 5> add replicat r_hr integrated exttrail /home/oracle/app/goldengate/dirdat/AA
REPLICAT (Integrated) added.

GGSCI 6> info all
Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
REPLICAT    STOPPED     R_HR        00:00:00      00:05:03

GGSCI 7> start r_hr aftercsn 6302652
Sending START request to MANAGER ...


REPLICAT R_HR starting

GGSCI 8> info all
Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
REPLICAT    RUNNING     R_HR        00:00:00      00:00:02

The REPLICAT parameter file looks like this:

-- /home/oracle/app/goldengate/dirprm/r_hr.prm

-- Setup Environment Variables so we login to the database correctly
-- NOTE the TWO_TASK to connect to the correct PDB directly
SETENV (ORACLE_HOME='/home/oracle/app/oracle/product/12.1.0/dbhome_1')
SETENV (TWO_TASK='orcltarget')

-- Name the replicat
REPLICAT r_hr

-- Login to the DB
USERID goldengate PASSWORD goldengate

-- Add our standard reporting options for every extract and replicat
include /home/oracle/app/goldengate/dirprm/i_report.prm

-- Controlling REPLICAT memory use and parallelism
DBOPTIONS INTEGRATEDPARAMS (max_sga_size 200, parallelism 1)

-- Key file used to show failed records. Needed when troubleshooting problems.
DISCARDFILE ./dirrpt/p_orcl2.dsc, PURGE

-- This is how we map the tables across from source to target
MAP orcl.hr.*, TARGET hr_target.*;

And the standard reporting i_report.prm file we have included in every group parameter file looks like this:

-- configure reporting to provide throughput stats
REPORT AT 23:59
REPORTROLLOVER AT 00:01 ON MONDAY
REPORTCOUNT EVERY 30 MINUTES, RATE
REPORTCOUNT EVERY 100000 RECORDS, RATE

And What has GoldenGate been Doing?
We can use the stats command to see how much traffic has been going through each group. Here we look at what went through the EXTRACT and the REPLICAT since they started.

EXTRACT e_hr STATS

GGSCI 1> stats e_hr, total
Sending STATS request to EXTRACT E_HR ...
Start of Statistics at 2017-01-11 15:37:00.

Output to ./dirdat/AA:

Extracting from ORCL.HR.JOBS to ORCL.HR.JOBS:
*** Total statistics since 2017-01-08 15:04:01 ***
Total inserts          4.00
Total updates          0.00
Total deletes          0.00
Total discards         0.00
Total operations       4.00

Extracting from ORCL.HR.JOB_SUBTASKS to ORCL.HR.JOB_SUBTASKS:
*** Total statistics since 2017-01-08 15:04:01 ***
Total inserts          93605.00
Total updates          0.00
Total deletes          0.00
Total discards         0.00
Total operations       93605.00
End of Statistics.

REPLICAT r_hr STATS

GGSCI 1> stats r_hr, total
Sending STATS request to REPLICAT R_HR ...
Start of Statistics at 2017-01-11 15:31:41.

Integrated Replicat Statistics:
Total transactions             4.00
Redirected                     0.00
DDL operations                 0.00
Stored procedures              0.00
Datatype functionality         0.00
Event actions                  0.00
Direct transactions ratio      75.00%

Replicating from ORCL.HR.JOBS to ORCLTARGET.HR_TARGET.JOBS:
*** Total statistics since 2017-01-11 15:04:59 ***
Total inserts          3.00
Total updates          0.00
Total deletes          0.00
Total discards         0.00
Total operations       3.00

Replicating from ORCL.HR.JOB_SUBTASKS to ORCLTARGET.HR_TARGET.JOB_SUBTASKS:
*** Total statistics since 2017-01-11 15:04:59 ***
Total inserts          93605.00
Total updates          0.00
Total deletes          0.00
Total discards         0.00
Total operations       93605.00
End of Statistics.

You may notice that there is one less insert in HR_TARGET.JOBS than were extracted. This is because the EXTRACT was started before the Export Datapump extracted the full schema. Between the starting of the EXTRACT and the Export Datapump, there was one insert transaction in the HR.JOBS table, but this was ignored in the REPLICAT as we started it from the SCN at the point of the Export Datapump, not the point that the initial EXTRACT started.

Can you Prove That? Sure.
The EXTRACT was started at SCN 6252571. Most rows were already in place at this time, having an SCN of 6165829.

Row "IT_DBA" was inserted at SCN 6284117 and was therefore captured by the EXTRACT, but not needed as the Export Datapump was executed with SCN 6302652.

Rows "IT_SDBA", "IT_VSDBA" and "IT_GGDBA" were inserted at SCN 6310258 and were therefore captured by the EXTRACT and used by the REPLICAT.
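The rule at work here - the REPLICAT applies only changes committed after the CSN given at start time - can be expressed as a tiny predicate. This is a shell illustration of the logic using this example's SCNs, not a GoldenGate utility:

```shell
# Sketch of the "aftercsn" rule: a change is applied by the replicat
# only if it was committed after the CSN supplied at start time.
AFTERCSN=6302652   # the Export Datapump FLASHBACK_SCN from this example

applied_by_replicat() {
  if [ "$1" -gt "$AFTERCSN" ]; then echo yes; else echo no; fi
}

applied_by_replicat 6284117   # IT_DBA insert: captured by the extract, skipped -> no
applied_by_replicat 6310258   # IT_SDBA/IT_VSDBA/IT_GGDBA inserts -> yes
```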


1* select ora_rowscn, job_id, job_title, min_salary, max_salary from jobs order by 2;

ORA_ROWSCN JOB_ID     JOB_TITLE                           MIN_SALARY MAX_SALARY
---------- ---------- ----------------------------------- ---------- ----------
   6165829 AC_ACCOUNT Public Accountant                         4200       9000
   6165829 AC_MGR     Accounting Manager                        8200      16000
   6165829 AD_ASST    Administration Assistant                  3000       6000
   6165829 AD_PRES    President                                20000      40000
   6165829 AD_VP      Administration Vice President            15000      30000
   6165829 FI_ACCOUNT Accountant                                4200       9000
   6165829 FI_MGR     Finance Manager                           8200      16000
   6165829 HR_REP     Human Resources Representative            4000       9000
   6284117 IT_DBA     Database Admin                            4000      10000
   6310258 IT_GGDBA   GoldenGate DBA                            3000       9000
   6165829 IT_PROG    Programmer                                4000      10000
   6310258 IT_SDBA    Senior DBA                                8000      20000
   6310258 IT_VSDBA   Very Senior DBA                           9999      25000
   6165829 MK_MAN     Marketing Manager                         9000      15000
   6165829 MK_REP     Marketing Representative                  4000       9000
   6165829 PR_REP     Public Relations Representative           4500      10500
   6165829 PU_CLERK   Purchasing Clerk                          2500       5500
   6165829 PU_MAN     Purchasing Manager                        8000      15000
   6165829 SA_MAN     Sales Manager                            10000      20000
   6165829 SA_REP     Sales Representative                      6000      12000
   6165829 SH_CLERK   Shipping Clerk                            2500       5500
   6165829 ST_CLERK   Stock Clerk                               2000       5000
   6165829 ST_MAN     Stock Manager                             5500       8500

Conclusion
At a basic level, GoldenGate is very straightforward to implement but you need to take care. It is highly configurable and programmable, and a badly configured set of transformations will corrupt your target dataset.

You don't "switch on" GoldenGate, like you switch on Data Guard. It needs to work with the application to produce the best outcomes.

ABOUT THE AUTHOR

Neil Chandler
Data Architect, Chandler Systems
Neil has been working in IT since 1988, focused primarily within Oracle, SQL Server and their related server technologies: UNIX, Linux, Windows and SAN. He has been a successful technical lead for FTSE 100 companies, with Development and Production Systems experience gained in the Financial, Real-Time Logistics, Property and Accountancy sectors. Neil is also an Oracle ACE and is Chairman of the UKOUG RAC, Cloud Infrastructure and Availability SIG, and is a regular presenter at Oracle conferences around the world.

Blog: https://chandlerdba.com
www.linkedin.com/in/nchandler
@ChandlerDBA
