Data Loading: Sybase Database to Oracle Database
Contents
Introduction
Problem Definition
Implementation
BCP Export
bcp_tables_nm.ksh
bcp_tables_m.ksh
SQL Loader Load
Benefits
Conclusion
Appendix A: bcp_tables.ksh
Appendix B: setup.env
Appendix C: bcp_tables_nm.ksh
Appendix D: bcp_tables_m.ksh
Appendix E: keywords.lst
Appendix F: sqlldr_tables_trim.ksh
Introduction
Data migration is an important part of application migration. Some of the common drivers for data migrations are:
Data load across various applications with different data source requirements
After business impact analysis, data mapping, and DDL conversion, the next step is to move the data itself from one platform to the other. Data migrations can be very challenging, particularly when data is migrated from heterogeneous data sources and a large number of tables is involved.
This whitepaper describes the process and tasks involved in simplifying the data migration from Sybase to Oracle.
Problem Definition
There are many ways to perform a data load, including external vendor tools. However, vendor tools can have issues such as:
Data structure incompatibility
Licensing cost
Insufficient support between the external tool vendor and the database vendor
If the databases' own native tools are used for the data load, the aforementioned problems do not arise. However, another major issue can be the compatibility and coherence between the data load tools of the source and target database vendors. One option in such cases is to follow the below steps during the data load:
1. Extract the data from the source database using its data load utility and export it into a data file.
2. Load the data from the data file into the target database using the target database's data load utility.
For data loads from Sybase to Oracle, the same steps translate as below:
1. Extract data from a Sybase table using the BCP utility in a predefined format.
2. Create a control file for the corresponding table in Oracle as per the desired load conditions, the format of the data file created by the BCP utility, and the table structure details.
3. Run SQL Loader to load the data from the data file, using the control file, into the target table in Oracle.
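As a minimal illustration of these steps, the two commands below use hypothetical database, table, user, and file names; the actual commands used in this effort are generated by the scripts described later in this paper.

## Step 1: export the Sybase table to a flat file using BCP in character mode.
bcp "mydb.dbo.customer" out /tmp/customer.dat -c -U syb_user -P "syb_pwd" -S SYB_SERVER
## Step 2: load the flat file into the matching Oracle table using SQL Loader
## and a control file that describes the flat-file format.
sqlldr ora_user/ora_pwd@ORADB control=/tmp/customer.ctl data=/tmp/customer.dat log=/tmp/customer.log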
Some of the major challenges observed in the above process are stated below:
1. Defining the format of the flat file which can be written by the BCP utility and read by the SQL Loader utility. It should be kept in mind that the format, especially the row and column delimiters, must remain unique and must not get mixed up with the data itself.
2. Defining the parameters for the BCP extract so that the load on the Sybase server is optimized.
3. Sybase stores trailing whitespace at the end of character data if the character fields have additional bytes beyond the actual data. Sometimes, applications also insert records with leading and trailing spaces along with the character data. However, when character data is queried, Sybase ignores the trailing whitespace and includes the records with trailing spaces in its result set, while Oracle expects the data to match exactly to be included in the query results. It is important to note that leading whitespace is treated the same way in both Sybase and Oracle.
4. When the BCP utility exports data with the MONEY, SMALLMONEY, and MONEYN data types from Sybase tables, it extracts the data to 2 decimal places only and rounds off any extra decimals. Since the data is extracted to only 2 decimal places, SQL Loader loads only 2 decimal places. However, when queried in Sybase, the data is accurate to 4 decimal places. Hence, there is a discrepancy in the data between Sybase and Oracle.
5. The Sybase user which is used to extract data might have a timeout limit on the duration for which a query can run. This could lead to a sudden disconnect between the BCP utility and the database while extracting data.
6. Defining the column formats and loading conditions in the control file.
7. Automation of the complete data load process with minimal manual intervention.
8. Identifying and resolving ad hoc errors due to differences in data storage and data formats. These should be rare since data structure issues are assumed to be resolved beforehand.
Depending on the number of tables to be loaded, the automation can be designed at one of two levels:
1. If the number of tables to load is small, the automation can be at the table level, wherein the load parameters are pre-established and the load is run for each table.
2. If the number of tables to load is very high, the automation can be at the database or schema level, wherein the BCP exports run in parallel with the SQL Loader loads and the SQL Loader kicks off as soon as the BCP export is complete for a table.
In this case, if a BCP export takes long for a particular table, it does not hold the SQL Loader back from starting the load for tables with less data, thereby relieving the user from having to sort the BCP exports according to table size.
Implementation
In this paper, the discussion is mainly based on automating the data loads from Sybase to Oracle when the number of tables is extremely high, to the extent of migrating entire Sybase databases. The basic analysis around the data loads culminated in the below decisions on the Sybase side:
1. Regarding the format of the data file, it was deemed preferable to use character format for the BCP export (-c option in the BCP command), since the native format is not in a readable form and cannot be understood by SQL Loader, or by BCP itself in case data is imported back into Sybase.
2. The default delimiters of BCP are \t (TAB) for the column delimiter and \n (newline) for the row delimiter. Both of these characters interfered with the actual data present in the Sybase database. Since this could result in error-prone and sometimes invalid data loads, the delimiters <EOFD> and <EORD> were used as the column (-t option) and row (-r option) delimiters for the BCP export. The same delimiters were used in the control file definition for the Oracle loads.
3. The batch size option (-b) of the BCP command offers a way to specify commit intervals while inserting data into a table in Sybase. Though this has no effect on exporting data using BCP, a value of 50000 was decided upon so as to maintain a common BCP command for export and import.
4. In order to help the Sybase server handle parallel exports of tables, it was decided to run 25-50 BCP exports at a time for a given database. Depending on the size of the tables in a particular database, the number can be changed to suit the need.
5. In the case of tables having more than 1 billion records, or having more than 10 million records with over 30-40 columns, the use of one BCP thread to export the whole data into a single data file seemed very load intensive. Hence, in such cases parallel BCP threads were used to export the data into multiple data files (see the sketch after this list). In this scenario, it is necessary to specify the -F option (the number of the first row in the table at which the export starts) and the -L option (the number of the last row in the table which is to be exported). This helps in quicker exports and reduces the chances of very large data files. Further, the multiple data files thus created can again be loaded in parallel into Oracle.
6. In order to counter the issue with the MONEY, SMALLMONEY, and MONEYN data types, such tables were first identified by querying the system tables of Sybase. Since it is known that the data in columns with these data types is accurate up to 4 decimals, views were created over these tables, converting the data type of the corresponding columns to NUMERIC(19,4). After creating the views, the data was exported from the views, which made it possible to get the data up to 4 decimal places from the Sybase tables, thereby eliminating the data validation issues.
7. There is also an option to specify the interfaces file (-I option) and the Sybase server name (-S option) from which the BCP export is to take place. The interfaces file contains the connection information for the various Sybase servers, and the Sybase server name should be specified exactly as present in the interfaces file for the BCP utility to make the connection.
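The sketch below illustrates the parallel-thread approach from point 5. It is only an illustration: the table name, data file paths, and row-count boundaries are hypothetical, the -t, -r, and -b settings follow the decisions above, and ${uid}, ${passwd}, ${server}, and ${interfaces_file} are assumed to come from the setup.env file shown in Appendix B.

## Export a hypothetical 30-million-row table in three 10-million-row chunks,
## with the three BCP threads running in parallel.
bcp "mydb.dbo.big_table" out /data/mydb/big_table_1.dat -c -t "<EOFD>" -r "<EORD>" -b 50000 -F 1 -L 10000000 -U ${uid} -P "${passwd}" -S${server} -I ${interfaces_file} >/data/mydb/bcp_big_table_1.log &
bcp "mydb.dbo.big_table" out /data/mydb/big_table_2.dat -c -t "<EOFD>" -r "<EORD>" -b 50000 -F 10000001 -L 20000000 -U ${uid} -P "${passwd}" -S${server} -I ${interfaces_file} >/data/mydb/bcp_big_table_2.log &
bcp "mydb.dbo.big_table" out /data/mydb/big_table_3.dat -c -t "<EOFD>" -r "<EORD>" -b 50000 -F 20000001 -L 30000000 -U ${uid} -P "${passwd}" -S${server} -I ${interfaces_file} >/data/mydb/bcp_big_table_3.log &
wait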
On the Oracle side, the analysis led to the below decisions:
1. The Direct Path Load option is used to load data into Oracle. Since this requires setting up the NLS parameters, the NLS_LANG environment variable is also set. The load is run in unrecoverable mode to speed up the data load.
2. The commit point for the data load is set at 1000000 rows. This reduces the load on the server, as it need not keep the whole incoming data in the buffer before a commit.
3. Since the data load is performed at the database level with a lot of tables being loaded, index maintenance is skipped during the data load into Oracle. It is performed after the completion of all data loads, at the database or index level as per convenience.
4. During the data load, several constraints are disabled so as to aid faster loads. These constraints are re-enabled at the end of the data load. The success or failure of the enabling process depends on the data consistency, and the status is indicated in the SQL Loader log file.
5. The field and row delimiters are specified as <EOFD> and <EORD> to remain consistent with the Sybase BCP exports.
6. Column formatting for the various data types is listed below:

Column Data type        Column Format
TIMESTAMP(3)            TIMESTAMP "Mon dd yyyy HH12:MI:SS:FF3AM"
CLOB                    CHAR(2147483647) TERMINATED BY {'<EORD>'|'<EOFD>'}
DATE                    DATE "Mon dd yyyy HH12:MIAM"
VARCHAR2, CHAR          TRANSLATE (:<COLUMN_NAME in lower case>,'1,','1')
Others (NUMBER, etc.)   (no special format)
The formatting for the VARCHAR2 data type takes care of the trailing-spaces issue that was encountered between Sybase and Oracle.
7. The load is done in INSERT mode. Hence, it is imperative to check that the target table is empty before SQL Loader loads data into it. If the target table is not empty, the SQL Loader run will fail.
8. In order to help the Oracle server handle parallel loading into tables, it was decided to run 200-300 SQL Loader loads at a time for a given database. Depending on the size of the tables in a particular database, the number can be changed to suit the need.
9. There is also an option to specify the path of the TNSNAMES.ORA file and the Oracle database name to which SQL Loader loads. TNSNAMES.ORA contains the connection information for the various Oracle databases, and the Oracle database name should be specified exactly as present in TNSNAMES.ORA for the SQL Loader utility to make the connection.
The decisions mentioned in points 1-7 are applied during the control file creation for each Oracle table; a sample control file reflecting them is sketched below. The BCP and SQL Loader scripts are started independently of each other. The SQL Loader script is written in such a way that it waits for the successful completion of the BCP export for a particular table before starting the load into Oracle.
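As an illustration of decisions 1-7, a control file along the following lines could be generated for a hypothetical table CUSTOMER. The table name, column names, and file name are placeholders and are not taken from the actual scripts; in practice the control files are produced by sqlldr_tables_trim.ksh (Appendix F).

## Hypothetical control file for a table CUSTOMER (illustrative only).
cat > CUSTOMER.ctl <<'EOF'
OPTIONS (DIRECT=TRUE, ROWS=1000000, SKIP_INDEX_MAINTENANCE=TRUE)
UNRECOVERABLE
LOAD DATA
INFILE 'CUSTOMER.dat' "str '<EORD>'"
INSERT
INTO TABLE CUSTOMER
FIELDS TERMINATED BY '<EOFD>'
TRAILING NULLCOLS
(
  CUST_ID,
  CUST_NAME     CHAR(100),
  CREATED_DT    TIMESTAMP "Mon dd yyyy HH12:MI:SS:FF3AM",
  NOTES         CHAR(2147483647)
)
EOF

The ROWS and SKIP_INDEX_MAINTENANCE options correspond to the commit point and index-maintenance decisions above, while the INSERT clause reflects the requirement that the target table be empty.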
BCP Export
The export of data from Sybase tables to data files using the BCP utility is primarily kicked off by the bcp_tables.ksh script. The script takes the name of the database for which the export is to be done as a parameter. The detailed script is given in Appendix A. It uses tables_syb.lst as the list of tables for which data should be exported, and the setup.env file for the environment settings, a typical example of which is present in Appendix B. As far as the BCP utility is concerned, the variables of importance in the environment setup file are:
a) uid: The user using which the BCP utility logs into the Sybase database.
b) passwd: The password of the above user.
c) server: The Sybase server name, specified exactly as present in the interfaces file.
d) interfaces_file: The path of the interfaces file containing the connection information for the Sybase servers.
e) data_dir: The directory where the data files and BCP logs are stored. For each database, a directory by the database name is made in this location, and the data files and BCP logs for the particular database are placed in the corresponding folder.
f) max_proc_syb: The maximum number of BCP exports that are run in parallel.
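A typical invocation, with an illustrative database name and assuming setup.env has been sourced so that ${data_dir} is set, would look as follows:

## Kick off the BCP exports for the Sybase database "mydb" in the background.
nohup ./bcp_tables.ksh mydb &
## Once the exports complete, the status files show which tables succeeded or failed.
ls ${data_dir}/mydb/*.ready ${data_dir}/mydb/*.error 2>/dev/null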
bcp_tables_nm.ksh
The bcp_tables_nm.ksh script creates one BCP export file for each table in tables_syb.lst in the temp_syb folder and runs them in parallel as per the settings in the setup.env file. For tables whose names appear in keywords.lst (Appendix E), the script renames them to <table name>_1. This is done because it was decided not to have any object names in Oracle that clash with the listed keywords; all such names are renamed as <NAME>_1. The data file is named <TABLE NAME>.dat and the BCP log file is named bcp_<TABLE NAME>.log in the <data_dir>/<Sybase DB name> folder. The data_dir parameter is obtained from the setup.env file. Depending on the status of the BCP export, the BCP log file is renamed to:
1. <TABLE NAME>.ready in case the BCP export is successful.
2. <TABLE NAME>.error in case the BCP export is not successful.
Based on the information in the log file for the errored exports, the BCP export is rectified and run again by the user.
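A minimal sketch of this status rename is shown below. The grep-based error check is an assumption made for illustration only, and ${tab} and ${data_dir_new} are assumed to be set inside the per-table loop; the actual logic is in the bcp_tables_nm.ksh script (Appendix C).

## After the BCP export of ${tab} finishes, flag the result by renaming its log.
## Treat any "Msg" or "error" text in the BCP log as a failed export (illustrative check).
if grep -iq -e "Msg" -e "error" ${data_dir_new}/bcp_${tab}.log
then
    mv ${data_dir_new}/bcp_${tab}.log ${data_dir_new}/${tab}.error
else
    mv ${data_dir_new}/bcp_${tab}.log ${data_dir_new}/${tab}.ready
fi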
bcp_tables_m.ksh
The bcp_tables_m.ksh script creates one BCP export file for each of the views created by the bcp_tables.ksh script. If the view is created in a database called viewdb, the data file and the BCP log file for the export are created in <data_dir>/<viewdb>_<Sybase DB> as <VIEW NAME>.dat and bcp_<VIEW NAME>.log. Depending on the success or failure of the BCP export, the BCP log file is renamed to <VIEW NAME>.ready or <VIEW NAME>.error as specified earlier.
As soon as the BCP export is complete for the view and the .ready or .error file is created, the bcp_tables.ksh script moves the data file and the log file to the <data_dir>/<Sybase DB> directory as <TABLE NAME>.dat and <TABLE NAME>.{error|ready}, using the mapping created in the change_tnames.lst file.
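A minimal sketch of this move step is shown below, assuming change_tnames.lst holds comma-separated <table name>,<view name> pairs (as written by bcp_tables.ksh in Appendix A) and that ${data_dir}, ${db}, and the view database name follow the conventions described above.

## Move the view-level data and status files back under the table names.
while IFS=, read tab vw
do
    for ext in ready error
    do
        if [ -e ${data_dir}/viewdb_${db}/${vw}.${ext} ]
        then
            mv ${data_dir}/viewdb_${db}/${vw}.dat    ${data_dir}/${db}/${tab}.dat
            mv ${data_dir}/viewdb_${db}/${vw}.${ext} ${data_dir}/${db}/${tab}.${ext}
        fi
    done
done < change_tnames.lst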
SQL Loader Load
The load of the exported data into Oracle is driven by the sqlldr_tables_trim.ksh script, given in Appendix F. The script can be run in three ways:
1. Only create the control files and SQL Loader scripts. The syntax for running the SQL Loader with this option is as follows:
nohup ./sqlldr_tables_trim.ksh -s <Oracle schema name> -C &
2. Only run the SQL Loader scripts. The syntax for running the SQL Loader with this option is as follows:
nohup ./sqlldr_tables_trim.ksh -s <Oracle schema name> -L &
3. Create the control files and SQL Loader scripts and then run the SQL Loader scripts. The syntax for running the SQL Loader with this option is as follows:
nohup ./sqlldr_tables_trim.ksh -s <Oracle schema name> &
The script uses setup.env, the environment settings file, for the following parameters:
a) ORA_SERVER: The Oracle database name (TNS alias) to which SQL Loader connects.
b) ORA_USR: The user using which SQL Loader logs into the Oracle database.
c) ORA_PWD: The password of the above user.
d) ORA_DATA_DIR: The path wherein SQL Loader looks for the data file and the successful-completion flag of the BCP log file. For each database, a directory by the database name is made in this location and SQL Loader looks for the files in the corresponding database folder.
e) CTL_FILE_DIR: The path wherein SQL Loader creates the control files. For each database, a directory by the database name is made in this location and the control files for the particular database are present in the corresponding folder.
f) LOG_FILE_DIR: The path wherein SQL Loader creates the SQL Loader log files. For each database, a directory by the database name is made in this location and the log files for the particular database are present in the corresponding folder.
g) BAD_FILE_DIR: The path wherein SQL Loader creates the bad files to which the rejected records of the data load are sent. For each database, a directory by the database name is made in this location and the bad files for the particular database are present in the corresponding folder.
h) NLS_LANG: The NLS_LANG parameter for the SQL Loader is specified here.
i) TNS_ADMIN: The path of the directory containing the TNSNAMES.ORA file.
j) max_proc_ora: The maximum number of SQL Loader loads that are run in parallel.
Once the script is started, as long as the -L option is not specified, it starts off by creating control files in the <CTL_FILE_DIR>/<DB name> folder and SQL Loader scripts in the temp_ora folder. If the -C option is specified, the script stops after this point.
In case the -C option is not specified, the script starts to look for *.ready files in the <ORA_DATA_DIR>/<DB Name> directory. Typically, this directory points to the same location as <data_dir>/<Sybase DB Name> from the BCP exports. Once it finds a file with the .ready extension whose name matches a table name present in tables_ora.lst, it does the following:
1)
2)
3)
4)
2)
3)
Change the extension of the BCP log file at <ORA_DATA_DIR>/<DB Name> from .skip to
.ready.
This makes sqlldr_tables_trim.ksh automatically pick up the table again and load it.
However, this is only possible as long as the main script, sqlldr_tables_trim.ksh, is still running. If the main script has already completed by the time the issue is resolved and the target table is truncated, it is enough to run sqlldr_tables_trim.ksh in Load-Only mode (with the -L option), since the control files and SQL Loader scripts were already created in the first round.
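A simplified sketch of the wait-and-load loop is shown below. It ignores the cap on parallel loads (max_proc_ora) and other housekeeping that the full script in Appendix F performs, the sleep interval is illustrative, and the variable names are those set up at the start of that script.

## Keep scanning the pending table list; start a load whenever a .ready file appears.
while [ `cat ${path}/${table_list} | wc -l` -gt 0 ]
do
    for tab in `cat ${path}/${table_list}`
    do
        if [ -e ${ORA_DATA_DIR_NEW}/${tab}.ready ]
        then
            sqlldr ${ORA_USR}/${ORA_PWD}@${ORA_SERVER} control=${CTL_FILE_DIR_NEW}/${tab}.ctl log=${LOG_FILE_DIR_NEW}/${tab}.log bad=${BAD_FILE_DIR_NEW}/${tab}.bad &
            ## Remove the table from the pending list once its load has been started.
            grep -vw "${tab}" ${path}/${table_list} > ${path}/${table_list}.tmp
            mv ${path}/${table_list}.tmp ${path}/${table_list}
        fi
    done
    sleep 60
done
wait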
Benefits
The following benefits can be realized by following the process mentioned above:
Improves accuracy: As is evident, the process requires a lot of routine checks and file manipulations to be performed for each table's load. If taken up manually, there would be many issues, such as skipping a few of the checks or making mistakes in the file manipulations. By performing all the tasks in an automated script, accuracy is improved.
Reduces complexity: In cases where performing the checks or file manipulations involves a complex process, the result could be error prone if handled manually. By using the automated script, such complexities are handled automatically and accurately.
Reduces time and effort: In case of a manual effort, the various checks, their resolution, tracking the completion of the numerous BCP or SQL Loader processes, and a final check on the complete data load would take a lot of time and effort. The automation of the process makes it possible for the team to spend their time and effort on more innovative and challenging problems.
Scalable: As the whole process is implemented in UNIX-based scripts, it can be reused with minimal changes for data load processes across various Sybase and Oracle versions.
Conclusion
Though there are some client-specific parts in the BCP export and SQL Loader load scripts, they are easy to understand for a person with introductory knowledge of UNIX, Sybase, and Oracle. They can be changed very easily as per the client's requirements, or can further be suggested to clients as a possible solution.
In conclusion, the BCP export and SQL Loader load of data from Sybase to Oracle are handled by UNIX scripts which start the BCP exports as per the load-bearing capability of Sybase and start the SQL Loader as and when the BCP export for a table is completed. The extensions of the BCP and SQL Loader log files provide a simple way for the user to understand the status of the load of a table. The script also gives the user the chance to rectify SQL Loader issues and reload a table without the need to restart the whole load. In the event that a restart is necessary, the user has the option to start the load directly with the existing control files and SQL Loader scripts rather than recreating them every time.
Appendix A: bcp_tables.ksh
script
echo ${tab}>>${path}/tables_m.lst
else
echo ${tab}>>${path}/tables_nm.lst
fi
done
if [ ! -e ${path}/tables_nm.lst ];then touch ${path}/tables_nm.lst;fi
if [ ! -e ${path}/tables_m.lst ];then touch ${path}/tables_m.lst;fi
chmod 777 ${path}/tables_m.lst
chmod 777 ${path}/tables_nm.lst
mv ${path}/tables_nm.lst ${path}/tables_syb.lst
## Kick off BCP for Non_money Tables
nohup ./bcp_tables_nm.ksh ${db} &
### Create Views for the Money Tables
cnt=1
for tab in `cat ${path}/tables_m.lst`
do
if [ `echo ${db}_${tab}_vw|wc -c` -gt 29 ]
then
tname=${db}_`echo ${tab}|cut -c1-10`_vw
if [ ! -e ${path}/change_tnames.lst ];then touch ${path}/change_tnames.lst;fi
chmod 777 ${path}/change_tnames.lst
if [ `cat ${path}/change_tnames.lst|grep -iw "${tname}"|wc -l` -eq 0 ]
then
echo ${tab},${tname}>>${path}/change_tnames.lst
else
tname=${db}_`echo ${tab}|cut -c1-10`${cnt}_vw
echo ${tab},${tname}>>${path}/change_tnames.lst
cnt=`expr ${cnt} + 1`
fi
else
tname=${db}_${tab}_vw
echo ${tab},${tname}>>${path}/change_tnames.lst
fi
if [ -d ${path}/logs ];then rm -rf ${path}/logs;fi
mkdir ${path}/logs
chmod -R 777 logs
## Get the Columns to build the view
isql -U ${uid} -P "${passwd}" -w 800 -S${server} -I ${interfaces_file}<<EOF>${path}/logs/${tname}.log
use ${db}
go
select columnname from (
select distinct case when a.colid<>d.cid then case when c.name in ('moneyn','money','smallmoney')
then 'convert(numeric(19,4),'||a.name||') as '||a.name||',' else a.name||',' end
else
Appendix B: setup.env
Environment setup file
uid=svc_deloitte
passwd='svc_deloitte'
server=SYBRPTU
interfaces_file=/appl/ora_stagedr/eps/loading/interfaces
data_dir=/appl/epsbackupfs1/dirdat/
ORA_SERVER=ep01pims
ORA_USR=pm_own
ORA_PWD=solongsybase
ORA_DATA_DIR=/appl/epsbackupfs1/dirdat/
CTL_FILE_DIR=/appl/epsbackupfs1/dirctl
LOG_FILE_DIR=/appl/epsbackupfs1/dirlog
BAD_FILE_DIR=/appl/epsbackupfs1/dirbad
export NLS_LANG='AMERICAN_AMERICA.WE8MSWIN1252'
export TNS_ADMIN=/appl/ora_stagedr/eps/loading
max_proc_ora=300
max_proc_syb=25
Appendix C: bcp_tables_nm.ksh
script
#!/bin/ksh
path=`pwd`
schema=$1
. ${path}/setup.env
table_list=tables_syb.lst
data_dir_new=${data_dir}/${schema}
if [ ! -d ${data_dir_new} ];then mkdir ${data_dir_new};chmod -R 777 ${data_dir_new};fi
#if [[ -Z ${schema} ]];then echo "Usage : bcp_tables.ksh schema";exit 1;fi
###Creating the BCP standard file
if [ -e ${path}/bcp_standard.txt ];then rm -f ${path}/bcp_standard.txt;fi
if [ ${server} != DELTA2PMCYC1 ]
then
echo "bcp \"[${schema}].[dbo].[TABLE]\" out \"${data_dir_new}/LTAB.dat\" -b 50000 -c -t \"<EOFD>\"
-r \"<EORD>\" -U ${uid} -P \"${passwd}\" -S${server} -I ${interfaces_file}>$
{data_dir_new}/bcp_TABLE.log" > ${path}/bcp_standard.txt
else
echo "bcp \"[${schema}].[dbo].[TABLE]\" out \"${data_dir_new}/LTAB.dat\" -b 50000 -c -t \"<EOFD>\"
-r \"<EORD>\" -U ${uid} -P \"${passwd}\" >${data_dir_new}/bcp_TABLE.log">$
{path}/bcp_standard.txt
fi
chmod 777 ${path}/bcp_standard.txt
Appendix D: bcp_tables_m.ksh
script
#!/bin/ksh
path=`pwd`/temp
schema=$1
db=$2
. ${path}/setup.env
table_list=tables_syb.lst
data_dir_new=${data_dir}/${schema}_${db}
if [ ! -d ${data_dir_new} ];then mkdir ${data_dir_new};chmod -R 777 ${data_dir_new};fi
#if [[ -Z ${schema} ]];then echo "Usage : bcp_tables.ksh schema";exit 1;fi
###Creating the BCP standard file
if [ -e ${path}/bcp_standard.txt ];then rm -f ${path}/bcp_standard.txt;fi
if [ ${server} != DELTA2PMCYC1 ]
then
echo "bcp \"[${schema}]..[TABLE]\" out \"${data_dir_new}/LTAB.dat\" -b 50000 -c -t \"<EOFD>\"
-r \"<EORD>\" -U ${uid} -P \"${passwd}\" -S${server} -I ${interfaces_file} >$
{data_dir_new}/bcp_TABLE.log" > ${path}/bcp_standard.txt
else
echo "bcp \"[${schema}]..[TABLE]\" out \"${data_dir_new}/LTAB.dat\" -b 50000 -c -t \"<EOFD>\"
-r \"<EORD>\" -U ${uid} -P \"${passwd}\" >${data_dir_new}/bcp_TABLE.log">$
{path}/bcp_standard.txt
fi
chmod 777 ${path}/bcp_standard.txt
Appendix E: keywords.lst
access, add, admin, after, agent, aggregate, all, allocate, alter, analyze, and, any, archive, archivelog, array, as, asc, at, attribute, audit, authid, authorization, avg,
backup, become, before, begin, between, bfile_base, binary, blob_base, block, body, both, bound, bulk, by, byte,
cache, call, calling, cancel, cascade, case, change, char, char_base, character, charset, charsetform, charsetid, check, checkpoint, clob_base, close, cluster, clusters, cobol, colauth, collect, column, columns, comment, commit, compile, compiled, compress, connect, constant, constraint, constraints, constructor, contents, context, continue, controlfile, convert, count, crash, create, current, cursor, customdatum, cycle,
dangling, data, database, datafile, date, date_base, day, dba, dec, decimal, declare, default, define, delete, desc, deterministic, disable, dismount, distinct, double, drop, dump, duration,
each, element, else, elsif, empty, enable, end, escape, events, except, exception, exceptions, exclusive, exec, execute, exists, exit, explain, extent, external, externally,
fetch, file, final, fixed, float, flush, for, forall, force, foreign, fortran, found, freelist, freelists, from, function,
general, go, goto, grant, group, groups,
hash, having, heap, hidden, hour,
identified, if, immediate, in, including, increment, index, indexes, indicator, indices, infinite, initial, initrans, insert, instance, instantiable, int, integer, interface, intersect, interval, into, invalidate, is, isolation,
java, key,
language, large, layer, leading, length, level, library, like, like2, like4, likec, limit, limited, link, lists, local, lock, logfile, long, loop,
manage, manual, map, max, maxdatafiles, maxextents, maxinstances, maxlen, maxlogfiles, maxloghistory, maxlogmembers, maxtrans, maxvalue, member, merge, min, minextents, minus, minute, minvalue, mlslabel, mod, mode, modify, module, month, mount, multiset,
name, nan, national, native, nchar, new, next, nnect, noarchivelog, noaudit, nocache, nocompress, nocopy, nocycle, nomaxvalue, nominvalue, none, noorder, noresetlogs, normal, nosort, not, nowait, null, number, number_base, numeric,
object, ocicoll, ocidate, ocidatetime, ociduration, ociinterval, ociloblocator, ocinumber, ociraw, ociref, ocirefcursor, ocirowid, ocistring, ocitype, of, off, offline, old, on, online, only, opaque, open, operator, optimal, option, or, oracle, oradata, order, organization, orlany, orlvary, others, out, overlaps, overriding, own,
package, parallel, parallel_enable, parameter, parameters, parent, partition, pascal, pctfree, pctincrease, pctused, pipe, pipelined, plan, pli, pragma, precision, primary, prior, private, privileges, procedure, profile, public,
quota,
raise, range, raw, read, real, record, recover, ref, reference, references, referencing, relies_on, rem, remainder, rename, resetlogs, resource, restricted, result, result_cache, return, returning, reuse, reverse, revoke, role, roles, rollback, row, rowid, rownum, rows, rsor,
sample, save, savepoint, sb1, sb2, sb4, schema, scn, second, section, segment, select, self, separate, sequence, serializable, session, set, share, shared, short, size, size_t, smallint, snapshot, some, sort, sparse, sql, sqlcode, sqldata, sqlerror, sqlname, sqlstate, standard, start, statement_id, static, statistics, stddev, stop, storage, stored, string, struct, style, submultiset, subpartition, substitutable, subtype, successful, sum, switch, synonym, sysdate, system,
tabauth, table, tables, tablespace, tdo, temporary, the, then, thread, time, timestamp, timezone_abbr, timezone_hour, timezone_minute, timezone_region, to, tracing, trailing, transaction, transactional, trigger, triggers, truncate, trusted, type,
ub1, ub2, ub4, uid, under, union, unique, unlimited, unsigned, until, untrusted, update, use, user, using,
validate, valist, value, values, varchar, varchar2, variable, variance, varray, varying, view, views, void,
when, whenever, where, while, with, work, wrapped, write,
year, zone
Appendix F:
sqlldr_tables_trim.ksh script
#!/bin/ksh
###############################################################################
######################
### sqlldr_tables.ksh -s schema - To create the control files and run sqlldr
###
### sqlldr_tables.ksh -s schema -L - To avoid creation of Control Files again
###
### sqlldr_tables.ksh -s schema -C - To just create the control files
###
###############################################################################
######################
path=`pwd`
. ${path}/setup.env
table_list=tables_ora.lst
### Making sure that the tables are in lower case
cat ${path}/${table_list}|tr '[A-Z]' '[a-z]'>${path}/${table_list}_temp
mv ${path}/${table_list}_temp ${path}/${table_list}
###CHECK FOR CORRECTNESS OF ARGUMENTS
while getopts s:LC name
do
case $name in
s) schema="$OPTARG" ;;
L) load="Y" ;;
C) create="Y";;
*) echo "Usage sqlldr_tables.ksh -s schema OR sqlldr_tables.ksh -s schema -C";exit 1 ;;
esac
done
if [[ -z ${create} ]];then create="N";fi
if [[ -z ${load} ]];then load="N";fi
#echo ${create}
#echo ${load}
sc=`echo ${schema}|gawk '{gsub("_own","",$0);print}'`
ORA_DATA_DIR_NEW=${ORA_DATA_DIR}/${sc}
CTL_FILE_DIR_NEW=${CTL_FILE_DIR}/${sc}
LOG_FILE_DIR_NEW=${LOG_FILE_DIR}/${sc}
BAD_FILE_DIR_NEW=${BAD_FILE_DIR}/${sc}
if [ ! -d ${ORA_DATA_DIR_NEW} ];then echo "${ORA_DATA_DIR_NEW} missing" ; exit 1;fi
echo "sleeping"
else
x=0
fi
done
echo "Looping external"
done
done
fi