DBA Fundamentals

Chapter 3 Installing and Maintaining Server

Setting Up Password File Authentication

1. Using the ORAPWD utility, create a password file with the SYS password:
   orapwd file=<fname> password=<password> entries=<users>
2. Set the REMOTE_LOGIN_PASSWORDFILE parameter to either EXCLUSIVE (the password file is used for only one database; you can add users other than SYS or INTERNAL to the password file) or SHARED (shared among multiple databases, but you cannot add users other than SYS or INTERNAL to the password file).
3. Grant the appropriate SYSDBA and SYSOPER privileges.

Normally, the password file is created in the $ORACLE_HOME/dbs directory.


The view V$PWFILE_USERS lists the users who have been granted the SYSDBA or SYSOPER privileges.
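A minimal sketch of the whole sequence; the file name, password, and the user JOHN are hypothetical examples:

$ orapwd file=$ORACLE_HOME/dbs/orapwPROD01 password=secret entries=5

# in the init.ora / SPFILE:
REMOTE_LOGIN_PASSWORDFILE = EXCLUSIVE

SQL> GRANT SYSDBA TO john;
SQL> GRANT SYSOPER TO john;
SQL> SELECT * FROM V$PWFILE_USERS;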

Starting Up the Instance

You can start an instance with a text-based PFILE or a binary SPFILE (new for 9i). When the instance is started in the NOMOUNT stage, you can query only the views that read their data from the SGA (V$SGA, V$OPTION, V$PROCESS, V$SESSION, V$VERSION, V$INSTANCE). When the database is mounted, information can also be read from the control file (V$CONTROLFILE, V$THREAD, V$DATABASE, V$DATAFILE, V$DATAFILE_HEADER, V$LOGFILE).

PFILE
Example:
SQL> STARTUP PFILE=/oracle/admin/ORADB01/pfile/initoradb01.ora RESTRICT;

SPFILE
To create an SPFILE, a PFILE must exist:
SQL> CREATE SPFILE FROM PFILE;

Get Parameters Values


SQL> SHOW PARAMETERS OS
This shows all parameters with OS embedded somewhere in the name.
You can also get the parameters values by querying the V$PARAMETER view. V$PARAMETER shows
values for the current session.

SQL> col name format a30


SQL> col value format a25
SQL> SELECT name, value
FROM v$parameter
WHERE name LIKE ‘os%’;

Set Parameter Values


Parameters that have been modified from their default values show FALSE in the ISDEFAULT column:

SQL> SELECT name, value


FROM v$parameter
WHERE ISDEFAULT=’FALSE’;

You can alter parameters either in DEFERRED or IMMEDIATE mode:


SQL> ALTER SYSTEM SET timed_statistics = TRUE DEFERRED;

SQL> ALTER SYSTEM SET MAX_DUMP_FILE_SIZE=20000 SCOPE=SPFILE;


There are three options for SCOPE: MEMORY (for the life of the current instance only), SPFILE (recorded in the SPFILE and applied at the next restart), and BOTH, the default (for the current instance and across shutdown and restart).
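For example (timed_statistics is used here only as an illustrative dynamic parameter; SCOPE=BOTH requires the instance to have been started with an SPFILE):

SQL> ALTER SYSTEM SET timed_statistics = TRUE SCOPE=MEMORY;
SQL> ALTER SYSTEM SET timed_statistics = TRUE SCOPE=BOTH;
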
Managing Sessions

To query sessions:
SQL> SELECT username, program
FROM v$session;

To kill sessions created by John:


SQL> SELECT username, sid, serial#, status
FROM v$session
WHERE username='JOHN';

SQL> ALTER SYSTEM KILL SESSION ‘9, 3’;

To allow the user to complete the current transaction:


SQL> ALTER SYSTEM DISCONNECT SESSION ‘9, 3’ POST_TRANSACTION;

To immediately kill or disconnect session:


SQL> ALTER SYSTEM DISCONNECT SESSION ‘9, 3’ IMMEDIATE;
SQL> ALTER SYSTEM KILL SESSION ‘9, 3’ IMMEDIATE;

Instance Messages and Alerts

BACKGROUND_DUMP_DEST - the location to write the debugging trace files generated by the
background processes and alert log files.

USER_DUMP_DEST - the location to write trace files generated by user sessions (tuning, deadlock, internal errors, etc.)

CORE_DUMP_DEST - used on UNIX to generate core dump files when a session terminates abnormally. Not available on Windows.
All databases have an alert log file stored in the BACKGROUND_DUMP_DEST location. It records block corruption errors, internal errors, non-default initialization parameters, startup, shutdown, archiving, recovery, tablespace modifications, rollback segment modifications, etc.
On UNIX it is called alert_<SID>.log.

OMF (Oracle Managed Files)

Before 9i, dropping a tablespace did not drop the actual OS files. With OMF you can specify 2 initialization parameters: DB_CREATE_FILE_DEST (the default location for new data files; the actual OS files are created with the prefix ora_ and a suffix of .dbf) and DB_CREATE_ONLINE_LOG_DEST_n (specifies up to 5 locations for online redo log files and control files; they have a suffix of .log and .ctl). Do a periodic audit against V$CONTROLFILE, V$LOGFILE, V$DATAFILE and the OS files with the OMF naming convention to delete files that are no longer used, as sketched below.
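A simple audit sketch: list every file the database currently knows about and compare the result with the OMF-named files present on disk:

SQL> SELECT name FROM V$CONTROLFILE
  2  UNION ALL
  3  SELECT member FROM V$LOGFILE
  4  UNION ALL
  5  SELECT name FROM V$DATAFILE;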

Chapter 4
Creating a Database

Prerequisites

1. Preparing OS Resources

On UNIX -
SHMMAX (maximum size of shared memory segment)
SHMNI (maximum number of shared memory identifiers in the system)
SHMSEG (maximum number of shared memory segments to which a user process can attach)
SEMMNI (maximum number of semaphore identifiers in the system)
SHMMAX * SHMSEG (the total maximum shared memory that can be allocated)

If you are creating a database on a server that is already running other databases, back them up first.

Parameters

CONTROL_FILES - specifies the control file location(s) with full pathnames. Specify at least 2 control files on different disks. You can specify up to 8 control file names.

DB_BLOCK_SIZE - database block size in multiples of OS blocks (can not be changed after db is
created). The default is 4K on most platforms, but can be 2K-32K, depending on OS.

DB_NAME - database name which can only be changed by re-creating the control file. Maximum of 8
characters, you can only use alphanumeric characters, _ , # and $. No other characters are valid. The first
character is alphabetic.
The required parameters are DB_CACHE_SIZE, SHARED_POOL_SIZE and LOG_BUFFER which are
added to calculate the SGA, which must fit into real, not virtual memory.

Parameter name - Description

OPEN_CURSORS - The maximum number of open cursors a session can have; the default is 50.

MAX_ENABLED_ROLES - The maximum number of database roles that users can enable; the default is 20.

DB_CACHE_SIZE - The size of the default buffer cache, with blocks sized by DB_BLOCK_SIZE. Can be dynamically altered.

SGA_MAX_SIZE - The maximum size allowed for all components of the SGA. Set an upper limit to prevent dynamically altered sizes of other parameters from pushing the total SGA size over this limit.

SHARED_POOL_SIZE - The size of the shared pool in K or M; the default is 16M.

LARGE_POOL_SIZE - The large pool size; the default is 0.

JAVA_POOL_SIZE - The size of the Java pool; the default is 20000K. If you are not using Java, specify 0.

PROCESSES - The maximum number of processes that can connect to the instance. This includes the background processes.

LOG_BUFFER - The size of the redo log buffer in bytes.

BACKGROUND_DUMP_DEST - The location of the background dump directory, including the alert log file.

CORE_DUMP_DEST - The location of the core dump destination (UNIX specific).

USER_DUMP_DEST - The location of the user dump directory.

REMOTE_LOGIN_PASSWORDFILE - The authentication method. When creating a database, make sure you either comment out this parameter or set it to NONE. If you create the password file before creating the database, you can specify EXCLUSIVE (the password file is used for only one database; you can add users other than SYS or INTERNAL to the password file) or SHARED (shared among multiple databases, but you cannot add users other than SYS or INTERNAL to the password file).

COMPATIBLE - The release with which the database must maintain compatibility (9.0 to current).

SORT_AREA_SIZE - The size of the area allocated for temporary sorts.

LICENSE_MAX_SESSIONS - The maximum number of concurrent user sessions. When this limit is reached, only users with the RESTRICTED SESSION privilege are allowed to connect. The default is 0, unlimited.

LICENSE_SESSIONS_WARNING - A warning limit on the number of concurrent user sessions. Messages are written to the alert log when new users connect after this limit is reached. New users are allowed to connect up to LICENSE_MAX_SESSIONS. The default is 0, unlimited.

LICENSE_MAX_USERS - The maximum number of users that can be created in the database; the default is 0, unlimited.

Environment Variables (OFA Compliant)

ORACLE_BASE - the directory on the top of the tree, for example /u01/apps/oracle.

ORACLE_HOME - the location of the Oracle software, relative to Oracle base. The
OFA compliant location is in the $ORACLE_BASE/product/<release>.

ORACLE_SID - the unique instance name for the database, regardless of number of databases on the
server.

ORA_NLS33 - the character set different from default.

PATH - the standard Unix pathname that should already exist in the Unix environment. You must add the directory for the Oracle binary executables to this path variable: $ORACLE_HOME/bin.

LD_LIBRARY_PATH - the directories containing shared program libraries, both Oracle and non-Oracle.

The Create Database Command

You must STARTUP NOMOUNT PFILE= before issuing this command.


Example:

CREATE DATABASE PROD01


CONTROLFILE REUSE
LOGFILE GROUP 1
(‘/oradata02/PROD01/redo0101.log’,
‘/oradata03/PROD01/redo0102.log’) SIZE 5M REUSE,
GROUP 2
(‘/oradata02/PROD01/redo0201.log’,
‘/oradata03/PROD01/redo0202.log’) SIZE 5M REUSE
MAXLOGFILES 4
MAXLOGMEMBERS 2
MAXLOGHISTORY 0
MAXDATAFILES 254
MAXINSTANCES 1
NOARCHIVELOG
CHARACTER SET “WE8MSWIN1252”
NATIONAL CHARACTER SET “AL16UTF16”
DATAFILE ‘/oradata01/PROD01/PROD01/system01.dbf’ SIZE 80M
AUTOEXTEND ON NEXT 5M MAXSIZE UNLIMITED
UNDO TABLESPACE UNDOTBS
DATAFILE ‘/oradata04/PROD01/PROD01/undo01.dbf’ SIZE 35M
DEFAULT TEMPORARY TABLESPACE TEMP
TEMPFILE ‘/oradata05/PROD01/temp01.dbf’ SIZE 20M;

The database must have at least 2 redo log groups. It is recommended that they are the same size.
MAXLOGFILES specifies maximum number of redo log groups that can ever be created in the database.
MAXLOGMEMBERS specifies maximum number of redo log members (copies of redo log files) for each
redo log group. The MAXLOGHISTORY is used with RAC (max number of archived redo log files for
automatic media recovery). MAXDATAFILES specifies maximum number of data files created in the
database. MAXINSTANCES specifies the maximum number of instances that can simultaneously mount
and open the database. If you want to change these parameters you must re-create the control file. The
DB_FILES init parameter specifies the maximum number of data files accessible to the instance. The
MAXDATAFILES clause in the CREATE DATABASE specifies the maximum number of data files allowed
for the database. The DB_FILES parameter can not specify a value larger than MAXDATAFILES.

Using OMF to Create Database

In contrast to creating a database with a full CREATE DATABASE statement, using Oracle Managed Files is easier. The init parameters DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n are defined with the desired OS locations for the data files and online redo log files:

CREATE DATABASE DEFAULT TEMPORARY TABLESPACE TMP;

Creating SPFILE

After configuring the init.ora correctly, create the SPFILE while connected as SYSDBA:

SQL> CREATE SPFILE FROM PFILE;

By default the SPFILE and PFILE reside in the same location. At startup, the server looks for a file named spfileSID.ora first; if it cannot find one, it uses the pfile initSID.ora.

The Data Dictionary

When the data dictionary is created, Oracle creates only 2 users SYS (owner of the data dictionary) and
SYSTEM (DBA account).

The data dictionary tables reside in the SYS schema in the SYSTEM tablespace when you run the CREATE DATABASE command. Oracle automatically creates the tablespace and tables using the sql.bsq script found in $ORACLE_HOME/rdbms/admin.

This script creates the SYSTEM tablespace, a rollback segment called SYSTEM in the SYSTEM tablespace,
the SYS and SYSTEM user accounts, the dictionary base tables and clusters, indexes on the dictionary
tables and sequences, the roles PUBLIC, CONNECT, RESOURCE, DBA, DELETE_CATALOG_ROLE,
EXECUTE_CATALOG_ROLE, SELECT_CATALOG_ROLE and the DUAL table.

Running the catalog.sql script creates the data dictionary views. This script also creates synonyms on the views to allow users easy access to them. The catproc.sql script creates the dictionary items necessary for PL/SQL functionality. Both are run as SYSDBA, as sketched below.
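Both scripts are run from SQL*Plus while connected as SYSDBA (the ? is SQL*Plus shorthand for $ORACLE_HOME):

SQL> CONNECT / AS SYSDBA
SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql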

Administering Stored Procedures and Packages

The PL/SQL stored programs are stored in the data dictionary. The code used to create the procedure, package or function is available in the dictionary views DBA_SOURCE, ALL_SOURCE and USER_SOURCE (except when you create them with the WRAP utility, which creates encrypted code that only Oracle can interpret).
You manage the privileges using regular GRANT or REVOKE statements. You GRANT or REVOKE EXECUTE privileges as needed.

The DBA_OBJECTS, ALL_OBJECTS and USER_OBJECTS views give information about the status of the stored program. If a procedure is invalid, you can recompile it by using ALTER PROCEDURE <PROCEDURE_NAME> COMPILE;
To recompile a package, compile the package specification and then the package body:
ALTER PACKAGE <PACKAGE_NAME> COMPILE;
ALTER PACKAGE <PACKAGE_NAME> COMPILE BODY;
To do this for another schema you must have the ALTER ANY PROCEDURE privilege.
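A sketch for finding and recompiling invalid objects; the SCOTT schema and UPDATE_SALARY procedure are hypothetical examples:

SQL> SELECT object_name, object_type, status
  2  FROM dba_objects
  3  WHERE owner = 'SCOTT' AND status = 'INVALID';

SQL> ALTER PROCEDURE scott.update_salary COMPILE;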

Completing the Database Creation

After creating the database and dictionary views, you must create additional tablespaces. Oracle recommends creating the following tablespaces if they were not created with CREATE DATABASE or DBCA.

UNDOTBS - holds the undo segments for automatic undo management. When you create a database, Oracle creates a SYSTEM undo segment in the SYSTEM tablespace. For a database that has multiple tablespaces, you must create at least one undo segment outside the SYSTEM tablespace for manual undo management, or one undo tablespace for automatic undo management.

TEMP - holds the temporary segments for sorting and intermediate operations. Oracle uses these
segments when the information to be sorted will not fit into the SORT_AREA_SIZE.

USERS (user tables)

INDX (user indexes)

TOOLS (oracle administrative tools tables and indexes)

After the database is created back it up and change passwords for SYS and SYSTEM.

Querying the Data Dictionary

Data dictionary views categories:

DBA_ - views with information about all structures in the database, across all schemas. Accessible to DBAs or users granted the SELECT_CATALOG_ROLE role.

ALL_ - views on info on all objects the user has access to.

USER_ - structures owned by the user in the user's schema. They are accessible to all users and do not have an OWNER column.

V$ - dynamic performance views. Continuously updated while the database is open and in use.
GV$ - for almost all V$ views, Oracle has a corresponding GV$ view. These are the global performance
views pertinent to RAC. The corresponding GV$ view has an additional column identifying the instance
number called INST_ID.

You can use the data dictionary information to generate the source code for all the objects created in the
database.

The dictionary view DICTIONARY (DICT) contains names and descriptions of all the data dictionary views
in the database. The DICT_COLUMNS describes columns in DICT.
If you want to query the dictionary views about tables:

SQL> COL TABLE_NAME FORMAT A25


SQL> COL COMMENTS FORMAT A40
SQL> SELECT * FROM DICT
WHERE TABLE_NAME LIKE ‘%TAB%’;

The dictionary views ALL_OBJECTS, DBA_OBJECTS, and USER_OBJECTS provide information about the objects in the database. These views contain the timestamp of object creation and the last DDL timestamp. The STATUS column shows whether or not the object is valid (for PL/SQL).

Creating a Database with DBCA

1. Creates a parameter file, starts up the database in NOMOUNT mode, and then creates the
database using the CREATE DATABASE command.
2. Runs catalog.sql.
3. Creates tablespaces for tools TOOLS, undo UNDO, temp TEMP, and index INDX.
4. Runs the following scripts: catproc.sql (sets up PL/SQL), caths.sql (installs the heterogeneous
services (HS) data dictionary, providing the ability to access non-Oracle databases from the
Oracle database), otrcsvr.sql (Oracle trace server SP), utlsampl.sql (sets up sample user SCOTT
and creates demo tables), pubbld.sql (creates product and user profile tables, script runs as
SYSTEM).
5. Runs the scripts necessary to install other options chosen.
Chapter 5

Control and Redo Log Files

The control file is updated continuously and should be available at all times. Only Oracle processes should update control files. The control file is used on startup to identify the data files and redo log files and open them. Control files play a major role in database recovery. They contain:
1. Database name (a control file can belong to only one database)
2. Database creation timestamp
3. Data files - location, name and online/offline status information
4. Redo log files - name and location
5. Redo log archive information
6. Tablespace information
7. Current log sequence number that is assigned when a log switch occurs
8. Most recent checkpoint information
9. Begin and end of undo segments
10. RMAN's backup information

The control file size is determined by the MAX clauses of the create database statement -
MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, MAXDATAFILES, MAXINSTANCES. The
control file has 2 record sections: reusable (backup data files, etc) and not reusable.

Multiplexing Control Files


Oracle recommends at least 2 control files on 2 separate disks either by multiplexing in Oracle or OS
mirroring. There are 2 ways to multiplex files: init.ora and using SPFILE.

Multiplexing Control Files Using init.ora - copying the control files to multiple locations and changing the
CONTROL_FILES parameter in the init.ora.

CONTROL_FILES = (‘/ora01/oradata/MYDB/ctrlMYDB01.ctl’,
‘/ora02/oradata/MYDB/ctrlMYDB02.ctl’,
‘/ora03/oradata/MYDB/ctrlMYDB03.ctl’)

You can have a maximum of 8 multiplexed copies of the control file.

If you need to add more control files:

1. Shutdown database.
2. Copy the control file to the additional locations by using an OS command
3. Change the CONTROL_FILES parameter in init.ora to add the new locations (as above)
4. Start up the database

Multiplexing Control Files Using an SPFILE

1. Alter the SPFILE while the database is still open:

SQL> ALTER SYSTEM SET CONTROL_FILES =


‘/ora01/oradata/MYDB/ctrlMYDB01.ctl’,
‘/ora02/oradata/MYDB/ctrlMYDB02.ctl’,
‘/ora03/oradata/MYDB/ctrlMYDB03.ctl’ SCOPE=SPFILE;

Because SCOPE=SPFILE is used, this parameter change will take effect only after the next instance restart.

2. Shutdown the database


3. Copy the existing control file to the new locations:

$ cp /ora01/oradata/MYDB/ctrlMYDB01.ctl /ora02/oradata/MYDB/ctrlMYDB02.ctl
$ cp /ora01/oradata/MYDB/ctrlMYDB01.ctl /ora03/oradata/MYDB/ctrlMYDB03.ctl

4. Startup instance

Using OMF to Manage Control Files

To use OMF-created control files, do not specify the CONTROL_FILES parameter in init.ora, but instead make sure that DB_CREATE_ONLINE_LOG_DEST_n is specified n times, starting with 1. Here, n is the number of control files you want created. The actual names of the control files are system generated and can be found in the alert log, located in the BACKGROUND_DUMP_DEST directory.

Creating New Control Files

You must create new control files if you change the database name, if you lose all the old control files, or if you want to change any of the MAX clauses specified in the CREATE DATABASE.

1. Prepare the CREATE CONTROLFILE statement:

SQL> CREATE CONTROLFILE SET DATABASE “ORACLE”


NORESETLOGS NOARCHIVELOG
MAXLOGFILES 32
MAXLOGMEMBERS 2
MAXDATAFILES 32
MAXINSTANCES 1
MAXLOGHISTORY 1630
LOGFILE
GROUP 1 ‘C:\ORACLE\DATABASE\LOG2ORCL.ora’ SIZE 500K,
GROUP 2 ‘C:\ORACLE\DATABASE\LOG1ORCL.ora’ SIZE 500K
DATAFILE
‘C:\ORACLE\DATABASE\SYS10ORCL.ora’,
‘C:\ORACLE\DATABASE\USR10ORCL.ora’,
‘C:\ORACLE\DATABASE\RBS10ORCL.ora’,
‘C:\ORACLE\DATABASE\TMP10ORCL.ora’,
‘C:\ORACLE\DATABASE\APPDATA1.ora’,
‘C:\ORACLE\DATABASE\APPINDX1.ora’;

2. Shutdown the database.


3. STARTUP NOMOUNT
4. Create a new control file with a command similar to the one above. The control file will be created using the names and locations in the initialization parameter CONTROL_FILES.
5. Open the database ALTER DATABASE OPEN
6. Shutdown and backup the database

You can also generate the create control file command from the current database by using the command
ALTER DATABASE BACKUP CONTROLFILE TO TRACE. The control file creation script is written in the
USER_DUMP_DEST.

After creating the control file, determine whether any of the data files listed in the dictionary are missing. If you query the V$DATAFILE view, the missing files will have the name MISSINGnnnn. If you created the control file by using the RESETLOGS option, the missing data files cannot be added back to the database. If you created the control file with the NORESETLOGS option, the missing data files can be included in the database by using media recovery.

You can backup the control file while the database is in use:
ALTER DATABASE BACKUP CONTROLFILE TO <FILENAME> REUSE;
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

Querying the Control File Information

SQL> SELECT * FROM V$CONTROLFILE;

STATUS NAME
________ ________
/ora01/oradata/MYDB/ctrlmydb01.ctl

SQL> SELECT TYPE, RECORD_SIZE, RECORDS_TOTAL, RECORDS_USED


FROM V$CONTROLFILE_RECORD_SECTION;

Maintaining and Monitoring Redo Log Files

The redo log files record all changes to the database. The redo log buffer in the SGA is periodically
written to the redo log file by the LGWR process. Every database has at least 2 redo log files. The LGWR
writes them in a circular fashion.
Every redo entry is made up of a group of change vectors (each is a description of a change made to a
single block in the database). During recovery, Oracle reads the changes from redo logs and applies them
to the relevant blocks.
LGWR writes redo information from redo log buffer to the online redo log files when:
1. A user commits a transaction even if this is the only transaction in the log buffer.
2. The redo log buffer is one-third full
3. When approximately 1MB of changed records has accumulated in the redo log buffer

LGWR always writes its records to the online redo log file before DBWn writes new or modified database
buffer cache records to the datafiles.

Each database has its own online redo log groups. A log group can have one or more redo log members (each member is a single OS file). In a RAC environment, each instance has one online redo thread; the LGWR process of each instance writes to its own thread of online redo log files, and Oracle keeps track of which instance the changes are coming from. For a single instance there is only one thread. Whenever a transaction is committed, an SCN is assigned to the redo records to identify the committed transaction.
CREATE DATABASE “MYDB01”
LOGFILE GROUP 1 '/ora02/oradata/MYDB01/redo01.log' SIZE 10M,
GROUP 2 '/ora02/oradata/MYDB01/redo02.log' SIZE 10M;

The LGWR writes to only one redo log file at a time (current log file). The active log file is the one required
for instance recovery. The log files are written in a circular fashion. A log switch happens when Oracle finishes writing to one file and starts writing to another. A log switch always occurs when the current log is full.
You can force log switch:

ALTER SYSTEM SWITCH LOGFILE;

Redo logs are written sequentially on the disk, so the I/O will be fast if there is no other activity on the disk.
Keep the redo log files on a separate disk for better performance.

Checkpoints

Checkpoints are closely tied to the redo log switches. A checkpoint is an event that flushes the modified data from the buffer cache to the disk and updates the control file and data files. A checkpoint is
initiated when:
1. The redo log file is full and a log switch occurs
2. When the instance is shutdown with other than abort.
3. When a tablespace status is changed to read only or put in a backup mode.
4. When a tablespace or datafile is taken offline
5. When other values are specified in init parameters.

You can force a checkpoint if needed: ALTER SYSTEM CHECKPOINT;

Or ALTER SYSTEM SWITCH LOGFILE;


The size of the redo log files affects checkpoint performance. If the redo log files are small, log switches occur more often and so do checkpoints. DBWn writes the dirty buffers to the data files when that happens. This might reduce recovery time, but it affects performance. You can adjust checkpoints by setting FAST_START_MTTR_TARGET (which replaced the older FAST_START_IO_TARGET and LOG_CHECKPOINT_TIMEOUT parameters of previous versions). It specifies the number of seconds that instance recovery should not exceed. For example, FAST_START_MTTR_TARGET = 600 ensures that instance recovery will not take more than 10 minutes.
Setting the LOG_CHECKPOINTS_TO_ALERT parameter to TRUE logs each checkpoint to the alert log file.
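For example, assuming an SPFILE is in use (the 600-second target is only illustrative):

SQL> ALTER SYSTEM SET FAST_START_MTTR_TARGET = 600 SCOPE=BOTH;
SQL> ALTER SYSTEM SET LOG_CHECKPOINTS_TO_ALERT = TRUE;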

Multiplexing Log Files

When multiplexing online redo log files, LGWR concurrently writes the same info to multiple online redo
log files. All the copies of the redo log file, which are the same size, are known as a group, identified by an integer.
Each redo log file is identified as a member of a group. You must have at least 2 redo log groups for
normal database operation.
When multiplexing redo log files it is preferable to keep the members on different disks. If LGWR can not
write to at least 1 member of the group, database works as usual; an entry is written to the alert.log. If all
members of the group are not available, Oracle shuts down the instance.
The maximum number of log file groups is specified by MAXLOGFILES, and the maximum number of members per group is specified by MAXLOGMEMBERS.

Creating New Log File Groups

ALTER DATABASE ADD LOGFILE


GROUP 3 (‘/ora02/oradata/MYDB01/redo0301.log’,
'/ora03/oradata/MYDB01/redo0302.log') SIZE 10M;

If you omit the GROUP clause Oracle assigns the next available number.

Adding New Members

If you forgot to multiplex the redo log files when creating the database, you can add new members to existing groups. All members of a group have the same size. The following adds a new member to group 2.

ALTER DATABASE ADD LOGFILE MEMBER
'/ora03/oradata/MYDB01/redo0202.log' TO GROUP 2;

Renaming Log Members

Before renaming the log files, the new files should already exist. Oracle just points toward the new file, it
does not rename the OS file.
1. Shutdown and backup the entire database
2. Copy/rename the online redo log file member to the new location by using an OS command
3. Startup the instance and mount the database
4. rename the log file member in the control file: ALTER DATABASE RENAME FILE
‘<OLD_REDO_FILE_NAME>’ TO ‘<NEW_REDO_FILE_NAME>’;
5. ALTER DATABASE OPEN
6. Backup the control file

Dropping Redo Log Groups

The group to be dropped should not be active (ALTER SYSTEM SWITCH LOGFILE if needed).

ALTER DATABASE DROP LOGFILE GROUP 3;

The actual OS files are not dropped.


Dropping Redo Log Members

Just like redo log groups, you can drop only inactive members of inactive redo log groups. Also, you cannot drop the last remaining member of a group.

SQL> ALTER DATABASE DROP LOGFILE MEMBER '/opt/ora9/oradata/orcl/redo0155.log';

Database altered.

The OS file is not removed from the disk.

Clearing Online Redo Log Files


Under certain circumstances a redo log file member may become corrupted. Clearing the log file reinitializes it, which is faster and easier than dropping and re-creating the group.

SQL> ALTER DATABASE CLEAR LOGFILE GROUP 3;

Database altered.

Managing Online Redo Log Files with OMF

If you are multiplexing to 3 locations, be sure to set DB_CREATE_ONLINE_LOG_DEST_1 through _3.

To add a new log file group use:

SQL> ALTER DATABASE ADD LOGFILE;

Archiving Log Files

Archiving copies filled online redo log files to another location and is done by the ARCn processes. You can recover the database from archived logs, update a standby database, or analyze them with LogMiner. LGWR waits for ARCn to finish copying a redo log file before overwriting it.

Setting the Archive Destination

LOG_ARCHIVE_DEST specifies the destination to write archive log files. You can change the destination:
ALTER SYSTEM SET LOG_ARCHIVE_DEST = '<new_location>';

LOG_ARCHIVE_DUPLEX_DEST - a second destination to write the archive log files. You specify the minimum number of archive destinations that must succeed in LOG_ARCHIVE_MIN_SUCCEED_DEST. You can change the location: ALTER SYSTEM SET LOG_ARCHIVE_DUPLEX_DEST = '<new_location>';

LOG_ARCHIVE_DEST_n - you can specify as many as 5 destinations. These archive locations can be either on the same machine or on a remote machine where a standby database is located. When these parameters are used you cannot use the LOG_ARCHIVE_DEST or LOG_ARCHIVE_DUPLEX_DEST parameters.

The syntax:

LOG_ARCHIVE_DEST_n = "null_string" |

((SERVICE = <TNSNAMES_NAME> |

LOCATION = '<DIRECTORY_NAME>')

[MANDATORY | OPTIONAL]

[REOPEN [=integer]])

Example:

LOG_ARCHIVE_DEST_2 = (SERVICE=STDBY01) OPTIONAL REOPEN;

LOG_ARCHIVE_MIN_SUCCEED_DEST - the minimum number of destinations the ARCn process must successfully write to before the online redo log files can be overwritten. The default is 1.

LOG_ARCHIVE_FORMAT specifies the format for the archive log file names.

The values:

%s - log sequence number


%S - log sequence number zero filled

%t - thread number

%T - thread number zero filled

For example, LOG_ARCHIVE_FORMAT = 'arch_%t_%s' generates archive log file names such as arch_1_101, arch_1_102, and so on; 1 is the thread number, and 101 and 102 are the log sequence numbers. Specifying arch_%S generates zero-filled names such as arch_000000101 and arch_000000102.

LOG_ARCHIVE_MAX_PROCESSES - number of ARCn processes Oracle should start (1 is default).

LOG_ARCHIVE_START - specifies whether automatic archiving is enabled. If it is FALSE, no ARCn process is started automatically. You can override this with the ARCHIVE LOG START and ARCHIVE LOG STOP commands.

Setting Archive Log

1. Shutdown database

2. Startup mount

3. ALTER DATABASE ARCHIVELOG

4. ALTER DATABASE OPEN

To disable:

1. Shutdown database.

2. Startup mount the database

3. Disable archivelog mode ALTER DATABASE NOARCHIVELOG

4. Open database ALTER DATABASE OPEN

You can enable autoarchiving by setting LOG_ARCHIVE_START = TRUE.
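A sketch of the whole sequence as a SQL*Plus session; the LOG_ARCHIVE_START change assumes an SPFILE is in use (otherwise set it in the init.ora):

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_START = TRUE SCOPE=SPFILE;
SQL> ALTER SYSTEM ARCHIVE LOG START;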

Querying Archive Log Information


SQL> ARCHIVE LOG LIST;


Database log mode No Archive Mode
Automatic archival Disabled
Archive destination /opt/ora9/product/9.2/dbs/arch
Oldest online log sequence 2
Current log sequence 2
SQL> SELECT * FROM V$LOG;

GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS


---------- ---------- ---------- ---------- ---------- --- ----------------
FIRST_CHANGE# FIRST_TIM
------------- ---------
1 1 2 104857600 1 NO CURRENT
159352 25-SEP-04

2 1 0 104857600 1 YES UNUSED


0

3 1 0 104857600 1 NO UNUSED
154145 25-SEP-04

GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS


---------- ---------- ---------- ----------- ---------- --- ----------------
FIRST_CHANGE# FIRST_TIM
------------- ---------
5 1 0 10485760 2 YES UNUSED
0

SQL> SELECT * FROM V$LOGFILE;

GROUP# STATUS TYPE


---------- ------- -------
MEMBER
--------------------------------------------------------------------------------
3 ONLINE
/opt/ora9/oradata/orcl/redo03.log

2 ONLINE
/opt/ora9/oradata/orcl/redo02.log

1 ONLINE
/opt/ora9/oradata/orcl/redo01.log

GROUP# STATUS TYPE


---------- ------- -------
MEMBER
--------------------------------------------------------------------------------
5 ONLINE
/opt/ora9/oradata/orcl/redo001.log

5 ONLINE
/opt/ora9/oradata/orcl/redo002.log

SQL> SELECT THREAD#, GROUPS, CURRENT_GROUP#, SEQUENCE#


2 FROM V$THREAD;

THREAD# GROUPS CURRENT_GROUP# SEQUENCE#


---------- ---------- -------------- -----------
1 4 1 2

SQL> SELECT SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#,


2 TO_CHAR(FIRST_TIME, 'DD-MM-YY HH24:MI:SS') TIME
3 FROM V$LOG_HISTORY;

SEQUENCE# FIRST_CHANGE# NEXT_CHANGE# TIME


---------- ------------- ------------ -----------------
1 154145 159352 25-09-04 09:57:19

SQL> SELECT NAME, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#,


2 BLOCKS, BLOCK_SIZE
3 FROM V$ARCHIVED_LOG;

no rows selected

SQL> SELECT DESTINATION, BINDING, TARGET, REOPEN_SECS


2 FROM V$ARCHIVE_DEST
3 WHERE STATUS='VALID';

DESTINATION
---------------------------------------------------------------------------------
BINDING TARGET REOPEN_SECS
--------- ------- -----------
/opt/ora9/product/9.2/dbs/arch
MANDATORY PRIMARY 0

SQL> SELECT * FROM V$ARCHIVE_PROCESSES;

PROCESS STATUS LOG_SEQUENCE STAT


---------- ---------- ------------ ----
0 STOPPED 0 IDLE
1 STOPPED 0 IDLE
2 STOPPED 0 IDLE
3 STOPPED 0 IDLE
4 STOPPED 0 IDLE
5 STOPPED 0 IDLE
6 STOPPED 0 IDLE
7 STOPPED 0 IDLE
8 STOPPED 0 IDLE
9 STOPPED 0 IDLE

10 rows selected.

SQL>

Chapter 6. Logical and Physical Database Structures

Tablespaces and Data Files

The database data is stored logically in tablespaces and physically in the data files corresponding to those tablespaces. By default all objects are created in the SYSTEM tablespace.
By separating data in other tablespaces from SYSTEM you will:

1. Separate Oracle dictionary from other objects reducing contention

2. Control I/O by allocating separate physical storage disks for different tablespaces.

3. Manage space quota for users in tablespaces

4. Have separate tablespaces for TEMP segments and UNDO management. You can also separate tablespaces by activity (heavy updates, indexes), and group application-related and module-related data together so that when maintenance is required the rest of the database stays available.

5. Backup one tablespace at a time.

6. Make part of the database read only.

Managing Tablespaces

When Oracle allocates space to objects, it is allocated in chunks of contiguous database blocks known as extents. Each object is allocated a segment, which has one or more extents. If the object is partitioned, each partition has its own segment.

If you store the extent management information in the data dictionary, the tablespace is called a dictionary-managed tablespace; such tablespaces generate undo information for extent operations. If you store the management information in the tablespace itself, using bitmaps in each data file, it is a locally managed tablespace; these do not generate rollback information. Each bit in the bitmap corresponds to a block. When an extent is allocated or freed for reuse, Oracle changes the bitmap values to show the new status of the blocks. This does not update data in the data dictionary and does not generate rollback.

Creating Tablespace

Tablespace name can not exceed 30 characters. The name should begin with an alphabetic character
and can contain alphanumeric characters and $, _ and #.

Dictionary Managed Tablespaces

CREATE TABLESPACE APPL_DATA

DATAFILE '/disk3/oradata/DB01/appl_data01.dbf' SIZE 100M,

         '/disk4/oradata/DB01/appl_data02.dbf' SIZE 100M

EXTENT MANAGEMENT DICTIONARY;

Another example:

CREATE TABLESPACE APPL_DATA

DATAFILE '/disk3/oradata/DB01/appl_data01.dbf' SIZE 100M

DEFAULT STORAGE (

INITIAL 256K

NEXT 256K
MINEXTENTS 2

PCTINCREASE 0

MAXEXTENTS 4096)

BLOCKSIZE 4K

MINIMUM EXTENT 256K

LOGGING

ONLINE

PERMANENT

EXTENT MANAGEMENT DICTIONARY

SEGMENT SPACE MANAGEMENT MANUAL;

The clauses in the CREATE TABLESPACE statement specify the following:

DEFAULT STORAGE - default storage parameters for new objects that are created in the tablespace. You can override these by specifying explicit storage parameters when creating the objects.

BLOCKSIZE - the block size for the objects in the tablespace. By default it is the database block size. It can be 2, 4, 8, 16 or 32K. For databases that mix DSS and OLTP workloads, multiple block sizes can be beneficial.

INITIAL - specifies the size of the object's (segment's) first extent. NEXT specifies the size of the
segment's next and successive extents in bytes. The default for INITIAL and NEXT is 5 database blocks.
The minimum is 3 blocks.

PCTINCREASE - specifies how much the third and subsequent extents grow over the preceding extent. The default is 50%, meaning that each next extent is 50% larger than the preceding extent. The minimum is 0. For example, with (INITIAL 1M NEXT 2M PCTINCREASE 0) the extent sizes are 1M, 2M, 2M, 2M, etc. If you specify PCTINCREASE 50, the extent sizes are 1M, 2M, 3M, 4.5M, 6.75M, etc. The actual NEXT size is rounded to a multiple of the block size.

MINEXTENTS - total number of extents allocated to the segment at the time of creation. The default is 1.

MINIMUM EXTENT - specifies that the extent sizes are a multiple of the size specified. You can use this
value to control fragmentation.

LOGGING - specifies that DDL operations and direct-load INSERT statements are recorded in the redo log files. LOGGING is the default, and you can omit the clause. When you specify NOLOGGING, the data is modified with minimal logging. Individual object creation settings override this clause.

ONLINE/OFFLINE - specifies whether the tablespace is online or offline.

PERMANENT/TEMPORARY - specify PERMANENT to store permanent objects or TEMPORARY for sort segments. The default is PERMANENT.

EXTENT MANAGEMENT - before 9i, dictionary-managed tablespaces were the default. In 9i you have to explicitly specify EXTENT MANAGEMENT DICTIONARY to get dictionary management. It is LOCAL by default or when the clause is omitted.

SEGMENT SPACE MANAGEMENT - this is applicable only to locally managed tablespaces (either MANUAL or AUTO). If you specify AUTO, Oracle manages the free space in the segments using bitmaps rather than free lists. For AUTO, Oracle ignores the storage parameters PCTUSED, FREELISTS and FREELIST GROUPS when creating objects.

Using Non Standard Block Sizes

You cannot alter DB_BLOCK_SIZE after creating the database. The DB_CACHE_SIZE parameter defines the buffer cache size associated with the standard block size. To create tablespaces with a non-standard block size, you set the appropriate buffer cache size for that block size. The parameter is DB_nK_CACHE_SIZE, where n is the non-standard block size. You can set it for 2, 4, 8, 16 or 32K, but not for the standard block size. For example, if you have to set up a tablespace that uses a block size of 16K, you must set the DB_16K_CACHE_SIZE parameter. By default DB_nK_CACHE_SIZE is 0. Temporary tablespaces must use the standard block size.
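A sketch, assuming the standard block size is not 16K and that SGA_MAX_SIZE leaves room for the extra cache; the tablespace name, file path, and cache size are illustrative only:

SQL> ALTER SYSTEM SET DB_16K_CACHE_SIZE = 32M;

SQL> CREATE TABLESPACE HIST_DATA
  2  DATAFILE '/disk6/oradata/DB01/hist_data01.dbf' SIZE 100M
  3  BLOCKSIZE 16K
  4  EXTENT MANAGEMENT LOCAL;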

Locally Managed Tablespace

Using CREATE TABLESPACE with the EXTENT MANAGEMENT LOCAL clause manages space more efficiently, with less fragmentation, and is more reliable. You cannot specify DEFAULT STORAGE, TEMPORARY or MINIMUM EXTENT for a locally managed tablespace. You can specify that Oracle manage extent sizes automatically by using the AUTOALLOCATE option.

CREATE TABLESPACE USER_DATA

DATAFILE '/disk3/oradata/DB01/appl_data01.dbf' SIZE 100M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K;

You can specify SEGMENT SPACE MANAGEMENT AUTO.

Free Space Management

MANUAL free space management is the only option available in pre-9i databases. In Oracle 9i you can manage the free space in blocks using bitmaps if you specify SEGMENT SPACE MANAGEMENT AUTO in CREATE TABLESPACE. If it is AUTO, Oracle ignores FREELISTS, FREELIST GROUPS and PCTUSED.

Undo Tablespace

Oracle can manage undo tablespaces automatically. For auto undo management you must have one
undo tablespace.

CREATE UNDO TABLESPACE UNDOTBS

DATAFILE '/disk3/oradata/DB01/undotbs01.dbf' SIZE 500M;

Temporary Tablespace

Oracle can manage space for sort operations more efficiently by using temporary tablespaces. More than one transaction can use the same sort segment, but each extent can be used by only one transaction.
Dictionary managed temp tablespace:

CREATE TABLESPACE TEMP

DATAFILE '/disk3/oradata/DB01/appl_data01.dbf' SIZE 500M

DEFAULT STORAGE (INITIAL 2M NEXT 2M PCTINCREASE 0 MAXEXTENTS UNLIMITED)

TEMPORARY;
For a TEMP tablespace the extent size should be a multiple of SORT_AREA_SIZE plus
DB_BLOCK_SIZE to reduce fragmentation. Keep PCTINCREASE = 0. For example, if your sort area size
is 64K and the database block size is 8K, provide the default storage of the TEMP tablespace as (INITIAL 136K NEXT 136K PCTINCREASE 0 MAXEXTENTS UNLIMITED).

To create a locally managed tablespace:

CREATE TEMPORARY TABLESPACE TEMP

TEMPFILE '/disk3/oradata/DB01/appl_data01.dbf' SIZE 500M

EXTENT MANAGEMENT LOCAL

UNIFORM SIZE 5M;

Tempfiles are always in nologging mode and non recoverable, they can not be renamed, taken offline,
made read only.

Altering a Tablespace

ALTER TABLESPACE APPL_DATA

DEFAULT STORAGE (INITIAL 2M NEXT 2M);

ALTER TABLESPACE TEMP TEMPORARY;

Changing Tablespace Availability

When making tablespace unavailable you can specify:

NORMAL (Oracle writes all the dirty buffers in the SGA to the data files of the tablespace and closes the data files)

ALTER TABLESPACE USER_DATA OFFLINE NORMAL;

TEMPORARY (Oracle performs a checkpoint on all online data files, but does not ensure the data files
are available)

ALTER TABLESPACE USER_DATA OFFLINE TEMPORARY;

IMMEDIATE (Oracle does not perform a checkpoint and does not make sure that all data files are available)

ALTER TABLESPACE USER_DATA OFFLINE IMMEDIATE;

FOR RECOVER (this option places tablespaces offline for point-in-time recovery. You can copy the data files belonging to the tablespace from a backup and apply the archived redo log files; deprecated in 9i).

You cannot take the SYSTEM tablespace offline because it holds the data dictionary.

To put it back online:

ALTER TABLESPACE USER_DATA ONLINE;

Coalescing Free Space

ALTER TABLESPACE <TABLESPACE_NAME> COALESCE;

Read Only Tablespace


ALTER TABLESPACE USERS READ ONLY; Pending transactions are allowed to commit or roll back, and the tablespace then goes into read-only mode. If you store the read-only data files on a CD-ROM or similar storage medium, you can set READ_ONLY_OPEN_DELAYED to TRUE to avoid data file availability testing at open time.

Adding Space to a Tablespace

ALTER TABLESPACE USERS ADD DATAFILE

'/disk5/oradata/DB01/users02.dbf' SIZE 25M;

If you are adding temp files to a locally managed temporary tablespace:

ALTER TABLESPACE USERS ADD TEMPFILE

'/disk5/oradata/DB01/temp02.dbf' SIZE 125M;

Dropping a Tablespace

DROP TABLESPACE USER_DATA INCLUDING CONTENTS

CASCADE CONSTRAINTS;

The actual data files are removed only if they are Oracle Managed Files. Otherwise, you have to remove the data files yourself with OS commands, or use:

DROP TABLESPACE USER_DATA INCLUDING CONTENTS AND DATAFILES;

Querying Tablespace Information

SQL> SELECT TABLESPACE_NAME, EXTENT_MANAGEMENT,


2 ALLOCATION_TYPE, CONTENTS,
3 SEGMENT_SPACE_MANAGEMENT
4 FROM DBA_TABLESPACES;

TABLESPACE_NAME EXTENT_MAN ALLOCATIO CONTENTS SEGMEN


------------------------------ ---------- --------- --------- ------
SYSTEM LOCAL SYSTEM PERMANENT MANUAL
UNDOTBS1 LOCAL SYSTEM UNDO MANUAL
TEMP LOCAL UNIFORM TEMPORARY MANUAL
CWMLITE LOCAL SYSTEM PERMANENT AUTO
DRSYS LOCAL SYSTEM PERMANENT AUTO
EXAMPLE LOCAL SYSTEM PERMANENT AUTO
INDX LOCAL SYSTEM PERMANENT AUTO
ODM LOCAL SYSTEM PERMANENT AUTO
TOOLS LOCAL SYSTEM PERMANENT AUTO
USERS LOCAL SYSTEM PERMANENT AUTO
XDB LOCAL SYSTEM PERMANENT AUTO

11 rows selected.

SQL> SELECT * FROM V$TABLESPACE;

TS# NAME INC


---------- ------------------------------ ---
3 CWMLITE YES
4 DRSYS YES
5 EXAMPLE YES
6 INDX YES
7 ODM YES
0 SYSTEM YES
8 TOOLS YES
1 UNDOTBS1 YES
9 USERS YES
10 XDB YES
2 TEMP YES

11 rows selected.

SQL> SELECT TABLESPACE_NAME, SUM(BYTES) FREE_SPACE


2 FROM DBA_FREE_SPACE
3 GROUP BY TABLESPACE_NAME;

TABLESPACE_NAME FREE_SPACE
------------------------------ ----------
CWMLITE 11141120
DRSYS 10813440
EXAMPLE 458752
INDX 26148864
ODM 11206656
SYSTEM 2555904
TOOLS 10420224
UNDOTBS1 198049792
USERS 26148864
XDB 196608

10 rows selected.

SQL> SELECT USER, SESSION_ADDR, SESSION_NUM, SQLADDR,


2 SQLHASH, TABLESPACE, EXTENTS, BLOCKS
3 FROM V$SORT_USAGE;

no rows selected

SQL> SELECT TABLESPACE_NAME, SEGMENT_TYPE, SUM(BYTES)


2 FROM DBA_SEGMENTS
3 WHERE OWNER='PM'
4 GROUP BY ROLLUP(TABLESPACE_NAME, SEGMENT_TYPE);

TABLESPACE_NAME SEGMENT_TYPE SUM(BYTES)


------------------------------ ------------------- ----------
EXAMPLE INDEX 196608
EXAMPLE TABLE 131072
EXAMPLE LOBINDEX 1114112
EXAMPLE LOBSEGMENT 1835008
EXAMPLE NESTED TABLE 65536
EXAMPLE 3342336
3342336

7 rows selected.

Managing Data Files


In Oracle 9i you can create files and have them removed when the tablespace is removed (OMF). When you REUSE an existing file, it should not belong to an existing database (it will be overwritten). If you specify REUSE, you can omit the SIZE clause. If you specify a size, it should be identical to the actual size of the file. If the file does not exist, Oracle creates a new data file even if you specify REUSE.

If you omit the full path, Oracle creates the file in the default database directory or the current directory, depending on the OS.
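A minimal sketch using REUSE; the tablespace name and file path are hypothetical, and because REUSE is specified the statement succeeds whether or not the file already exists:

CREATE TABLESPACE APPL_HIST
DATAFILE '/disk3/oradata/DB01/appl_hist01.dbf' SIZE 100M REUSE;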

Sizing Data Files

CREATE TABLESPACE APPL_DATA

DATAFILE ‘/disk2/oradata/DB01/appl_data001.dbf’

SIZE 500M

AUTOEXTEND ON NEXT 100M MAXSIZE 2000M;

If the file already exists in the database and you want to enable the autoextend feature:

ALTER DATABASE

DATAFILE '/disk2/oradata/DB01/appl_data001.dbf'

AUTOEXTEND ON NEXT 100M MAXSIZE 2000M;

To resize the datafile:

ALTER DATABASE

DATAFILE ‘/disk2/oradata/DB01/appl_data001.dbf’

RESIZE 1500M;

If the size specified is smaller than the space already used by data in the file, Oracle returns an error.

OMF Data Files

OMF is appropriate for smaller non-production databases or databases that run on disks that use a logical volume manager (LVM). An LVM is software that combines partitions on multiple physical disks into one logical drive.

The following are advantages of using OMF:

Prevention of errors (Oracle removes the files associated with a dropped tablespace, so you cannot accidentally remove a file that is still in use)

Standard naming convention (Oracle names the files)

Space retrieval - same as above

Easy script writing - application vendors need not worry about the syntax of specifying directory names in
the scripts.
Creating Oracle Managed Files

Before you can create OMF you need to set the DB_CREATE_FILE_DEST (directory where Oracle
creates files) in init.ora or with an ALTER SYSTEM or ALTER SESSION statement. The directory must be local to the database server and must already exist; Oracle will not create the directory, it will only create the data file.

You can create data files using the CREATE DATABASE, CREATE TABLESPACE, ALTER DATABASE.
You do not need to specify data file names for the SYSTEM or UNDO tablespaces, and you can omit the DATAFILE clause in the CREATE TABLESPACE statement.

The datafiles you create using OMF will have a standard format.

For an OMF data file: ora_%t_%u.dbf

For an OMF temp file: ora_%t_%u.tmp

Where %t is the tablespace name and %u is a unique 8-character string that Oracle derives. If the tablespace name has more than 8 characters, only the first 8 of them are used. The file names are written in the alert
log file.

You can also use the OMF to create control files and online redo log files for the database. Since those
two types can be multiplexed, Oracle provides another parameter to specify the location of the files - DB_CREATE_ONLINE_LOG_DEST_n, in which n can be 1-5. You can also alter these parameters with
ALTER SYSTEM or ALTER SESSION.

The redo log file names will have the format ora_%g_%u.log, in which %g is the log group number and %u is
an 8 character string unique to the database. The control file name will have the format of ora_%u.ctl, in
which %u is an 8 character string.

Example of init.ora with OMF:

UNDO_MANAGEMENT = AUTO

DB_CREATE_ONLINE_LOG_DEST_1 = ‘/ora1/oradata/MYDB’

DB_CREATE_ONLINE_LOG_DEST_2 = ‘/ora2/oradata/MYDB’

DB_CREATE_FILE_DEST = ‘/ora1/oradata/MYDB’

The CONTROL_FILES parameter is not set. Create the database using the following:

CREATE DATABASE MYDB

DEFAULT TEMPORARY TABLESPACE TEMP;

The following will be created

The SYSTEM tablespace in ‘/ora1/oradata/MYDB’


The TEMP tablespace in ‘/ora1/oradata/MYDB’

A control file in ‘/ora1/oradata/MYDB’

A control file in ‘/ora2/oradata/MYDB’

One member of the first online redo log group in ‘/ora1/oradata/MYDB’ and a second in
‘/ora2/oradata/MYDB’

One member of the second online redo log group in ‘/ora1/oradata/MYDB’ and a second in
‘/ora2/oradata/MYDB’.

Because we specified the UNDO_MANAGEMENT clause and did not specify a name for the undo
tablespace, Oracle creates SYS_UNDOTBS as undo tablespace and creates its data file under
/ora1/oradata/MYDB. If you omit the DEFAULT TEMPORARY TABLESPACE clause, Oracle does not
create one at all. The data files and temp files Oracle creates will have a default size of 100M, which is autoextensible with no maximum size. Each redo log member will be 100M in size by default.

Another example.

ALTER SESSION SET DB_CREATE_FILE_DEST = '/ora5/oradata/mydb';

CREATE TABLESPACE APP_DATA

EXTENT MANAGEMENT DICTIONARY;

ALTER SESSION SET DB_CREATE_FILE_DEST = '/ora6/oradata/mydb';

CREATE TABLESPACE APP_INDEX;

Overriding the Default File Size

If you want a different size for the OMF files, you can specify the DATAFILE clause without the file name.

CREATE TABLESPACE PAY_DATA DATAFILE SIZE 10M

AUTOEXTEND OFF;

CREATE TABLESPACE PAY_INDEX

DATAFILE SIZE 20M AUTOEXTEND OFF,

SIZE 30M AUTOEXTEND ON MAXSIZE 100M,

SIZE 1M;
ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/ora5/oradata/MYDB';

ALTER TABLESPACE USERS ADD DATAFILE;

ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/ora8/oradata/MYDB';

ALTER TABLESPACE APP_DATA

ADD DATAFILE SIZE 200M AUTOEXTEND OFF;

Renaming and Relocating Files

You can rename and relocate data files by using ALTER TABLESPACE ... RENAME DATAFILE or ALTER DATABASE RENAME FILE.

1. Take tablespace offline:

ALTER TABLESPACE USER_DATA OFFLINE;

2. Copy or move files to the new location, or rename file with an OS command.

3. Rename the file in the database either with:

ALTER DATABASE RENAME FILE

‘/disk1/oradata/DB01/userdata2.dbf’ TO

‘/disk1/oradata/DB01/user_data2.dbf’;

Or

ALTER TABLESPACE USER_DATA RENAME DATAFILE

‘/disk1/oradata/DB01/userdata2.dbf’ TO

‘/disk1/oradata/DB01/user_data2.dbf’;

4. Bring the tablespace online

ALTER TABLESPACE USER_DATA ONLINE;

To rename or relocate files of the SYSTEM tablespace (which cannot be taken offline):

1. Shut down the database. Complete backup.

2. Copy or rename files on the disk with OS command.


3. STARTUP MOUNT

4. ALTER DATABASE RENAME FILE

5. ALTER DATABASE OPEN.

Querying Data File Information

SQL> SELECT FILE#, RFILE#, STATUS, BYTES, BLOCK_SIZE


2 FROM V$DATAFILE;

FILE# RFILE# STATUS BYTES BLOCK_SIZE


---------- ---------- ------- ---------- ----------
1 1 SYSTEM 429916160 8192
2 2 ONLINE 209715200 8192
3 3 ONLINE 20971520 8192
4 4 ONLINE 20971520 8192
5 5 ONLINE 144834560 8192
6 6 ONLINE 26214400 8192
7 7 ONLINE 20971520 8192
8 8 ONLINE 10485760 8192
9 9 ONLINE 26214400 8192
10 10 ONLINE 39976960 8192

10 rows selected.

SQL> SELECT FILE#, RFILE#, STATUS, BYTES, BLOCK_SIZE


2 FROM V$TEMPFILE;

FILE# RFILE# STATUS BYTES BLOCK_SIZE


---------- ---------- ------- ---------- ----------
1 1 ONLINE 41943040 8192

SQL> SELECT TABLESPACE_NAME, FILE_NAME, BYTES,


2 AUTOEXTENSIBLE
3 FROM DBA_DATA_FILES;

TABLESPACE_NAME
------------------------------
FILE_NAME
-------------------------------------------------------------------------------- BYTES AUT
---------- ---
SYSTEM
/opt/ora9/oradata/orcl/system01.dbf
429916160 YES

UNDOTBS1
/opt/ora9/oradata/orcl/undotbs01.dbf
209715200 YES

10 rows selected.

SQL> SELECT TABLESPACE_NAME, FILE_NAME, BYTES,


2 AUTOEXTENSIBLE
3 FROM DBA_TEMP_FILES;

TABLESPACE_NAME
------------------------------
FILE_NAME
-------------------------------------------------------------------------------- BYTES AUT
---------- ---
TEMP
/opt/ora9/oradata/orcl/temp01.dbf
41943040 YES

SQL>

Chapter 7. Segment and Storage Structures

Segments are logical storage units that fit between a tablespace and an extent.

A data block is the smallest logical unit of storage. You define block size with DB_BLOCK_SIZE. Data
block consists of the following:

Common and variable header - information about the type of block and its address. The type can be UNDO, DATA or INDEX. The common block header takes 24 bytes, and the variable (transaction) header occupies (24 x INITRANS) bytes. By default the value of INITRANS for tables is 1 and for indexes it is 2.

Table directory - info about tables that have rows in this block. The table directory takes 4 bytes.

Row directory - row address.

Row data - the actual rows stored in this area

Free space - the space that is available for new rows or for extending existing rows through updates. Deletions and updates may cause fragmentation in the block; the free space is coalesced when needed.

Block Storage Parameters

PCTFREE and PCTUSED - these control the free space available for inserts and updates on the rows in the block.

INITRANS and MAXTRANS - control the number of concurrent transactions that can modify or create
data in the block. You can specify the parameters when you create the object.

FREELISTS - each segment has one or more free lists that list the blocks available for future inserts. The default is 1 free list for every segment.

PCTFREE (default 10) - specifies what percentage of the block should be kept free for future updates. If the table is expected to undergo many updates that increase the size of the rows, set a higher PCTFREE.

PCTUSED (default 40) - specifies when the block is considered again for new inserts. After a block becomes full, as determined by PCTFREE, Oracle considers inserting new rows into it only when the used space in the block falls below PCTUSED; at that point the block is returned to the free list. If the table has many inserts and deletes, and the updates do not cause the row length to increase, set PCTFREE low and PCTUSED high. A high PCTUSED helps reuse space freed by deletes faster. If the table rows are large or are never updated, set PCTFREE very low so that each data row can fit in a single block and the blocks are filled.

You can specify PCTFREE when you create a table, an index, or a cluster. You can specify PCTUSED when creating tables and clusters, but not indexes.

INITRANS and MAXTRANS - these transaction entry settings reserve space for transactions in the block. Base these parameters on the maximum number of transactions that can touch a block at any given time. INITRANS reserves space in the block header for DML transactions. If you do not specify INITRANS, Oracle defaults to 1 for table data blocks and 2 for index and cluster blocks. The MAXTRANS default is OS specific, but the maximum is 255.

If the row length is large or the number of users accessing the table is low, set INITRANS to a low value. Some tables, such as application control tables, are accessed frequently and need a higher INITRANS.
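A sketch showing where these settings appear at object creation; the ORDERS table, the APPL_DATA tablespace, and the specific values are illustrative only:

CREATE TABLE orders (
  order_id   NUMBER,
  order_data VARCHAR2(200))
TABLESPACE APPL_DATA
PCTFREE 20
PCTUSED 40
INITRANS 2
MAXTRANS 255
STORAGE (INITIAL 256K NEXT 256K PCTINCREASE 0);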

Automatic Space Management

If a segment does not contain LOBs and is in a locally managed tablespace, you can use automatic segment space management instead of PCTUSED, FREELISTS and FREELIST GROUPS. Bitmaps are used instead of free lists.

Example of auto space management (can not be done in OEM)

CREATE TABLESPACE APPL_DATA2

DATAFILE '/disk2/oradata/DB01/appl_data02.dbf'

SIZE 200M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K

SEGMENT SPACE MANAGEMENT AUTO;

Extents

An extent is a logical storage unit that is made up of contiguous data blocks. INITIAL, NEXT, PCTINCREASE, MINEXTENTS and MAXEXTENTS are the storage parameters for extents. When extents are managed locally, these storage parameters do not affect the size of the extents. Once an object is created, its INITIAL and MINEXTENTS values cannot be changed. Changes to NEXT and PCTINCREASE take effect when the next extent is allocated (the existing extents are not changed).
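For example, to change NEXT and PCTINCREASE for the future extents of an existing dictionary-managed table (the EMP table is used only as an illustration):

ALTER TABLE EMP STORAGE (NEXT 1M PCTINCREASE 0);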

Allocating extents

Oracle allocates a new extent when an object is first created or when all the blocks in the segment's existing extents are full.

In dictionary-managed tablespaces, Oracle allocates extents as follows:

1. Oracle searches the free space for contiguous blocks matching the requested extent size. If the requested extent is more than 5 data blocks, Oracle adds one more block to the request to reduce internal fragmentation.

2. If an exact match fails, Oracle searches the contiguous free blocks again for a free extent larger than the required size.

3. If step 2 fails, Oracle coalesces the free space and repeats step 2.

4. If step 3 fails, Oracle checks whether the data files are marked autoextensible. If so, Oracle tries to extend a file and repeats step 2. If Oracle cannot extend a file, it issues an error.

Extents are deallocated when you drop an object. To free up the extents:

TRUNCATE TABLE EMP DROP STORAGE;

this removes all extents above MINEXTENTS.

The ... REUSE STORAGE option does not deallocate extents; it just removes the rows.

You can also manually deallocate extents:

ALTER <TABLE, INDEX, CLUSTER> <NAME> DEALLOCATE UNUSED;

Querying Extent Information

SQL> select owner, segment_type, tablespace_name, file_id, bytes


2 from dba_extents
3 where owner='SCOTT';

OWNER SEGMENT_TYPE TABLESPACE_NAME


------------------------------ ------------------- ------------------------------ FILE_ID BYTES
---------- ----------
SCOTT TABLE SYSTEM
1 65536

SCOTT TABLE SYSTEM


1 65536

SCOTT TABLE SYSTEM


1 65536

OWNER SEGMENT_TYPE TABLESPACE_NAME


------------------------------ ------------------ ------------------------------ FILE_ID BYTES
---------- ----------
SCOTT TABLE SYSTEM
1 65536

SCOTT INDEX SYSTEM


1 65536

SCOTT INDEX SYSTEM


1 65536

6 rows selected.
SQL> SELECT TABLESPACE_NAME, MAX(bytes) LARGEST,
2 MIN(bytes) SMALLEST, COUNT(*) EXT_COUNT
3 FROM dba_free_space
4 GROUP BY tablespace_name;

TABLESPACE_NAME LARGEST SMALLEST EXT_COUNT


------------------------------ ---------- ---------- ----------
CWMLITE 10878976 262144 2
DRSYS 10813440 10813440 1
EXAMPLE 458752 458752 1
INDX 26148864 26148864 1
ODM 11206656 11206656 1
SYSTEM 2555904 2555904 1
TOOLS 10420224 10420224 1
UNDOTBS1 197525504 65536 4
USERS 26148864 26148864 1
XDB 196608 196608 1

10 rows selected.

SQL>

Segments

A segment is a logical storage unit that is made up of one or more extents. A segment can belong to only
one tablespace, but may spread across multiple data files belonging to that tablespace.

Types of segments:

Table segment

Table partition segment - each partition of a table, possibly residing in a different tablespace, is a separate segment

Cluster segment - consists of one or more tables. The data is stored in key order and all tables within the
cluster share the same storage characteristics. Typically, tables stored in a cluster are frequently joined (such as EMP and DEPT).

Nested table segments - columns of a table that are themselves tables; each such column is stored in a separate
segment.

Index segment - each index is stored in its own segment

IOT - an index-organized table is a table and an index combined in a single segment, stored in
index order. Queries against an IOT can be very fast because Oracle needs to read only a single segment to find
the results.

Index partition segment - similar to table partition

Temporary segment - holds overflow information from sort operations that does not fit into memory.

LOB segment - LOB column data larger than about 4KB is stored out of line in LOB segments.

Undo segment - stores the before-image (undo) information needed to roll back changes.

Bootstrap - a special system segment that is used to initialize the data dictionary upon instance startup

Querying Segment Information

SQL> SELECT tablespace_name, segment_type, COUNT(*)


2 SEG_CNT FROM dba_segments
3 WHERE owner != 'SYS'
4 GROUP BY tablespace_name, segment_type;

TABLESPACE_NAME                SEGMENT_TYPE          SEG_CNT
------------------------------ ------------------ ----------
ODM                            INDEX                      41
ODM                            TABLE                      39
ODM                            LOBINDEX                    7
ODM                            LOBSEGMENT                  7
XDB                            INDEX                      34
XDB                            TABLE                      27
XDB                            LOBINDEX                  271
XDB                            LOBSEGMENT                271
DRSYS                          INDEX                      97
DRSYS                          TABLE                      53
DRSYS                          LOBINDEX                    2
DRSYS                          LOBSEGMENT                  2
SYSTEM                         INDEX                     215
SYSTEM                         TABLE                     171
SYSTEM                         LOBINDEX                   34
SYSTEM                         LOBSEGMENT                 34
SYSTEM                         NESTED TABLE                1
SYSTEM                         INDEX PARTITION            24
SYSTEM                         TABLE PARTITION            27
CWMLITE                        INDEX                      92
CWMLITE                        TABLE                      57
EXAMPLE                        INDEX                     132
EXAMPLE                        TABLE                      61
EXAMPLE                        LOBINDEX                   23
EXAMPLE                        LOBSEGMENT                 23
EXAMPLE                        NESTED TABLE                3
EXAMPLE                        INDEX PARTITION           104
EXAMPLE                        TABLE PARTITION            28

28 rows selected.

SQL> SELECT tablespace_name, extent_size, current_users,
  2  total_blocks, used_blocks, free_blocks, max_blocks
  3  FROM V$SORT_SEGMENT;

TABLESPACE_NAME  EXTENT_SIZE  CURRENT_USERS  TOTAL_BLOCKS  USED_BLOCKS  FREE_BLOCKS  MAX_BLOCKS
---------------  -----------  -------------  ------------  -----------  -----------  ----------
TEMP                     128              0             0            0            0           0

SQL>

Managing Undo Segments

Undo segments record the old values that were changed by a transaction. Undo segments provide read
consistency and the ability to undo changes. When a transaction starts, Oracle assigns it an undo
segment; when the transaction completes (COMMIT or ROLLBACK), the next transaction in the session may
use a different undo segment. For an update or delete, the before-image data is saved in the undo segment
before the corresponding data blocks are modified. For inserts, the undo entries include only the ROWID,
because to undo an insert the row simply must be deleted. The undo changes are also recorded in the redo log
(important for transactions that are not yet committed or rolled back at the time of a system crash or media
recovery).

Creating Undo Segments

When you create a database, Oracle creates the SYSTEM undo segment in the SYSTEM tablespace.
Every database also needs at least one non-SYSTEM undo segment. Although multiple undo tablespaces can exist
in a database, only one can be active at any given time. Two init.ora parameters control the use of
automatic undo management in the database: UNDO_MANAGEMENT (AUTO or MANUAL, cannot be
changed dynamically) and UNDO_TABLESPACE (default is SYS_UNDOTBS, can be changed
dynamically).

Maintaining Undo Segment

After you create the database, you can create additional undo tablespaces and switch to them:

CREATE UNDO TABLESPACE SYS_UNDOTBS_NIGHT
  DATAFILE 'undo.dbf' SIZE 15M;

ALTER SYSTEM
  SET UNDO_TABLESPACE=SYS_UNDOTBS_NIGHT;

The amount of time that undo data is retained for consistent reads is controlled with UNDO_RETENTION,
specified in seconds.
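For example (the retention value here is arbitrary), to keep committed undo for 15 minutes:

ALTER SYSTEM SET UNDO_RETENTION = 900;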

Snapshot Too Old Error


An ORA-1555 snapshot too old error occurs when Oracle can not produce a read-consistent view of the
data. This error usually happens when a transaction commits after a long running query has started and
the undo information is overwritten or the undo extents are de-allocated.

Querying UNDO

SQL> SELECT segment_name, owner, tablespace_name, initial_extent INI,


2 next_extent NEXT, min_extents MIN, status STAT
3 FROM dba_rollback_segs
4/

SEGMENT_NAME                   OWNER  TABLESPACE_NAME        INI       NEXT        MIN STAT
------------------------------ ------ --------------- ---------- ---------- ---------- ------
SYSTEM                         SYS    SYSTEM              114688                     1 ONLINE
_SYSSMU1$                      PUBLIC UNDOTBS1            131072                     2 ONLINE
_SYSSMU2$                      PUBLIC UNDOTBS1            131072                     2 ONLINE
_SYSSMU3$                      PUBLIC UNDOTBS1            131072                     2 ONLINE
_SYSSMU4$                      PUBLIC UNDOTBS1            131072                     2 ONLINE
_SYSSMU5$                      PUBLIC UNDOTBS1            131072                     2 ONLINE
_SYSSMU6$                      PUBLIC UNDOTBS1            131072                     2 ONLINE
_SYSSMU7$                      PUBLIC UNDOTBS1            131072                     2 ONLINE
_SYSSMU8$                      PUBLIC UNDOTBS1            131072                     2 ONLINE
_SYSSMU9$                      PUBLIC UNDOTBS1            131072                     2 ONLINE
_SYSSMU10$                     PUBLIC UNDOTBS1            131072                     2 ONLINE

11 rows selected.

SQL> SELECT * FROM v$rollname;

USN NAME
---------- ------------------------------
0 SYSTEM
1 _SYSSMU1$
2 _SYSSMU2$
3 _SYSSMU3$
4 _SYSSMU4$
5 _SYSSMU5$
6 _SYSSMU6$
7 _SYSSMU7$
8 _SYSSMU8$
9 _SYSSMU9$
10 _SYSSMU10$

11 rows selected.

SQL> SELECT * FROM v$rollstat


2 WHERE usn=1;

USN LATCH EXTENTS RSSIZE WRITES XACTS GETS


---------- ---------- ---------- ---------- ---------- ---------- ----------
WAITS OPTSIZE HWMSIZE SHRINKS WRAPS EXTENDS AVESHRINK
---------- ---------- ---------- ---------- ---------- ---------- ----------
AVEACTIVE STATUS CUREXT CURBLK
---------- --------------- ---------- ----------
1 0 8 516096 24076 0 368
0 516096 0 1 0 0
6553 ONLINE 5 0

SQL> SELECT begin_time, end_time, undoblks, maxquerylen


2 from v$undostat;

BEGIN_TIM END_TIME UNDOBLKS MAXQUERYLEN


--------- --------- ---------- -----------
29-SEP-04 29-SEP-04 16 3
29-SEP-04 29-SEP-04 23 3

SQL>
Chapter 8. Managing Indexes, Tables and Constraints

Table types:

Relational - permanent and can be partitioned.

Temporary - store data specific to a session. Store intermediary results. Use CREATE GLOBAL
TEMPORARY TABLE.

Index organized - store data in a structured, primary key-sorted manner. You must define a primary key for
each IOT. These tables do not use separate segments for the table and the primary key index; they use the
same storage for both. CREATE TABLE … ORGANIZATION INDEX

External tables - store data in flat files outside the database (new to 9i). These tables are read-only and no
indexes are allowed on them. The default access driver is ORACLE_LOADER, which uses SQL*Loader-style syntax.
CREATE TABLE … ORGANIZATION EXTERNAL (see the sketch after this list).

Object tables - each row represents an object.
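A sketch of an external table definition (the directory object EXT_DIR, the path, the file orders.dat, and the columns are hypothetical illustrations, not part of the original notes):

CREATE DIRECTORY EXT_DIR AS '/u01/app/ext_files';

CREATE TABLE ORDERS_EXT (
  ORDER_NUM  NUMBER(10),
  ORDER_DATE DATE)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER              -- default access driver
  DEFAULT DIRECTORY EXT_DIR
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ',')
  LOCATION ('orders.dat'));

The data stays in the flat file; SELECTs against ORDERS_EXT read the file through the access driver, and no DML or indexes are possible on it.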

Create table

CREATE TABLE ORDERS (

ORDER_NUM NUMBER(10,3),

ORDER_DATE DATE);

Oracle data types

CHAR - fixed-length character data type. Data is padded to fit the column width. The size defaults to 1; the maximum is
2000 bytes.

VARCHAR2(n) - variable-length character data. The maximum length is given in parentheses. You must specify a size;
there is no default. The maximum is 4000 bytes. Unlike CHAR, the characters are not blank padded.

NCHAR - similar to CHAR, but used to store Unicode character set data. NCHAR is fixed in length; the
maximum size is 2000 bytes and the default is 1 character.

NVARCHAR2(n) - same as VARCHAR2, but stores Unicode variable-length data; the maximum is 4000 bytes.

LONG - stores variable-length character data up to 2GB. Provided for backward compatibility; use CLOB and
NCLOB instead. A table can have only one LONG column.

NUMBER(p,s) - stores fixed- and floating-point numbers. You can optionally specify a precision (p) and a
scale (s). The default is 38 digits of precision.

DATE - stores date data. You can store dates from January 1, 4712 BC to December 31, 9999 AD.
TIMESTAMP - stores date and time with fractional seconds precision up to 9 digits.

TIMESTAMP WITH TIME ZONE - similar to TIMESTAMP, but also stores time zone displacement (the
difference between the local and the Universal time zone in hours and minutes).

TIMESTAMP WITH LOCAL TIME ZONE - similar to TIMESTAMP, but the value is normalized to the
database time zone when stored; when the user retrieves the data, it is shown in the user's local session time zone.

INTERVAL YEAR TO MONTH - stores a period of time as years and months; the default year precision is 2 (can be 0-9)

INTERVAL DAY TO SECOND - used to represent a period of time as days, hours, minutes and seconds,
stores the difference between 2 date values.

RAW - variable-length type used to store unstructured (binary) data, up to 2000 bytes. Provided for backward
compatibility; use BLOB and BFILE instead.

LONG RAW - same as RAW, but can store up to 2GB of binary data.

BLOB - stores up to 4GB of unstructured binary data.

CLOB - stores up to 4GB of character data.

NCLOB - stores up to 4GB of Unicode character data

BFILE - stores unstructured binary data in OS files outside the database. The file size can be up to 4GB.
Oracle only stores a pointer to a file.
ROWID - stores binary data representing a physical row address of a row. Occupies 10 bytes.

UROWID - stores binary data representing any type of row address; physical, logical, or foreign. Up to
4000 bytes.

Collection types are used to represent more than one element, such as an array - VARRAY and NESTED
TABLE. Elements in a VARRAY are ordered and have a maximum limit. Elements in a nested table are not
ordered and there is no upper limit on the number of elements (a sketch follows the REF entry below).

REF - is the relationship datatype.
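A brief sketch of the two collection types (the type, table, and column names are hypothetical):

CREATE TYPE PHONE_LIST AS VARRAY(5) OF VARCHAR2(20);   -- ordered, at most 5 elements
/
CREATE TYPE ADDRESS_TAB AS TABLE OF VARCHAR2(100);     -- unordered, no upper limit
/
CREATE TABLE CONTACTS (
  CONTACT_ID NUMBER,
  PHONES     PHONE_LIST,
  ADDRESSES  ADDRESS_TAB)
  NESTED TABLE ADDRESSES STORE AS ADDRESSES_NT;        -- nested table gets its own segment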

Specifying Storage

If a table is very large, create it as a partitioned table, placing each partition in a separate tablespace. Oracle
allocates a segment to the table; this segment has the number of extents specified by
MINEXTENTS. A large number of extents affects the performance of operations such as truncation and full
table scans by causing additional I/O.

CREATE TABLE JAKE.ORDERS (
  ORDER_NUM  NUMBER,
  ORDER_DATE DATE)
  TABLESPACE USER_DATA
  PCTFREE 5
  PCTUSED 75
  INITRANS 1
  MAXTRANS 255
  STORAGE (INITIAL 512K NEXT 512K PCTINCREASE 0 MINEXTENTS 1 MAXEXTENTS 100
           FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL KEEP);
Storing LOB structures

A table can contain CLOB, BLOB, and NCLOB columns (they take storage parameters different from the table's):

CREATE TABLE LICENSE_INFO


(DRIVER_ID VARCHAR2 (20),
PHOTO BLOB)
TABLESPACE APP_DATA STORAGE (INITIAL 4M NEXT 4M PCTINCREASE 0)
LOB (PHOTO) STORE AS PHOTO_LOB
(TABLESPACE APP_LARGE_DATA
DISABLE STORAGE IN ROW
STORAGE (INITIAL 128M NEXT 128M PCTINCREASE 0)
CHUNK 4000
PCTVERSION 20
NOCACHE LOGGING);

The LOB segment is given the name PHOTO_LOB. If a LOB column value is larger than about 4000 bytes, the data is stored in
the LOB segment (out-of-line storage). The DISABLE/ENABLE STORAGE IN ROW clause specifies whether
LOB data should be stored inline or out of line (ENABLE is the default). The CHUNK clause specifies the
number of bytes of data that will be read during LOB manipulation (it has to be a multiple of the database
block size). PCTVERSION specifies the percentage of all used LOB data space that can be occupied by old
versions of LOB data pages. If the LOB is read or updated frequently, use the CACHE clause.

Creating table from an existing table

CREATE TABLE ACCEPTED_ORDERS


(ORD_NUMBER, ORD_DATE, PRODUCT_CD, QTY)
TABLESPACE USERS
PCTFREE 0
STORAGE (INITIAL 128K NEXT 128K PCTINCREASE 0)
AS
SELECT ORDER_NUM, ORDER_DATE, PRODUCT_ID, QUANTITY
FROM ORDERS
WHERE STATUS = ‘A’;

CREATE TABLE ... AS SELECT does not work with tables that have a LONG column.

Partitioning tables

Partitioning is breaking a large table into manageable pieces based on the values in a column (the partition key). Each partition is
allocated its own segment, possibly in a separate tablespace.

Range partitioning - for example, partitioning a transaction table on the transaction date by month or quarter.

CREATE TABLE ORDER_TRANSACTION (
  ORD_NUMBER NUMBER(12),
  ORD_DATE   DATE,
  PROD_ID    VARCHAR2(15),
  QUANTITY   NUMBER(15,3))
PARTITION BY RANGE (ORD_DATE)
(PARTITION FY2001Q4 VALUES LESS THAN
   (TO_DATE('01012002', 'MMDDYYYY'))
   TABLESPACE ORD_2001Q4,
 PARTITION FY2002Q1 VALUES LESS THAN
   (TO_DATE('04012002', 'MMDDYYYY'))
   TABLESPACE ORD_2002Q1
   STORAGE (INITIAL 500M NEXT 500M)
   INITRANS 2 PCTFREE 0)
NOLOGGING;

Hash partitioning - more appropriate when you do not know how much data will fall into a range or how big
the partitions will be. Hash partitions use a hashing algorithm on the partitioning columns. The number of
partitions should be specified as a power of 2 (2, 4, 8, 16, ...). Choose a column with unique or near-unique values.

CREATE TABLE DOCUMENTS1 (


DOC_NUMBER NUMBER(12),
DOC_TYPE VARCHAR2 (20),
CONTENTS VARCHAR2(600))
PARTITION BY HASH (DOC_NUMBER, DOC_TYPE)
PARTITIONS 4 STORE IN (DOC101, DOC102, DOC103)
STORAGE (INITIAL 64K NEXT 64K PCTINCREASE 0 MAXEXTENTS 4096);

or

CREATE TABLE DOCUMENTS2 (


DOC_NUMBER NUMBER(12),
DOC_TYPE VARCHAR2 (20),
CONTENTS VARCHAR2(600))
PARTITION BY HASH (DOC_NUMBER, DOC_TYPE)
(PARTITION DOC201 TABLESPACE DOC201,
PARTITION DOC202 TABLESPACE DOC202,
PARTITION DOC203 TABLESPACE DOC203,
PARTITION DOC204 TABLESPACE DOC204 )
STORAGE (INITIAL 64K NEXT 64K PCTINCREASE 0 MAXEXTENTS 4096);

List partitioning - use when you know all the values kept in the column and want to create a partition for each value
(you can combine several values into the same partition). NULL can form a separate list. Oracle rejects a
row whose value is not included in any list.

CREATE TABLE POPULATION_STATS


(STATE VARCHAR2 (2),
COUNTY VARCHAR2 (30),
CITY VARCHAR2 (30),
MEN NUMBER,
WOMEN NUMBER,
BCHILD NUMBER,
GCHILD NUMBER)
PARTITION BY LIST (STATE)
(PARTITION SC VALUES ('TX', 'LA', 'OK') TABLESPACE SC_DATA,
PARTITION SW VALUES ('MN', 'AZ') TABLESPACE SW_DATA,
PARTITION SE VALUES ('AR', 'MS', 'AL') TABLESPACE SE_DATA);

Composite partitioning - uses range partitioning to create the partitions and hash partitioning to create the
subpartitions. Only the subpartitions are physically created on disk; the partitions are logical representations only.
CREATE TABLE CARS
  (MODEL_YEAR NUMBER(4),
   MODEL      VARCHAR2 (30),
   MANUFACTR  VARCHAR2 (50),
   QUANTITY   NUMBER)
PARTITION BY RANGE (MODEL_YEAR)
SUBPARTITION BY HASH (MODEL) SUBPARTITIONS 4
STORE IN (TSMK1, TSMK3, TSMK4)
(PARTITION M2001 VALUES LESS THAN (2002),
 PARTITION M2002 VALUES LESS THAN (2003),
 PARTITION MMAX  VALUES LESS THAN (MAXVALUE))
STORAGE (INITIAL 64K NEXT 64K PCTINCREASE 0 MAXEXTENTS 4096);

better example:

CREATE TABLE CARS2 (
  MODEL_YEAR NUMBER(4),
  MODEL      VARCHAR2 (30),
  MANUFACTR  VARCHAR2 (50),
  QUANTITY   NUMBER)
PARTITION BY RANGE (MODEL_YEAR)
SUBPARTITION BY HASH (MODEL) SUBPARTITIONS 4
(PARTITION M2001 VALUES LESS THAN (2002)
   STORAGE (INITIAL 128K NEXT 128K)
   (SUBPARTITION M2001_SP1 TABLESPACE TS011,
    SUBPARTITION M2001_SP2 TABLESPACE TS012,
    SUBPARTITION M2001_SP3 TABLESPACE TS013,
    SUBPARTITION M2001_SP4 TABLESPACE TS014),
 PARTITION M2002 VALUES LESS THAN (2003)
   (SUBPARTITION M2002_SP1 TABLESPACE TS021,
    SUBPARTITION M2002_SP2 TABLESPACE TS022,
    SUBPARTITION M2002_SP3 TABLESPACE TS023,
    SUBPARTITION M2002_SP4 TABLESPACE TS024))
STORAGE (INITIAL 64K NEXT 64K PCTINCREASE 0 MAXEXTENTS 4096);

Using other create clauses

NOLOGGING - no redo is generated for the operation, so media recovery cannot restore the object. A full backup
of the tablespace afterwards is advised.

PARALLEL - the parameter PARALLEL_THREADS_PER_CPU defines the number of parallel processes
per CPU (usually 2).

CACHE/NOCACHE - with CACHE, the blocks for these objects are kept in (not aged out of) the buffer cache.

Creating temporary tables

CREATE GLOBAL TEMPORARY TABLE INVALID_ORDERS


(ORDER# NUMBER (8),
ORDER_DT DATE,
VALUE NUMBER (12,2))
ON COMMIT DELETE ROWS;
To define a session specific table use ON COMMIT PRESERVE ROWS;

Altering tables

If you change NEXT, PCTINCREASE, MAXEXTENTS, FREELISTS, or FREELIST GROUPS, the change does not
affect the extents that are already allocated. You cannot change INITIAL and MINEXTENTS with ALTER
TABLE.

ALTER TABLE ORDERS


STORAGE (NEXT 512K PCTINCREASE 0 MAXEXTENTS UNLIMITED);

Allocating and deallocating extents

ALTER TABLE ORDERS ALLOCATE EXTENT (SIZE 200K
  DATAFILE 'c:\..\...dbf');

You can use the DBMS_SPACE.UNUSED_SPACE procedure to find the space above the high water mark (HWM) of a
segment. The HWM is the highest point to which the segment has ever grown.
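A minimal sketch of calling DBMS_SPACE.UNUSED_SPACE (assuming its standard signature; the owner, table, and variable names are illustrative):

SET SERVEROUTPUT ON
DECLARE
  v_total_blocks  NUMBER;
  v_total_bytes   NUMBER;
  v_unused_blocks NUMBER;
  v_unused_bytes  NUMBER;
  v_lue_file_id   NUMBER;
  v_lue_block_id  NUMBER;
  v_last_used_blk NUMBER;
BEGIN
  DBMS_SPACE.UNUSED_SPACE (
    segment_owner             => 'SCOTT',
    segment_name              => 'ORDERS',
    segment_type              => 'TABLE',
    total_blocks              => v_total_blocks,
    total_bytes               => v_total_bytes,
    unused_blocks             => v_unused_blocks,   -- blocks above the HWM
    unused_bytes              => v_unused_bytes,
    last_used_extent_file_id  => v_lue_file_id,
    last_used_extent_block_id => v_lue_block_id,
    last_used_block           => v_last_used_blk);
  DBMS_OUTPUT.PUT_LINE('Blocks above the HWM: ' || v_unused_blocks);
END;
/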

You can also use:

ALTER TABLE ORDERS DEALLOCATE UNUSED KEEP 100K;

Oracle deallocates storage on TRUNCATE unless you specify TRUNCATE TABLE ORDERS REUSE STORAGE;

Reorganizing tables

The old segment is dropped only after you create a new segment.

Moving tables (queries are allowed while moving, but not DML, grants retained).

ALTER TABLE ORDERS MOVE


TABLESPACE NEW_DATA
STORAGE (INITIAL 50M NEXT 5M PCTINCREASE 0)
PCTFREE 0 PCTUSED 50
INITRANS 2 NOLOGGING;

Dropping a table

DROP TABLE X CASCADE CONSTRAINTS;

The indexes, constraints, triggers, and privileges on the table are dropped. Views and snapshots that refer to
the table are not dropped, but are rendered invalid.

TRUNCATE TABLE X DROP|REUSE STORAGE;

You must disable the foreign key constraints prior to truncating.


Dropping columns

ALTER TABLE SCOTT.EMP
  DROP (empno, ename)
  CASCADE CONSTRAINTS;

ALTER TABLE SCOTT.EMP
  SET UNUSED (empno, ename);

ALTER TABLE SCOTT.EMP
  DROP UNUSED COLUMNS;

Analyzing tables

Validating structure

As a result of hardware problems or bugs, some blocks can become corrupted. Oracle returns a
corruption error only when the corrupt block is accessed. You can use ANALYZE to validate the structure of a table. If
any blocks are not readable, ANALYZE returns an error and the ROWIDs of the bad rows are inserted into a table. You
can specify the name of the table to insert the ROWIDs into; by default the table name is INVALID_ROWS. You
can create the table using

SQL> @c:\oracle\ora90\rdbms\admin\utlvalid.sql

SQL> DESC invalid_rows

This example validates the structure of the ORDERS table:

ANALYZE TABLE ORDERS VALIDATE STRUCTURE


INTO SCOTT.CORRUPTED_ROWS;

ANALYZE TABLE ORDERS VALIDATE STRUCTURE CASCADE;

ANALYZE TABLE GLEDGER PARTITION (MAY2002) VALIDATE STRUCTURE;

Finding chained / migrated rows

By default, Oracle keeps information about chained rows in the CHAINED_ROWS table, created by

SQL> @c:\oracle\ora90\rdbms\admin\utlchain.sql

Here is how to fix migrated rows in a table:

1. ANALYZE TABLE ORDERS LIST CHAINED ROWS;

2. Find the number of migrated rows.


SELECT COUNT(*)
FROM CHAINED_ROWS
WHERE OWNER_NAME='SCOTT'
AND TABLE_NAME='ORDERS';

3. If there are migrated rows, create a temporary table to hold them.

CREATE TABLE TEMP_ORDERS AS
SELECT * FROM ORDERS
WHERE ROWID IN (SELECT HEAD_ROWID
                FROM CHAINED_ROWS
                WHERE OWNER_NAME='SCOTT'
                AND TABLE_NAME='ORDERS');

4. Delete the migrated rows from the ORDERS table.

DELETE FROM ORDERS WHERE ROWID IN (SELECT HEAD_ROWID
                FROM CHAINED_ROWS
                WHERE OWNER_NAME='SCOTT'
                AND TABLE_NAME='ORDERS');

5. INSERT INTO ORDERS


SELECT * FROM TEMP_ORDERS;

Before deleting the rows, make sure you disable all foreign keys that reference ORDERS.

Collecting statistics

You can calculate exact statistics (COMPUTE) for a table or, for large tables, sample a few rows and estimate the
statistics (ESTIMATE). When you ANALYZE a table, Oracle collects the total number of rows, the
number of chained rows, the number of blocks, the number of unused blocks, the average free space in each block,
and the average row length.

ANALYZE TABLE ORDERS COMPUTE STATISTICS;

ANALYZE TABLE ORDERS ESTIMATE STATISTICS


SAMPLE 200 ROWS;

- SAMPLE 20 PERCENT;

To remove stats:

ANALYZE TABLE ORDERS DELETE STATISTICS;

Querying tables information

Table descriptions
You primarily use DBA_TABLES, USER_TABLES and ALL_TABLES to query info about the tables.

Column descriptions
DBA_TAB_COLUMNS, USER_TAB_COLUMNS and ALL_TAB_COLUMNS.

Dictionary views with tables information

ALL_ALL_TABLES, DBA_ALL_TABLES, USER_ALL_TABLES - similar to DBA_TABLES, but show
information about both relational tables and object tables.

ALL_TAB_PARTITIONS, DBA_TAB_PARTITIONS, USER_TAB_PARTITIONS - partitioning information,
storage parameters, and partition-level statistics gathered.

ALL_TAB_SUBPARTITIONS, DBA_TAB_SUBPARTITIONS, USER_TAB_SUBPARTITIONS - subpartition
information for composite partitions in the database.

ALL_OBJECTS, DBA_OBJECTS, USER_OBJECTS - information about objects (tablespace,
timestamp, etc.).

DBA_EXTENTS, USER_EXTENTS - information about the extents allocated to a table.

The structure of a row

A row piece (the part of a block that comprises an entire row, or part of one) has two parts - a row header and the
column data. The row header is about 3 bytes and describes the columns in the piece, chaining, and cluster membership.
After the row header comes the column data. Each column has 2 parts - length and data. The length occupies 1 byte
for data less than 251 bytes and 3 bytes for data over 250 bytes.
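You can inspect the stored form of a column with the built-in VSIZE and DUMP functions; a small illustration, assuming the sample SCOTT schema is present:

SELECT ENAME,
       VSIZE(ENAME) STORED_BYTES,   -- number of bytes in the stored representation
       DUMP(ENAME)  RAW_DUMP        -- data type code, length, and the byte values
FROM   SCOTT.EMP
WHERE  ROWNUM = 1;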

Using ROWID

ROWID - an 18-character (base-64) representation that identifies a row.

Categories of ROWIDs:

Physical ROWID - identifies each row in a table, partition, subpartition, or cluster.

Logical ROWID - identifies a row in an index-organized table.

Formats of ROWIDs:

Extended - base-64 encoding (OOOOOOFFFBBBBBBRRR)

Where OOOOOO is the data object number,
FFF is the relative data file number where the block is located,
BBBBBB is the block ID within the file where the row is located, and
RRR is the row number within the block.

SQL> SELECT ROWID, ORDER_NUM


FROM ORDERS;

ROWID ORDER_NUM
AAAFqsAADAAAAfTAAA 5945055

Restricted - this is the pre-Oracle8 format, in base 16. The format is BBBBBBBB.RRRR.FFFF,
where BBBBBBBB is the data block, RRRR is the row number, and FFFF is the data file.

DBMS_ROWID

A supplied package whose functions extract the components of a ROWID and convert ROWIDs between the restricted and extended formats.
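A sketch of decoding an extended ROWID with DBMS_ROWID (the ORDERS table is the running example from this chapter):

SELECT ROWID,
       DBMS_ROWID.ROWID_OBJECT(ROWID)       OBJECT_ID,   -- data object number
       DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) FILE_NO,     -- relative data file number
       DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) BLOCK_NO,    -- block within the file
       DBMS_ROWID.ROWID_ROW_NUMBER(ROWID)   ROW_NO       -- row within the block
FROM   ORDERS
WHERE  ROWNUM = 1;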

Managing indexes

You can specify up to 32 columns per composite index (30 for a bitmap index).


Types of indexes:

Bitmap - does not repeatedly store the index column values. Each distinct value is treated as a key, and a bit is set
for each corresponding ROWID. Bitmap indexes are for columns with low cardinality (for example, gender or day/night).

B-tree index - (default)

Non-unique b-tree index

Unique b-tree index

Reverse key b-tree index - if the value is 54321, Oracle reverses it and stores 12345. These are useful for
unique indexes when inserts are always in ascending order of the indexed column; reversing the key distributes
the new entries across the index leaf blocks instead of concentrating them in the last block.

Function-based indexes - created on expressions, such as SUBSTR(EMPID,1,2), that are used in the WHERE
clause.

Creating indexes

CREATE UNIQUE INDEX


IND2_ORDERS
ON ORDERS (ORDER_NUM);

CREATE BITMAP INDEX IND3_ORDERS


ON ORDERS (STATUS);

Specifying index storage

If you omit index storage parameters, Oracle assigns the tablespace default storage parameters; PCTUSED
cannot be specified for indexes. Keep INITRANS higher than that of the corresponding table,
because an index block can hold a larger number of entries than a table block.

CREATE UNIQUE INDEX IND2_ORDERS
  ON ORDERS (ORDER_NUM)
  TABLESPACE USER_INDEX
  PCTFREE 25
  INITRANS 2
  MAXTRANS 255
  STORAGE (INITIAL 128K NEXT 128K PCTINCREASE 0 MINEXTENTS 1 MAXEXTENTS 100
           FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL KEEP);

Using other CREATE INDEX clauses

CREATE UNIQUE INDEX IND2_ORDERS
  ON ORDERS (ORDER_NUM)
  TABLESPACE USER_INDEX
  LOGGING
  NOSORT             (skip the sort when the data is already in order; sorting is the default)
  COMPRESS
  COMPUTE STATISTICS
  ONLINE             (table remains available for DML while the index is being created)
  PARALLEL;

Partitioning

Types of partitioned indexes:

Local prefixed - a local index whose leading (leftmost) columns are in the order of the partition key

Local non-prefixed - the partition key columns are not the leading columns, but the index is local

Global prefixed - a global index whose leading columns are in the order of the partition key

Global non-prefixed - a global index whose leading columns are not in the partition key order

Reverse key indexes

CREATE UNIQUE INDEX IND2_ORDERS


ON ORDERS (ORDER_DATE, ORDER_NUM)
TABLESPACE USER_INDEX
REVERSE;

Function based indexes

Before creating function-based indexes you must set QUERY_REWRITE_ENABLED to TRUE, set
QUERY_REWRITE_INTEGRITY to TRUSTED, and have COMPATIBLE set to 8.1.0 or higher.
CREATE INDEX IND_ORDERS
ON ORDERS (SUBSTR(PRODUCT_ID,1,2))
TABLESPACE USER_INDEX;

This index is pre-calculated for ...WHERE SUBSTR(PRODUCT_ID,1,2)='BT';

Index organized tables (IOTs)

Index-organized tables are useful for tables in which the data is accessed mostly through the primary key
(such as lookup tables with code and description columns). The entire table is stored as part of the index segment.

CREATE TABLE IOT_EXAMPLE (


PK_COL1 NUMBER (4),
PK_COL2 VARCHAR2 (10),
NON_PK_COL1 VARCHAR2 (40),
NON_PK_COL2 DATE,
CONSTRAINT PK_IOT PRIMARY KEY
(PK_COL1, PK_COL2))
ORGANIZATION INDEX
TABLESPACE INDX
STORAGE (INITIAL 32K NEXT 32K PCTINCREASE 0);
Altering indexes

ALTER INDEX SCOTT.ORDERS


STORAGE (NEXT 512K MAXEXTENTS UNLIMITED);

... ALLOCATE EXTENT SIZE 200K;

... DEALLOCATE UNUSED KEEP 100K;

ALTER INDEX IND_IND1_ORDERS COALESCE;

ALTER INDEX IND_ORDERS REBUILD


TABLESPACE NEW_INDEX_TS
STORAGE (INITIAL 25M NEXT 5M PCTINCREASE 0)
PCTFREE 20 INITRANS 4
COMPUTE STATISTICS
ONLINE NOLOGGING;

Dropping indexes

DROP INDEX IND;

Analyzing indexes

ANALYZE INDEX IND5_IND VALIDATE STRUCTURE;

ANALYZE INDEX IND5_IND ESTIMATE STATISTICS
  SAMPLE 40 PERCENT;

ANALYZE INDEX IND5_IND DELETE STATISTICS;

Monitoring index usage

To see if the index is used:

1. ALTER INDEX IND MONITORING USAGE;

2 SELECT * FROM V$OBJECT_USAGE;


The column START_MONITORING has a timestamp showing when the monitoring began

3. SELECT /*+ INDEX(dept pk_dept) */ * FROM DEPT


WHERE deptno = 10;

4. ALTER INDEX pk_dept NOMONITORING USAGE;   (to stop monitoring)

Querying index information


Use DBA_IND_COLUMNS, USER_IND_COLUMNS and ALL_IND_COLUMNS views to display columns
on an index.

Identity (INDEX_OWNER, INDEX_NAME, TABLE_OWNER, TABLE_NAME, COLUMN_NAME)


Column characteristics (COLUMN_LENGTH, COLUMN_POSITION, DESCEND)

Views on Indexes:

ALL_IND_PARTITIONS, DBA_IND_PARTITIONS, USER_IND_PARTITIONS - partition-level statistics on an
index

ALL_IND_SUBPARTITIONS, DBA_IND_SUBPARTITIONS, USER_IND_SUBPARTITIONS - subpartition
information for composite-partitioned indexes

ALL_IND_EXPRESSIONS, DBA_IND_EXPRESSIONS, USER_IND_EXPRESSIONS - the column information or
expressions used to create function-based indexes

INDEX_STATS - statistics from the most recent ANALYZE INDEX ... VALIDATE STRUCTURE

Managing constraints

Types:

NOT NULL

Only defined at column level.

CREATE TABLE ORDERS


(ORDER_NUM NUMBER (4) CONSTRAINT NN_ORDER_NUM NOT NULL);

ALTER TABLE ORDERS MODIFY ORDER_DATE NOT NULL;

CHECK

A check constraint cannot use subqueries, SYSDATE, or ROWNUM; one column can have more than one
CHECK constraint defined, and the column can be NULL.

CREATE TABLE BONUS


(BONUS NUMBER (9,2),
CONSTRAINT CK_BONUS CHECK (BONUS > 0));

ALTER TABLE BONUS


ADD CONSTRAINT CK_BONUS2 CHECK (BONUS < SALARY);

UNIQUE

ALTER TABLE BONUS


ADD CONSTRAINT UQ_EMP_ID UNIQUE (DEPT, EMP_ID)
USING INDEX TABLESPACE INDX
STORAGE (INITIAL 32K NEXT 32K PCTINCREASE 0);
ALTER TABLE EMP ADD
SSN VARCHAR2 (11) CONSTRAINT UQ_SSN UNIQUE;

PRIMARY KEY

CREATE TABLE EMPLOYEE


(DEPT VARCHAR2 (2),
EMP_ID NUMBER (4),
NAME VARCHAR2 (20) NOT NULL,
SSN VARCHAR2 (11),
SALARY NUMBER (9,2) CHECK (SALARY > 0),
CONSTRAINT PK_EMPLOYEE PRIMARY KEY (DEPT, EMP_ID)
USING INDEX TABLESPACE INDX
STORAGE (INITIAL 64K NEXT 64K)
NOLOGGING,
CONSTRAINT UQ_SSN UNIQUE (SSN)
USING INDEX TABLESPACE INDX)
TABLESPACE USERS
STORAGE (INITIAL 128K NEXT 64K);

FOREIGN KEY

ALTER TABLE CITY ADD CONSTRAINT FK_STATE
  FOREIGN KEY (COUNTRY_CODE, STATE_CODE)
  REFERENCES STATE (COUNTRY_CODE, STATE_CODE)
ON DELETE CASCADE;

... ON DELETE SET NULL;

Creating Disabled Constraints

ALTER TABLE BONUS


ADD CONSTRAINT CK_BONUS CHECK (BONUS > 0) DISABLE;

Dropping constraints

ALTER TABLE BONUS DROP CONSTRAINT CK_BONUS2;

ALTER TABLE EMP DROP UNIQUE (EMP_ID) CASCADE;

ALTER TABLE EMP DROP PRIMARY KEY CASCADE;

Enabling and disabling constraints


When you create constrain6ts they are enabled. If you disable constraints, the indexes and unique keys
are dropped. If you re-enable constraints they indexes and keys are re-created. You can disable any
constraint

ALTER TABLE BONUS DISABLE CONSTRAINT CK_BONUS;

ALTER TABLE BONUS DISABLE PRIMARY KEY CASCADE;

ALTER TABLE STATE ENABLE PRIMARY KEY USING INDEX


TABLESPACE USER_INDEX STORAGE (INITIAL 2M NEXT 2M);

You can use the EXCEPTIONS INTO clause to find the rows that violate a referential integrity or uniqueness
condition (usually into a table named EXCEPTIONS, created by SQL> @c:\oracle\ora90\rdbms\admin\utlexcpt.sql)

ALTER TABLE STATE ENABLE PRIMARY KEY


EXCEPTIONS INTO EXCEPTIONS;
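After the ENABLE fails, the offending rows can be retrieved by joining back through the ROW_ID column; a sketch, assuming the default EXCEPTIONS table layout (ROW_ID, OWNER, TABLE_NAME, CONSTRAINT):

SELECT S.*
FROM   STATE S, EXCEPTIONS E
WHERE  S.ROWID = E.ROW_ID
AND    E.TABLE_NAME = 'STATE'
AND    E.CONSTRAINT = 'PK_STATE';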

ALTER TABLE BONUS MODIFY CONSTRAINT CK_BONUS DISABLE;

ALTER TABLE STATE MODIFY CONSTRAINT PK_STATE


DISABLE CASCADE;

ALTER TABLE BONUS MODIFY CONSTRAINT CK_BONUS ENABLE;

ALTER TABLE STATE MODIFY CONSTRAINT PK_STATE USING INDEX


TABLESPACE USER_INDEX STORAGE (INITIAL 2M NEXT 2M) ENABLE;

Validated constraints

Constraint states:

ENABLE VALIDATE - the default for ENABLE; existing data is checked to verify that it conforms to the constraint

ENABLE NOVALIDATE - does not validate existing data, but does validate future data

DISABLE VALIDATE - the constraint is disabled (any index used to enforce it is dropped) but the constraint
remains valid. No DML is allowed on the table, because future changes cannot be verified.

DISABLE NOVALIDATE - the default for DISABLE; the constraint is disabled and neither existing nor future data is checked.

Example for large data warehouse load

1. ALTER TABLE WHO1 MODIFY CONSTRAINT PK_WHO1


DISABLE NOVALIDATE;

2. Load the batch, then

ALTER TABLE WHO1 MODIFY CONSTRAINT PK_WHO1


ENABLE NOVALIDATE;
Deferring constraint checks

By default, Oracle checks whether the data conforms to the constraint when the statement is executed.
Oracle allows you to change this behavior if the constraint is created with the DEFERRABLE clause (NOT
DEFERRABLE is the default). INITIALLY IMMEDIATE specifies that the constraint be checked for
conformance at the end of each statement; INITIALLY DEFERRED checks for conformance at the end of the
transaction. You have to drop and re-create the constraint to change its DEFERRABLE status (you can't use ALTER TABLE).
If the constraint is DEFERRABLE, you can switch between immediate and deferred checking by using SET
CONSTRAINTS or ALTER SESSION SET CONSTRAINTS.

ALTER TABLE CUSTOMER ADD CONSTRAINT PK_CUST_ID


PRIMARY KEY (CUST_ID) DEFERRABLE
INITIALLY IMMEDIATE;

ALTER TABLE ORDERS ADD CONSTRAINT FK_CUST_ID


FOREIGN KEY (CUST_ID)
REFERENCES CUSTOMER (CUST_ID)
ON DELETE CASCADE DEFERRABLE;

SET CONSTRAINTS ALL DEFERRED;

ALTER TABLE CUSTOMER MODIFY CONSTRAINT PK_CUST_ID


INITIALLY DEFERRED;

Querying constraint information

SELECT CONSTRAINT_NAME, CONSTRAINT_TYPE, DEFERRED,


DEFERRABLE, STATUS
FROM DBA_CONSTRAINTS
WHERE TABLE_NAME = 'ORDERS';

SELECT CONSTRAINT_NAME, COLUMN_NAME, POSITION


FROM DBA_CONS_COLUMNS
WHERE TABLE_NAME = 'ORDERS';

Chapter 9. Managing users, security, and globalization support

Most resource limits are set at the session level. When a user exceeds a limit, Oracle aborts the current
operation, rolls back the changes, and returns an error.
The following parameters control the session:

SESSIONS_PER_USER - limits the number of concurrent user sessions.

CPU_PER_SESSION - limits amount of CPU time a session can use (hundredths of a second)

CPU_PER_CALL - limits amount of CPU time a single SQL statement can use (hundredths of a second).
Good for runaway queries, but be careful for batch jobs.

LOGICAL_READS_PER_SESSION - limits number of data blocks read in session, including blocks from
memory and physical reads.

LOGICAL_READS_PER_CALL - limits the number of data blocks read by a single SQL statement,
including blocks from memory and physical reads.

PRIVATE_SGA - limits amount of space allocated in the SGA for private areas, per session. Private areas
for SQL and PL/SQL are created in the multithreaded architecture. The limit does not apply to dedicated
server architecture.

CONNECT_TIME - the maximum number of minutes a session can stay connected (total elapsed time, not
CPU time). When the limit is reached, the current transaction is rolled back and the user is disconnected.

IDLE_TIME - number of minutes a session can be idle (disconnected afterwards)

COMPOSITE_LIMIT - a weighted sum of four resource limits: CPU_PER_SESSION,
LOGICAL_READS_PER_SESSION, CONNECT_TIME, and PRIVATE_SGA.

Managing passwords

You can set the following by using profiles

Account locking - number of failed login attempts and number of days the password will be locked

Password expiration - how often passwords must be changed, whether passwords can be reused, and the
grace period after which the password must be changed.

Password complexity - should not be the same as user id, simple words, etc.

You can use the following parameters in profiles:

FAILED_LOGIN_ATTEMPTS - the number of failed login attempts allowed before the account is locked

PASSWORD_LOCK_TIME - the number of days the user account remains locked after the failed-login limit is reached

PASSWORD_LIFE_TIME - number of days a user can use a password.

PASSWORD_GRACE_TIME - number of days the user will get a warning before a password expires.

PASSWORD_REUSE_TIME - specifies the number of days before a password can be used again after it is changed.

PASSWORD_REUSE_MAX - specifies number of password changes before a password can be reused.

PASSWORD_VERIFY_FUNCTION - verification of complexity (oracle provided scripts)

Managing profiles

CREATE PROFILE ACCOUNTING_USER


LIMIT SESSIONS_PER_USER 6
CONNECT_TIME 1440
IDLE_TIME 120
LOGICAL_READS_PER_CALL 1000000
PASSWORD_LIFE_TIME 60
PASSWORD_REUSE_TIME 90
PASSWORD_REUSE_MAX UNLIMITED
FAILED_LOGIN_ATTEMPTS 6
PASSWORD_LOCK_TIME UNLIMITED;

ALTER USER SCOTT ACCOUNT UNLOCK;

Composite limit

The composite limit specifies the total resource for a session.


COMPOSITE_LIMIT - a weighted sum of four resource limits ; CPU_PER_SESSION,
LOGICAL_READS_PER_SESSION, CONNECT_TIME, and PRIVATE_SGA.
The cost associated with each of these resources is set at the database level by using ALTER
RESOURCE COST. The default cost is 0, which means the resource is considered inexpensive.

ALTER RESOURCE COST


LOGICAL_READS_PER_SESSION 10
CONNECT_TIME 2;

ALTER PROFILE ACCOUNTING_USER LIMIT COMPOSITE_LIMIT 1500000;

The cost of the composite limit is calculated as follows:

Cost= (10 X LOGICAL_READS_PER_SESSION) + (2 X CONNECT_TIME)

Cost= (10 X 100,000 block reads) + (2 X 120 minutes) = 1,000,240

Password verification function

The Oracle-supplied script in rdbms/admin that creates a password verification function is utlpwdmg.sql. A custom
verification function must have this signature:

FUNCTION SYS.<function_name>
  (<userid_variable>       IN VARCHAR2,
   <password_variable>     IN VARCHAR2,
   <old_password_variable> IN VARCHAR2)
RETURN BOOLEAN

The password has to:

Not be the same as the username,
Have a minimum length,
Not be too simple,
Contain at least one letter, one digit, and one punctuation mark, and
Differ from the previous password by at least 3 letters.
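A minimal sketch of a custom verification function along these lines (not the Oracle-supplied utlpwdmg.sql function; only two of the checks are shown, and the function must be created in the SYS schema):

CREATE OR REPLACE FUNCTION my_verify_function
  (username     VARCHAR2,
   password     VARCHAR2,
   old_password VARCHAR2)
RETURN BOOLEAN IS
BEGIN
  -- Reject a password equal to the username
  IF UPPER(password) = UPPER(username) THEN
    RAISE_APPLICATION_ERROR(-20001, 'Password may not equal username');
  END IF;
  -- Enforce a minimum length
  IF LENGTH(password) < 6 THEN
    RAISE_APPLICATION_ERROR(-20002, 'Password must be at least 6 characters');
  END IF;
  RETURN TRUE;
END;
/

-- Attach it to a profile:
ALTER PROFILE ACCOUNTING_USER LIMIT
  PASSWORD_VERIFY_FUNCTION my_verify_function;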

Altering profiles

ALTER PROFILE ACCOUNTING_USER LIMIT
  PASSWORD_VERIFY_FUNCTION VERIFY_FUNCTION
  COMPOSITE_LIMIT 1500;

Dropping profiles

DROP PROFILE ACCOUNTING_USER CASCADE;

Assigning profiles

ALTER USER SCOTT PROFILE ACCOUNTING_USER;

Querying profile information

SELECT RESOURCE_NAME, LIMIT


FROM DBA_PROFILES
WHERE PROFILE = 'ACCOUNTING_USER'
AND RESOURCE_TYPE = 'KERNEL';

SELECT * FROM USER_PASSWORD_LIMITS;

SELECT * FROM RESOURCE_COST;

Users

CREATE USER JOHN


IDENTIFIED BY "B1S2!"
DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP
QUOTA UNLIMITED ON USERS
QUOTA 1M ON INDX
PROFILE ACCOUNTING_USER
PASSWORD EXPIRE
ACCOUNT UNLOCK;

ALTER USER JOHN


IDENTIFIED BY PASSWORD
DEFAULT TABLESPACE APP_DATA;

ALTER USER CHOOSER ACCOUNT UNLOCK;

ALTER USER CHOOSER PASSWORD EXPIRE;


ALTER USER JOHN QUOTA 0 ON USERS;

DROP USER JOHN CASCADE;

Authenticating users

The passwords stored in the database are encrypted. By default, the password is not encrypted when sent
over the network. To encrypt the password, you must set ORA_ENCRYPT_LOGIN to TRUE on the client
machine.
When you use authentication by the OS, Oracle verifies the OS login account and connects to the database
(users do not need to specify a password). Oracle does not store the passwords of OS-authenticated users, but they
must have a username in the database. The parameter OS_AUTHENT_PREFIX determines the prefix for
OS authentication. By default, the value is OPS$ (for OS user ALEX the database username will be
OPS$ALEX). When ALEX connects without specifying a username, he connects as
OPS$ALEX. You can also set OS_AUTHENT_PREFIX to "".
To create an OS user:

CREATE USER OPS$ALEX IDENTIFIED EXTERNALLY;

To connect to a remote database using OS authorization set the REMOTE_OS_AUTHENT to TRUE.

Complying with Oracle licensing terms

LICENSE_MAX_SESSIONS – the maximum number of concurrent sessions; once the limit is reached, only users
with the RESTRICTED SESSION privilege are allowed to connect. The default is 0 (unlimited). Set this parameter
if your license is based on concurrent database usage.

LICENSE_SESSIONS_WARNING – a warning threshold on concurrent sessions; when it is reached, Oracle writes a
message to the alert log for each additional connection.

LICENSE_MAX_USERS – set this parameter if your license is based on the total number of named users.

ALTER SYSTEM
SET LICENSE_MAX_SESSIONS = 256
LICENSE_SESSIONS_WARNING = 200;

Querying user information

SELECT USERNAME, DEFAULT_TABLESPACE,


TEMPORARY_TABLESPACE, PROFILE,
ACCOUNT_STATUS, EXPIRY_DATE
FROM DBA_USERS
WHERE USERNAME=’JOHN’;

SELECT * FROM ALL_USERS


WHERE USERNAME LIKE 'SYS%';

SELECT TABLESPACE_NAME, BYTES, MAX_BYTES, BLOCKS,


MAX_BLOCKS
FROM DBA_TS_QUOTAS
WHERE USERNAME=’JOHN’;

SELECT USERNAME, OSUSER, MACHINE, PROGRAM


FROM V$SESSION
WHERE USERNAME = ‘JOHN’;

SELECT A.NAME, B.VALUE


FROM V$STATNAME A, V$SESSTAT B, V$SESSION C
WHERE A.STATISTIC# = B.STATISTIC#
AND B.SID = C.SID
AND C.USERNAME = ‘JOHN’
AND A.NAME LIKE ‘%session%’;

Managing privileges

Object privileges (granted at the object level; they include the privilege to execute a program unit or to perform operations on a specific object)

GRANT SELECT, UPDATE ON CUSTOMER


TO JAMES WITH GRANT OPTION;

GRANT INSERT (CUSTOMER_ID) ON CUSTOMER TO JAMES;

Some object privileges:

ON COMMIT REFRESH – grants the privilege to create a refresh-on-commit snapshots on the table.

QUERY REWRITE – grants the privilege to create a materialized view for query rewrite using the
specified table.

WRITE – allows the external table access driver to write a log file or a bad file to the directory. This privilege is
used only with external tables.

UNDER – grants the privilege to create a sub view under a view.

Any privilege received on a table provides the grantee the privilege to lock the table.

You can specify ALL (GRANT ALL ON CUSTOMER TO JAMES)

Even if you have the DBA privilege, to grant privileges on objects owned by another user you must have
been granted the appropriate privilege WITH GRANT OPTION.

Multiple privileges can be granted to multiple users – GRANT INSERT, UPDATE, SELECT ON
CUSTOMER TO ADMIN_ROLE, JULIE, SCOTT;

System privileges – granted at a database level.

The difference between SYSOPER and SYSDBA – SYSDBA can create databases.
To protect the dictionary, Oracle provides the O7_DICTIONARY_ACCESSIBILITY parameter. If it is set to TRUE, any
user with an ANY privilege (such as SELECT ANY TABLE) can access the SYS-owned dictionary tables.

SELECT ANY, INSERT ANY, UPDATE ANY are system privileges, they do not apply to any particular
object.

Granting/revoking SYSTEM privileges

GRANT CREATE TABLE TO JOHN WITH ADMIN OPTION;

REVOKE UPDATE ON CUSTOMER FROM JOHN;

REVOKE REFERENCES ON CUSTOMER


FROM JAMES CASCADE CONSTRAINTS;

Some info:

If a user has been granted privileges on an object by multiple grantors and only one of them revokes the grant, the
user can still perform the action.

You can not selectively revoke column privileges

Querying privileges information

SELECT * FROM DBA_TAB_PRIVS


WHERE TABLE_NAME = ‘CUSTOMER’;

SELECT * FROM DBA_SYS_PRIVS


WHERE GRANTEE = ‘JOHN’;

Creating roles

When you create a database, Oracle creates 6 predefined roles. These roles are defined in the sql.bsq
script.

CONNECT, RESOURCE, DBA, SELECT_CATALOG_ROLE (the ability to query the dictionary views and
tables), EXECUTE_CATALOG_ROLE (the privilege to execute the SYS-owned dictionary packages), and
DELETE_CATALOG_ROLE (the ability to delete from the audit trail table SYS.AUD$). When you run
catproc.sql, the script executes catexp.sql, which creates two more roles:

EXP_FULL_DATABASE – the ability to perform full exports; and IMP_FULL_DATABASE – the ability to
perform full imports.

Removing roles

DROP ROLE HR_UPDATE;


Enabling and disabling roles

If a role is not a default role for a user, it is not enabled when the user connects.

ALTER USER JOHN DEFAULT ROLE


CONNECT, ACCOUNTS_MANAGER;

… DEFAULT ROLE ALL;

… DEFAULT ROLE ALL EXCEPT RESOURCE, ACCOUNTS_ADMIN;

You enable or disable roles using the SET ROLE command. You can specify the maximum number of
roles that can be enabled in the MAX_ENABLED_ROLES (20 is default).

SET ROLE ACCOUNTS_ADMIN IDENTIFIED BY MANAGER;

SET ROLE ALL;

SET ROLE NONE;

Querying role information

SELECT * FROM DBA_ROLES;

SELECT * FROM SESSION_ROLES;

SELECT * FROM DBA_ROLE_PRIVS
WHERE GRANTEE = 'JOHN';

SELECT * FROM ROLE_ROLE_PRIVS
WHERE ROLE = 'DBA';

SELECT * FROM ROLE_SYS_PRIVS
WHERE ROLE = 'CONNECT';

SELECT * FROM ROLE_TAB_PRIVS
WHERE TABLE_NAME = 'CUSTOMER';

Auditing the database

When you create a database, Oracle creates the SYS.AUD$ table, called the audit trail. To enable auditing, set
the AUDIT_TRAIL initialization parameter to TRUE, DB, or OS.

Statement auditing – (AUDIT SELECT BY SCOTT audits all SELECTS by the user)

Privilege auditing – (AUDIT CREATE TRIGGER)

Object auditing – (AUDIT SELECT ON JOHN.CUSTOMER)


You can restrict auditing

BY USER

WHENEVER NOT SUCCESSFUL

BY SESSION (per session)

BY ACCESS (each statement execution)

To audit connections to and disconnections from the database, use AUDIT SESSION.
To audit only successful logins – AUDIT SESSION WHENEVER SUCCESSFUL;
To audit only failed logins – AUDIT SESSION WHENEVER NOT SUCCESSFUL;
To audit successful logins of specific users – AUDIT SESSION BY JOHN, ALEX WHENEVER
SUCCESSFUL;
To audit successful updates and deletes on a table:
AUDIT UPDATE, DELETE ON JOHN.CUSTOMER
BY ACCESS WHENEVER SUCCESSFUL;

To turn off auditing:

NOAUDIT UPDATE, DELETE ON JOHN.CUSTOMER;

Using globalization support

You define the database character set when you create the database using the CHARACTER SET clause (the default
is US7ASCII). Other widely used character sets are WE8ISO8859P1 (the Western European 8-bit ISO
8859 Part 1 standard) and UTF8 (a variable-width Unicode encoding).

You can change the character set only if the new character set is a superset of the old one: ALTER DATABASE
CHARACTER SET WE8ISO8859P1;

Back up the database prior to doing this.

The Unicode character set

Unicode is a universal character encoding scheme that allows you to store information using a single
character set, regardless of platform or language.
UTF-16 is the 16-bit encoding of Unicode.

Using NLS parameters

You can specify the NLS initialization parameters:

By altering init.ora - NLS_DATE_FORMAT = 'YYYY-MM-DD'

By setting an environment variable – in UNIX or in the Windows registry

By ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD'

By using SQL functions: TO_CHAR (SYSDATE, 'YYYY-MM-DD', 'NLS_DATE_LANGUAGE = AMERICAN')

NLS Parameters –

NLS_LENGTH_SEMANTICS – specified at a session level or as init parameter. Defines the character


length semantics as byte or character. The default is byte.

NLS_LANG – set only as an environment variable. NLS_LANG has 3 parts: the language, the territory, and the
character set, for example AMERICAN_AMERICA.WE8ISO8859P1.

NLS_LANGUAGE - specified at a session level or as init parameter. Sets the language to be used. This
session param overrides the NLS_LANG.

NLS_TERRITORY - specified at a session level or as init parameter. This session param overrides the
NLS_LANG.

NLS_DATE_FORMAT – specified at a session level, as an environment var or as init parameter.

NLS_DATE_LANGUAGE - specified at a session level, as an environment variable, or as an init parameter. Sets the
language explicitly for day and month names.

NLS_TIMESTAMP_FORMAT – similar to above.

NLS_TIMESTAMP_TZ_FORMAT - specified at a session level, as an environment var or as init


parameter. Default timestamp with time zone format.

NLS_CALENDAR - specified at a session level, as an environment var or as init parameter. Sets


calendar.

NLS_NUMERIC_CHARACTERS - specified at a session level, as an environment var or as init


parameter. Specifies the decimal character and group separator (comma and period are default).

NLS_CURRENCY - specified at a session level, as an environment var or as init parameter.

NLS_ISO_CURRENCY - specified at a session level, as an environment var or as init parameter.

NLS_DUAL_CURRENCY - at a session level, as an environment var or as init parameter. Alternative


currency. Introduced to support Euro.

NLS_SORT - at a session level, as an environment var or as init parameter. Specifies language to use for
sorting. ALTER SESSION SET NLS_SORT = GERMAN;
SELECT * FROM CUSTOMERS ORDER BY NAME;

Obtaining NLS data dictionary information

SELECT * FROM NLS_DATABASE_PARAMETERS;

ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MM-YYYY HH24:MI:SS';


ALTER SESSION SET NLS_DATE_LANGUAGE = ‘GERMAN’;

SELECT TO_CHAR(SYSDATE, ‘Day, Month’), SYSDATE FROM DUAL;

ALTER SESSION SET NLS_CALENDAR = ‘Persian’;

SELECT * FROM NLS_SESSION_PARAMETERS;

Views: NLS_DATABASE_PARAMETERS, NLS_INSTANCE_PARAMETERS,


NLS_SESSION_PARAMETERS, V$NLS_VALID_VALUES.
