DBA Fundamentals I
1. Using the ORAPWD utility, create a password file with the SYS password. Orapwd file=<fname>
password=<password> entries=<users>
2. Set the REMOTE_LOGIN_PASSWORDFILE parameter to either EXCLUSIVE (the password file
is used for only one database, you can add users other than SYS or INTERNAL to the password
file) or SHARED (shared among multiple databases, but you can not add users other than SYS or
INTERNAL to the password file)
3. Grant the appropriate SYSDBA and SYSOPER privileges.
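For example (the user name is illustrative):
GRANT SYSDBA TO SCOTT;
GRANT SYSOPER TO SCOTT;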
You can start an instance with a text-based PFILE or a binary SPFILE (new in 9i). When the instance is started in NOMOUNT mode, you can only query views that read data from the SGA (V$SGA, V$OPTION, V$PROCESS, V$SESSION, V$VERSION, V$INSTANCE). When the database is mounted, information can also be read from the control file (V$CONTROLFILE, V$THREAD, V$DATABASE, V$DATAFILE, V$DATAFILE_HEADER, V$LOGFILE).
PFILE
Example:
SQL> STARTUP PFILE=/oracle/admin/ORADB01/pfile/initoradb01.ora RESTRICT;
SPFILE
To create an SPFILE, a PFILE must exist:
SQL> CREATE SPFILE FROM PFILE;
To query sessions:
SQL> SELECT username, program
FROM v$session;
BACKGROUND_DUMP_DEST - the location to write the debugging trace files generated by the
background processes and alert log files.
USER_DUMP_DEST - the location to write trace files generated by user sessions (tuning, deadlock, internal errors, etc.).
CORE_DUMP_DEST - used on UNIX to write core dump files when a session terminates abnormally. Not available on Windows.
All databases have an alert log file, written to the BACKGROUND_DUMP_DEST. It stores information about block corruption errors, internal errors, non-default initialization parameters, startup, shutdown, archiving, recovery, tablespace modifications, rollback segment modifications, etc.
For UNIX it is called alert_<SID>.log.
Before 9i, dropping a tablespace did not drop the actual OS files. With OMF you can specify 2 initialization parameters: DB_CREATE_FILE_DEST (default location for new data files; the actual OS files are created with the prefix ora_ and a suffix of .dbf) and DB_CREATE_ONLINE_LOG_DEST_n (specifies up to 5 locations for online redo log files and control files; they have a suffix of .log and .ctl). Do a periodic audit against V$CONTROLFILE, V$LOGFILE, V$DATAFILE and the OS files of OMF origin to delete unused files.
Chapter 4
Creating a Database
Prerequisites
1. Preparing OS Resources
On UNIX -
SHMMAX (maximum size of a shared memory segment)
SHMNI (maximum number of shared memory identifiers in the system)
SHMSEG (maximum number of shared memory segments to which a user process can attach)
SEMMNI (maximum number of semaphore identifiers in the system)
SHMMAX * SHMSEG (the total maximum shared memory that can be allocated)
If you are creating a database on a server that is already running other databases, back them up first.
Parameters
CONTROL_FILES - specifies the control file location(s) with the full pathname. Specify at least 2 control files on different disks. You can specify up to 8 control file names.
DB_BLOCK_SIZE - database block size in multiples of OS blocks (can not be changed after db is
created). The default is 4K on most platforms, but can be 2K-32K, depending on OS.
DB_NAME - database name which can only be changed by re-creating the control file. Maximum of 8
characters, you can only use alphanumeric characters, _ , # and $. No other characters are valid. The first
character is alphabetic.
The required parameters are DB_CACHE_SIZE, SHARED_POOL_SIZE and LOG_BUFFER which are
added to calculate the SGA, which must fit into real, not virtual memory.
ORACLE_BASE - the directory on the top of the tree, for example /u01/apps/oracle.
ORACLE_HOME - the location of the Oracle software, relative to Oracle base. The
OFA compliant location is in the $ORACLE_BASE/product/<release>.
ORACLE_SID - the unique instance name for the database, regardless of number of databases on the
server.
PATH - the standard Unix path variable, which should already exist in the Unix environment. You must add the directory for the Oracle binary executables, $ORACLE_HOME/bin, to this path variable.
LD_LIBRARY_PATH - the directory where other program libraries, both Oracle and non-Oracle, reside.
The database must have at least 2 redo log groups. It is recommended that they are the same size.
MAXLOGFILES specifies maximum number of redo log groups that can ever be created in the database.
MAXLOGMEMBERS specifies maximum number of redo log members (copies of redo log files) for each
redo log group. The MAXLOGHISTORY is used with RAC (max number of archived redo log files for
automatic media recovery). MAXDATAFILES specifies maximum number of data files created in the
database. MAXINSTANCES specifies the maximum number of instances that can simultaneously mount
and open the database. If you want to change these parameters you must re-create the control file. The
DB_FILES init parameter specifies the maximum number of data files accessible to the instance. The
MAXDATAFILES clause in the CREATE DATABASE specifies the maximum number of data files allowed
for the database. The DB_FILES parameter can not specify a value larger than MAXDATAFILES.
In contrast to creating a database with a full CREATE DATABASE statement, using Oracle Managed Files is easier. The init parameters DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n are defined with the desired OS locations for the data files and online redo log files:
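A minimal init.ora sketch (the directories are illustrative):
DB_CREATE_FILE_DEST = '/u02/oradata/MYDB'
DB_CREATE_ONLINE_LOG_DEST_1 = '/u03/oradata/MYDB'
DB_CREATE_ONLINE_LOG_DEST_2 = '/u04/oradata/MYDB'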
Creating SPFILE
After configuring the init.ora correctly, create the SPFILE while connected as SYSDBA:
By default the SPFILE and PFILE reside in the same location. At startup, the server looks for a file named spfile<SID>.ora first; if it cannot find one, it uses the pfile init<SID>.ora.
When the data dictionary is created, Oracle creates only 2 users SYS (owner of the data dictionary) and
SYSTEM (DBA account).
The data dictionary tables reside in the SYS schema in the SYSTEM tablespace when you run the CREATE DATABASE command. Oracle automatically creates the tablespace and tables using the sql.bsq script found in $ORACLE_HOME/rdbms/admin.
This script creates SYSTEM tablespace, rollback segment called SYSTEM for the SYSTEM tablespace,
the SYS and SYSTEM user accounts, the dictionary base tables and clusters, indexes on the dictionary
tables and sequences, the roles PUBLIC, CONNECT, RESOURCE, DBA, DELETE_CATALOG_ROLE,
EXECUTE_CATALOG_ROLE, SELECT_CATALOG_ROLE and the DUAL table.
Running the catalog.sql script creates the data dictionary views. This script creates synonyms on the
views to allow users easy access to the views.
The catproc.sql script creates the dictionary items necessary for PL/SQL functionality.
PL/SQL stored programs are stored in the data dictionary. The code used to create a procedure, package, or function is available in the dictionary views DBA_SOURCE, ALL_SOURCE and USER_SOURCE (except when you create them with the WRAP utility, which creates encrypted code that only Oracle can interpret).
You manage the privileges using regular GRANT or REVOKE statements. You GRANT or REVOKE
execute privileges if needed.
The DBA_OBJECTS, ALL_OBJECTS and USER_OBJECTS views give information about the status of a stored program. If a procedure is invalid, you can recompile it by using ALTER PROCEDURE <PROCEDURE_NAME> COMPILE;
To recompile a package, compile the package specification and then the package body:
ALTER PACKAGE <PACKAGE_NAME> COMPILE;
ALTER PACKAGE <PACKAGE_NAME> COMPILE BODY;
To do that for another schema you have to have ALTER ANY PROCEDURE privileges;
After creating the database and dictionary views, you should create additional tablespaces. Oracle recommends creating the following tablespaces if they were not created with CREATE DATABASE or DBCA.
UNDOTBS - holds the undo segments and automatic undo management. When you create a database,
Oracle creates a SYSTEM undo segment in the SYSTEM tablespace. For a database that has multiple
tablespaces you must create at least one undo segment that is not in the SYSTEM tablespace for manual
undo management or one undo tablespace for automatic undo management.
TEMP - holds the temporary segments for sorting and intermediate operations. Oracle uses these
segments when the information to be sorted will not fit into the SORT_AREA_SIZE.
After the database is created back it up and change passwords for SYS and SYSTEM.
DBA_ - views with information about all structures in the database, across all schemas. Accessible to a DBA or to an owner of the SELECT_CATALOG_ROLE privilege.
ALL_ - views with information on all objects the user has access to.
USER_ - structures owned by the user, in the user's schema. They are accessible to all users and do not have an OWNER column.
V$ - dynamic performance views. Continuously updated while the database is open and in use.
GV$ - for almost all V$ views, Oracle has a corresponding GV$ view. These are the global performance
views pertinent to RAC. The corresponding GV$ view has an additional column identifying the instance
number called INST_ID.
You can use the data dictionary information to generate the source code for all the objects created in the
database.
The dictionary view DICTIONARY (DICT) contains the names and descriptions of all the data dictionary views in the database. DICT_COLUMNS describes the columns in DICT.
If you want to query the dictionary views about tables, for example:
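A minimal sketch, assuming you want the table-related views:
SELECT table_name, comments
FROM dictionary
WHERE table_name LIKE '%TABLES%';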
The dictionary views ALL_OBJECTS, DBA_OBJECTS, and USER_OBJECTS provide information about the objects in the database. These views contain the timestamp of object creation and the last DDL timestamp. The STATUS column shows whether or not the object is valid (for PL/SQL objects).
Database creation steps:
1. Creates a parameter file, starts up the database in NOMOUNT mode, and then creates the database using the CREATE DATABASE command.
2. Runs catalog.sql.
3. Creates tablespaces for tools TOOLS, undo UNDO, temp TEMP, and index INDX.
4. Runs the following scripts: catproc.sql (sets up PL/SQL), caths.sql (installs the heterogeneous
services (HS) data dictionary, providing the ability to access non-Oracle databases from the
Oracle database), otrcsvr.sql (Oracle trace server SP), utlsampl.sql (sets up sample user SCOTT
and creates demo tables), pubbld.sql (creates product and user profile tables, script runs as
SYSTEM).
5. Runs the scripts necessary to install other options chosen.
Chapter 5
The control file is updated continuously and should be available at all times. Only Oracle processes should update control files. The control file is used at startup to identify the data files and redo log files and open them. Control files play a major role in database recovery. They contain:
Database name (a control file can belong to only one database)
Database creation timestamp
Data files - location, name and online/offline status information
Redo log files - name and location
Redo log archive information
Tablespace information
Current log sequence number, assigned when a log switch occurs
Most recent checkpoint information
Begin and end of undo segments
RMAN backup information
The control file size is determined by the MAX clauses of the create database statement -
MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, MAXDATAFILES, MAXINSTANCES. The
control file has 2 record sections: reusable (backup data files, etc) and not reusable.
Multiplexing Control Files Using init.ora - copying the control files to multiple locations and changing the
CONTROL_FILES parameter in the init.ora.
CONTROL_FILES = (‘/ora01/oradata/MYDB/ctrlMYDB01.ctl’,
‘/ora02/oradata/MYDB/ctrlMYDB02.ctl’,
‘/ora03/oradata/MYDB/ctrlMYDB03.ctl’)
1. Shutdown database.
2. Copy the control file to more remote locations by using an OS command
3. Change the init.ora parameter to add more locations (above)
4. Start up the database
If you use an SPFILE, change the parameter with SCOPE=SPFILE; it takes effect only after the next instance restart.
$ cp /ora01/oradata/MYDB/ctrlMYDB01.ctl /ora02/oradata/MYDB/ctrlMYDB02.ctl
To use OMF-created control files, do not specify the CONTROL_FILES parameter in init.ora; instead make sure that DB_CREATE_ONLINE_LOG_DEST_n is specified n times, starting with 1. Here n is the number of control files you want created. The actual names of the control files are system generated and can be found in the alert log (written to the BACKGROUND_DUMP_DEST).
You must re-create the control file if you change the database name, if you lose all copies of the control file, or if you want to change any of the MAX clauses specified in CREATE DATABASE.
You can also generate a control file creation script from the current database by using the command ALTER DATABASE BACKUP CONTROLFILE TO TRACE. The script is written to the USER_DUMP_DEST.
After creating the control file, determine whether any of the data files listed in the dictionary are missing. If you query the V$DATAFILE view, the missing files will have the name MISSINGnnnn. If you created the control file by using the RESETLOGS option, the missing data files cannot be added back to the database. If you created the control file with the NORESETLOGS option, the missing data files can be included in the database by using media recovery.
You can backup the control file while the database is in use:
ALTER DATABASE BACKUP CONTROLFILE TO <FILENAME> REUSE;
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
Querying V$CONTROLFILE returns the STATUS and NAME of each control file, for example /ora01/oradata/MYDB/ctrlmydb01.ctl.
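A query sketch:
SELECT status, name FROM v$controlfile;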
The redo log files record all changes to the database. The redo log buffer in the SGA is periodically
written to the redo log file by the LGWR process. Every database has at least 2 redo log files. The LGWR
writes them in a circular fashion.
Every redo entry is made up of a group of change vectors (each is a description of a change made to a single block in the database). During recovery, Oracle reads the change vectors from the redo logs and applies them to the relevant blocks.
LGWR writes redo information from the redo log buffer to the online redo log files when:
1. A user commits a transaction, even if this is the only transaction in the log buffer.
2. The redo log buffer is 1/3 full.
3. Approximately 1MB of changed records have accumulated in the buffer.
LGWR always writes its records to the online redo log file before DBWn writes new or modified database
buffer cache records to the datafiles.
Each database has its own online redo log groups. A log group can have one or more redo log members (each member is a single OS file). In a RAC environment, each instance has one online redo thread; the LGWR process of each instance writes to the redo log files of its own thread, and Oracle keeps track of which instance the changes came from. For a single-instance database there is only one thread. Whenever a transaction is committed, an SCN is assigned to the redo records to identify the committed transaction.
CREATE DATABASE "MYDB01"
LOGFILE GROUP 1 '/ora02/oradata/MYDB01/redo01.log' SIZE 10M,
GROUP 2 '/ora02/oradata/MYDB01/redo02.log' SIZE 10M;
LGWR writes to only one redo log group at a time (the current log group). An active log file is one still required for instance recovery. The log files are written in a circular fashion. A log switch happens when Oracle finishes writing to one file and starts writing to the next. A log switch always occurs when the current log file is full.
You can force log switch:
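For example:
ALTER SYSTEM SWITCH LOGFILE;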
Redo logs are written sequentially to disk, so the I/O will be fast if there is no other activity on the disk. Keep the redo log files on a separate disk for better performance.
Checkpoints
Checkpoints are closely tied to redo log switches. A checkpoint is an event that flushes the modified data from the buffer cache to disk and updates the control file and data files. A checkpoint is initiated when:
1. The redo log file is full and a log switch occurs.
2. The instance is shut down with any option other than ABORT.
3. A tablespace is changed to read-only or put into backup mode.
4. A tablespace or data file is taken offline.
5. Thresholds set by other init parameters are reached.
When multiplexing online redo log files, LGWR concurrently writes the same information to multiple online redo log files. All copies of the redo log file, which are the same size, are known as a group, identified by an integer. Each redo log file is a member of its group. You must have at least 2 redo log groups for normal database operation.
When multiplexing redo log files it is preferable to keep the members on different disks. If LGWR can not
write to at least 1 member of the group, database works as usual; an entry is written to the alert.log. If all
members of the group are not available, Oracle shuts down the instance.
The maximum number of log file groups is specified in the MAXLOGFILES clause, and the maximum number of members per group is specified in MAXLOGMEMBERS.
If you omit the GROUP clause Oracle assigns the next available number.
If you forgot to multiplex the redo log files when creating the database, you can add new members later. All members of a group should be the same size. The statement below adds a new member to group 2.
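A sketch (the file name is illustrative):
ALTER DATABASE ADD LOGFILE MEMBER '/ora03/oradata/MYDB/redo02b.log' TO GROUP 2;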
Before renaming the log files, the new files should already exist. Oracle just points toward the new file, it
does not rename the OS file.
1. Shutdown and backup the entire database
2. Copy/rename the online redo log file member to the new location by using an OS command
3. Startup the instance and mount the database
4. rename the log file member in the control file: ALTER DATABASE RENAME FILE
‘<OLD_REDO_FILE_NAME>’ TO ‘<NEW_REDO_FILE_NAME>’;
5. ALTER DATABASE OPEN
6. Backup the control file
The group to be dropped must not be active (use ALTER SYSTEM SWITCH LOGFILE first if needed). Just like redo log groups, you can only drop members of inactive redo log groups. Also, you cannot drop a member if it is the last remaining member of its group.
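Hedged examples (the group number and file name are illustrative):
ALTER DATABASE DROP LOGFILE GROUP 3;
ALTER DATABASE DROP LOGFILE MEMBER '/ora03/oradata/MYDB/redo02b.log';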
Archiving copies filled redo log files to another location and is done by the ARCn processes. You can recover the database from archived logs, update a standby database, or use them with LogMiner. LGWR waits for ARCn to finish copying a redo log file before overwriting it.
LOG_ARCHIVE_DEST specifies the destination to write archive log files. You can change the destination:
ALTER SYSTEM SET LOG_ARCHIVE_DEST = '<new_location>';
LOG_ARCHIVE_DUPLEX_DEST - a second destination to write the archive log files. You specify the minimum number of destinations that must succeed in LOG_ARCHIVE_MIN_SUCCEED_DEST. You can change the location: ALTER SYSTEM SET LOG_ARCHIVE_DUPLEX_DEST = '<new_location>';
LOG_ARCHIVE_DEST_n - you can specify as many as 5 destinations. These archive locations can be either on the same machine or on a remote machine where a standby database is located. When these parameters are used, you cannot use the LOG_ARCHIVE_DEST or LOG_ARCHIVE_DUPLEX_DEST parameters.
The syntax:
LOG_ARCHIVE_DEST_n = "null_string" |
((SERVICE = <TNSNAMES_NAME> |
LOCATION = '<DIRECTORY_NAME>')
[MANDATORY | OPTIONAL]
[REOPEN [=integer]])
Example:
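A possible setting (the directory and TNS service name are assumptions):
LOG_ARCHIVE_DEST_1 = 'LOCATION=/archive/MYDB MANDATORY REOPEN=60'
LOG_ARCHIVE_DEST_2 = 'SERVICE=STDBY_MYDB OPTIONAL'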
The format values for LOG_ARCHIVE_FORMAT:
%t - thread number
%s - log sequence number
%S - log sequence number, zero padded
For example, LOG_ARCHIVE_FORMAT = 'arch_%t_%s' generates archive log file names such as arch_1_101, arch_1_102, and so on, where 1 is the thread number and 101 and 102 are the log sequence numbers. Specifying arch_%S generates arch_000000101, arch_000000102, and so on.
To enable ARCHIVELOG mode:
1. Shut down the database.
2. STARTUP MOUNT
3. ALTER DATABASE ARCHIVELOG;
4. ALTER DATABASE OPEN;
To disable:
1. Shut down the database.
2. STARTUP MOUNT
3. ALTER DATABASE NOARCHIVELOG;
4. ALTER DATABASE OPEN;
Queries against V$LOG and V$LOGFILE show each redo log group with its sequence number, size (for example 104857600 bytes), status (CURRENT, UNUSED, and so on) and member file names such as /opt/ora9/oradata/orcl/redo01.log and redo02.log. V$ARCHIVE_DEST shows each archive destination (for example /opt/ora9/product/9.2/dbs/arch) with its BINDING (MANDATORY/OPTIONAL), TARGET (PRIMARY/STANDBY) and REOPEN_SECS settings.
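Query sketches against these views:
SELECT group#, sequence#, bytes, members, status FROM v$log;
SELECT group#, status, member FROM v$logfile;
SELECT destination, binding, target, reopen_secs FROM v$archive_dest;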
The database data is stored logically on tablespaces and physically in data files corresponding to
tablespaces. By default all objects are created in SYSTEM tablespace.
By separating data into tablespaces other than SYSTEM you can:
Control I/O by allocating separate physical storage disks for different tablespaces.
Have separate tablespaces for TEMP segments and undo management. You can also separate objects by activity (heavy updates, indexes).
Group application-related and module-related data together, so that when maintenance is required the rest of the database can stay up.
Managing Tablespaces
When Oracle allocates space to objects, it is allocated in chunks of contiguous database blocks known as extents. Each object is allocated a segment, which has one or more extents. If the object is partitioned, each partition has its own segment.
If you store the extent management information in the data dictionary, the tablespace is called a dictionary-managed tablespace; such tablespaces generate undo information. If you store the management information in the tablespace itself, using bitmaps in each data file, it is a locally managed tablespace; these do not generate rollback information. Each bit in the bitmap corresponds to a block. When an extent is allocated or freed for reuse, Oracle changes the bitmap values to show the new status of the blocks. These operations do not update tables in the data dictionary and do not generate rollback.
Creating Tablespace
Tablespace name can not exceed 30 characters. The name should begin with an alphabetic character
and can contain alphanumeric characters and $, _ and #.
Another example (the tablespace name and data file are illustrative; EXTENT MANAGEMENT DICTIONARY is made explicit because DEFAULT STORAGE applies to dictionary-managed tablespaces):
CREATE TABLESPACE APPL_DATA
DATAFILE '/disk2/oradata/DB01/appl_data01.dbf' SIZE 500M
DEFAULT STORAGE (
INITIAL 256K
NEXT 256K
MINEXTENTS 2
PCTINCREASE 0
MAXEXTENTS 4096)
BLOCKSIZE 4K
LOGGING
ONLINE
PERMANENT
EXTENT MANAGEMENT DICTIONARY;
DEFAULT STORAGE - default storage parameters for new objects created in the tablespace. You can override these by specifying explicit storage parameters when creating the objects.
BLOCKSIZE - block size for the objects in the tablespace. By default it is the database block size. It can be 2, 4, 8, 16, or 32K. For large DSS and OLTP databases, multiple block sizes can be beneficial.
INITIAL - specifies the size of the object's (segment's) first extent. NEXT specifies the size of the
segment's next and successive extents in bytes. The default for INITIAL and NEXT is 5 database blocks.
The minimum is 3 blocks.
PCTINCREASE - specifies how much the third and subsequent extents grow over the preceding extent. The default is 50%, meaning that each new extent is 50% larger than the preceding extent. The minimum is 0.
For example, (INITIAL 1M NEXT 2M PCTINCREASE 0) the extent sizes are 1M, 2M, 2M, 2M, etc. If you
specify PCTINCREASE 50, the extent sizes are 1M, 2M, 3M, 4.5M, 6.75M, etc. The actual NEXT is
rounded to a multiple of the block size.
MINEXTENTS - total number of extents allocated to the segment at the time of creation. The default is 1.
MINIMUM EXTENT - specifies that the extent sizes are a multiple of the size specified. You can use this
value to control fragmentation.
LOGGING - specifies that DDL operations and direct-load INSERT statements are recorded in the redo log files. LOGGING is the default and you can omit the clause. When you specify NOLOGGING, the data is modified with minimal logging. Individual object-level settings override the tablespace setting.
PERMANENT/TEMPORARY - specify if you want to store permanent tables or TEMPORARY for sorts.
The default is PERMANENT.
EXTENT MANAGEMENT - before 9i, dictionary-managed tablespaces were the default. In 9i you have to explicitly specify EXTENT MANAGEMENT DICTIONARY to get dictionary management; it is LOCAL by default or when omitted.
SEGMENT SPACE MANAGEMENT - applicable only to locally managed tablespaces (either MANUAL or AUTO). If you specify AUTO, Oracle manages free space in the segments using bitmaps rather than free lists. With AUTO, Oracle ignores the storage parameters PCTUSED, FREELISTS and FREELIST GROUPS when creating objects.
You cannot alter DB_BLOCK_SIZE after the database is created. The DB_CACHE_SIZE parameter defines the buffer cache size associated with the standard block size. To create tablespaces with a non-standard block size, you must set the appropriate buffer cache size for that block size. The parameter is DB_nK_CACHE_SIZE, where n is the non-standard block size. You can set it for 2, 4, 8, 16 or 32K, but not for the standard block size. For example, to set up a tablespace that uses a block size of 16K, you must set the DB_16K_CACHE_SIZE parameter. By default DB_nK_CACHE_SIZE is 0. Temporary tablespaces must use the standard block size.
Using CREATE TABLESPACE with EXTENT MANAGEMENT LOCAL manages space more efficiently and with less fragmentation, and is more reliable. You cannot specify DEFAULT STORAGE, TEMPORARY or MINIMUM EXTENT for a locally managed tablespace. You can specify that Oracle manage extent sizes automatically by using the AUTOALLOCATE option.
MANUAL free space management is the only option available in pre-9i databases. In Oracle 9i you can manage the free space in blocks using bitmaps if you specify SEGMENT SPACE MANAGEMENT AUTO in CREATE TABLESPACE. If it is AUTO, Oracle ignores FREELISTS, FREELIST GROUPS and PCTUSED.
Undo Tablespace
Oracle can manage undo tablespaces automatically. For auto undo management you must have one
undo tablespace.
Temporary Tablespace
Oracle can manage space for sort operations more efficiently by using temporary tablespaces. More than one transaction can use the same sort segment, but each extent can be used by only one transaction.
Dictionary-managed temp tablespace (the name and data file are illustrative):
CREATE TABLESPACE TEMP DATAFILE '/disk4/oradata/DB01/temp01.dbf' SIZE 300M
EXTENT MANAGEMENT DICTIONARY DEFAULT STORAGE (INITIAL 136K NEXT 136K PCTINCREASE 0)
TEMPORARY;
For a TEMP tablespace the extent size should be a multiple of SORT_AREA_SIZE plus DB_BLOCK_SIZE to reduce fragmentation. Keep PCTINCREASE = 0. For example, if your sort area size is 64K and the database block size is 8K, provide the default storage of the TEMP tablespace as (INITIAL 136K NEXT 136K PCTINCREASE 0 MAXEXTENTS UNLIMITED).
Tempfiles are always in NOLOGGING mode and are not recoverable; they cannot be renamed, taken offline, or made read-only.
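The locally managed equivalent uses a tempfile (names are illustrative):
CREATE TEMPORARY TABLESPACE TEMP
TEMPFILE '/disk4/oradata/DB01/temp01.dbf' SIZE 300M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 2M;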
Altering a Tablespace
You can take a tablespace offline with one of the following options:
NORMAL (Oracle writes all the dirty buffer blocks in the SGA to the data files of the tablespace and closes the data files)
TEMPORARY (Oracle performs a checkpoint on all online data files, but does not ensure that the data files are available)
IMMEDIATE (Oracle does not perform a checkpoint and does not make sure that all data files are available)
FOR RECOVER (places the tablespace offline for point-in-time recovery; you can copy data files belonging to the tablespace from a backup and apply the archived redo log files; deprecated in 9i)
You cannot take the SYSTEM tablespace offline because it holds the data dictionary.
If you are modifying a locally managed tablespace to add more data files:
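A sketch (the tablespace and file names are illustrative):
ALTER TABLESPACE APPL_DATA
ADD DATAFILE '/disk3/oradata/DB01/appl_data02.dbf' SIZE 200M;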
Dropping a Tablespace
For example (the tablespace name is illustrative):
DROP TABLESPACE APPL_DATA INCLUDING CONTENTS CASCADE CONSTRAINTS;
The actual data files are removed only if they are Oracle Managed Files. Otherwise, you have to remove the data files yourself at the OS level.
Summing DBA_FREE_SPACE by tablespace shows the free space in each tablespace, for example SYSTEM, UNDOTBS1, USERS, INDX, TOOLS, CWMLITE, DRSYS, EXAMPLE, ODM and XDB.
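A query sketch:
SELECT tablespace_name, SUM(bytes) free_space
FROM dba_free_space
GROUP BY tablespace_name;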
If you omit the full path, oracle creates the file in the default database directory or current directory,
depending on OS.
For example (the tablespace name is illustrative):
ALTER TABLESPACE APPL_DATA ADD
DATAFILE '/disk2/oradata/DB01/appl_data001.dbf'
SIZE 500M;
If the file already exists in the database and you want to enable the auto-extension feature:
ALTER DATABASE
DATAFILE '/disk2/oradata/DB01/appl_data001.dbf'
AUTOEXTEND ON;
ALTER DATABASE
DATAFILE ‘/disk2/oradata/DB01/appl_data001.dbf’
RESIZE 1500M;
If the file size specified is below the actual data size already existing in the datafile Oracle returns an
error.
OMF is appropriate for smaller, non-production databases or databases that run on disks that use a logical volume manager (LVM). An LVM is software that combines partitions on multiple physical disks into one logical drive. Benefits:
Prevention of errors - Oracle itself removes the files associated with a dropped tablespace, so you cannot accidentally remove a file that is still in use.
Easy script writing - application vendors need not worry about the syntax of specifying directory names in their scripts.
Creating Oracle Managed Files
Before you can create OMF files you need to set DB_CREATE_FILE_DEST (the directory where Oracle creates the files) in init.ora or with an ALTER SYSTEM or ALTER SESSION statement. The directory must be local to the database server. Oracle will not create the directory; it only creates the data file.
You can create data files using CREATE DATABASE, CREATE TABLESPACE, and ALTER DATABASE. You do not need to specify data file names for the SYSTEM or UNDO tablespaces, and you can omit the DATAFILE clause in the CREATE TABLESPACE statement.
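A sketch (the tablespace name is illustrative; the OMF file name is generated by Oracle under DB_CREATE_FILE_DEST):
ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u02/oradata/MYDB';
CREATE TABLESPACE APPL_DATA;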
The data files you create using OMF have a standard format, ora_%t_%u.dbf, where %t is the tablespace name and %u is a unique 8-character string that Oracle derives. If the tablespace name is longer than 8 characters, only the first 8 are used. The file names are written to the alert log file.
You can also use OMF to create the control files and online redo log files for the database. Since those two file types can be multiplexed, Oracle provides another parameter to specify their locations - DB_CREATE_ONLINE_LOG_DEST_n, in which n can be 1 through 5. You can also alter these parameters with ALTER SYSTEM or ALTER SESSION.
The redo log file names have the format ora_%g_%u.log, in which %g is the log group number and %u is an 8-character string unique to the database. The control file names have the format ora_%u.ctl, in which %u is an 8-character string.
DB_CREATE_ONLINE_LOG_DEST_1 = ‘/ora1/oradata/MYDB’
DB_CREATE_ONLINE_LOG_DEST_2 = ‘/ora2/oradata/MYDB’
DB_CREATE_FILE_DEST = ‘/ora1/oradata/MYDB’
The CONTROL_FILES parameter is not set. Create the database using the following:
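A minimal sketch of such a statement (the database name and the temporary tablespace clause are assumptions; UNDO_MANAGEMENT=AUTO is assumed in init.ora):
CREATE DATABASE MYDB
DEFAULT TEMPORARY TABLESPACE TEMP;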
One member of the first online redo log group in ‘/ora1/oradata/MYDB’ and a second in
‘/ora2/oradata/MYDB’
One member of the second online redo log group in ‘/ora1/oradata/MYDB’ and a second in
‘/ora2/oradata/MYDB’.
Because we specified the UNDO_MANAGEMENT clause and did not specify a name for the undo
tablespace, Oracle creates SYS_UNDOTBS as undo tablespace and creates its data file under
/ora1/oradata/MYDB. If you omit the DEFAULT TEMPORARY TABLESPACE clause, Oracle does not
create one at all. The data files and temp files Oracle creates will have a default size of 100MB, which is auto-extensible with no maximum size. Each redo log member will be 100MB in size by default.
Another example: if you want a different size for the OMF files, you can specify the DATAFILE clause without a file name (the tablespace name below is illustrative):
CREATE TABLESPACE APPL_DATA
DATAFILE SIZE 1M
AUTOEXTEND OFF;
ALTER SYSTEM SET DB_CREATE_FILE_DEST = ‘/ora5/oradata/MYDB’
2. Copy or move the files to the new location, or rename them with an OS command, and then rename them in the database (the tablespace name below is illustrative):
ALTER TABLESPACE USER_DATA RENAME DATAFILE
'/disk1/oradata/DB01/userdata2.dbf' TO
'/disk1/oradata/DB01/user_data2.dbf';
Or
ALTER DATABASE RENAME FILE
'/disk1/oradata/DB01/userdata2.dbf' TO
'/disk1/oradata/DB01/user_data2.dbf';
DBA_DATA_FILES lists each data file with its tablespace name, file name, size in bytes and autoextensible flag (for example SYSTEM - /opt/ora9/oradata/orcl/system01.dbf, 429916160 bytes, autoextensible; UNDOTBS1 - /opt/ora9/oradata/orcl/undotbs01.dbf). DBA_TEMP_FILES shows the same information for temp files (for example TEMP - /opt/ora9/oradata/orcl/temp01.dbf).
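Query sketches:
SELECT tablespace_name, file_name, bytes, autoextensible FROM dba_data_files;
SELECT tablespace_name, file_name, bytes, autoextensible FROM dba_temp_files;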
Segments are logical storage units that fit between a tablespace and an extent.
A data block is the smallest logical unit of storage. You define block size with DB_BLOCK_SIZE. Data
block consists of the following:
Common and variable header - information about the type of block and its address. The type can be UNDO, DATA or INDEX. The common block header takes 24 bytes, and the variable (transaction) header occupies 24 x INITRANS bytes. By default the value of INITRANS for tables is 1 and for indexes it is 2.
Table directory - info about tables that have rows in this block. The table directory takes 4 bytes.
Free space - the space that is available for new rows or for extending existing rows through updates. Deletions and updates may cause fragmentation in the block; Oracle coalesces the free space when needed.
PCTFREE and PCTUSED - these control the free space available for inserts and updates of the rows in the block.
INITRANS and MAXTRANS - control the number of concurrent transactions that can modify or create
data in the block. You can specify the parameters when you create the object.
FREELIST - each segment has one or more free lists that list the available blocks for future inserts. The
default is 1 freelist for every segment.
PCTFREE (default 10) - specifies what percentage of the block should be kept free for future updates. If the table is expected to undergo many updates that increase row size, set a higher PCTFREE.
PCTUSED (default 40) - specifies when the block is considered again for adding new rows. After a block becomes full (as determined by PCTFREE), Oracle considers it for new rows only when the used space falls below PCTUSED; the block is then added to the free list. If the table has many inserts and deletes and the updates do not cause the row length to increase, set PCTFREE low and PCTUSED high. A high PCTUSED helps reuse space freed by deletes faster. If the table's row length is large or the rows are never updated, set PCTFREE very low so that each data row can fit in a single block and you fill all blocks.
You can specify PCTFREE when you create a table, an index, or a cluster. You can specify PCTUSED when creating tables and clusters, but not indexes.
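A sketch (the table name, tablespace and values are illustrative):
CREATE TABLE orders_hist (
order_num NUMBER,
order_date DATE)
PCTFREE 20 PCTUSED 40 INITRANS 2
TABLESPACE APPL_DATA;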
INITRANS and MAXTRANS - these transaction entry settings reserve space for transactions in the block header. Base these parameters on the maximum number of transactions that can touch a block at any given time. INITRANS reserves space in the block header for DML transactions. If you do not specify INITRANS, Oracle defaults to 1 for table data blocks and 2 for index and cluster blocks. The MAXTRANS default is OS specific, but the maximum is 255.
If the row length is large or the number of users accessing the table is low, set INITRANS to a low value. Some tables, such as application control tables, are accessed frequently and need a higher INITRANS.
If a segment does not contain LOBs and is in a locally managed tablespace, you can use automatic segment space management instead of PCTUSED, FREELISTS and FREELIST GROUPS. Bitmaps are used instead of free lists.
An illustrative statement (the tablespace name is an assumption):
CREATE TABLESPACE APPL_DATA2 DATAFILE '/disk2/oradata/DB01/appl_data02.dbf'
SIZE 200M
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
Extents
An extent is a logical storage unit that is made up of contiguous data blocks. INITIAL, NEXT, PCTINCREASE, MINEXTENTS and MAXEXTENTS are the storage parameters for extents. When the extents are managed locally, these storage parameters do not affect the size of the extents. Once an object is created, its INITIAL and MINEXTENTS values cannot be changed. Changes to NEXT and PCTINCREASE take effect when the next extent is allocated (the existing extents are not changed).
Allocating extents
Oracle allocates extents when an object is first created or when all the blocks in the segment are full. The search for free space works as follows:
1. Oracle searches the free space in the tablespace for a contiguous set of blocks that exactly matches the requested size. If the extent requested is larger than 5 data blocks, Oracle adds one more block to the request to reduce internal fragmentation.
2. If an exact match fails, Oracle searches the contiguous free blocks again for a free extent larger than the required size.
3. If step 2 fails, Oracle coalesces the free space and repeats step 2.
4. If step 3 fails, Oracle checks whether the data files are marked autoextensible. If so, Oracle tries to extend the file and repeats step 2. If Oracle cannot extend the file, it raises an error.
Extents are deallocated when you drop an object. To free up the extents:
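Sketches (the table name is illustrative):
TRUNCATE TABLE orders_hist DROP STORAGE;
ALTER TABLE orders_hist DEALLOCATE UNUSED;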
The TRUNCATE ... REUSE STORAGE option does not deallocate extents; it just removes the rows.
Querying Extents
SQL> SELECT TABLESPACE_NAME, MAX(bytes) LARGEST,
2 MIN(bytes) SMALLEST, COUNT(*) EXT_COUNT
3 FROM dba_free_space
4 GROUP BY tablespace_name;
Segments
A segment is a logical storage unit that is made up of one or more extents. A segment can belong to only one tablespace, but it may spread across multiple data files belonging to that tablespace.
Types of segments:
Table segment
Table partition segment - each partition of the table, possibly residing in a different tablespace, is a separate segment.
Cluster segment - consists of one or more tables. The data is stored in cluster key order and all tables within the cluster share the cluster key. Typically, tables stored in a cluster are frequently joined (such as EMP and DEPT).
Nested table segments - columns in a table that are themselves tables; each such column is stored in a separate segment.
IOT - an index-organized table is a table and an index combined in a single segment, stored in index order. Queries against an IOT can be very fast because Oracle needs to read only a single segment to find the results.
Temporary segment - holds intermediate results from sort operations that did not fit into memory.
LOB segment - LOB data for a table that is larger than about 4KB is stored out of line in LOB segments.
Bootstrap segment - a special system segment that is used to initialize the data dictionary when the instance starts.
Undo segments record the old values of data changed by a transaction. Undo segments provide read consistency and the ability to undo changes. When a transaction completes (COMMIT or ROLLBACK), its undo space can be reused by another transaction. For an update or delete, the before-image data is saved in the undo segments and then the corresponding data blocks are modified. For inserts, the undo entries include only the ROWID, because to undo an insert the row simply has to be deleted. Oracle records the changes both to the original data blocks and to the redo log (important for transactions that are not yet committed or rolled back at the time of a system crash or media recovery).
When you create a database, Oracle creates the SYSTEM undo segment in the SYSTEM tablespace. Every database must have at least one non-SYSTEM undo segment. Although multiple undo tablespaces can exist in a database, only one can be active at any given time. Two init.ora parameters control the use of automatic undo management in the database: UNDO_MANAGEMENT (AUTO or MANUAL, cannot be changed dynamically) and UNDO_TABLESPACE (default is SYS_UNDOTBS; it can be changed dynamically with ALTER SYSTEM, as shown below).
After you create the database you can create additional tablespaces:
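A sketch of creating an additional undo tablespace (the file name and size are assumptions; the tablespace name matches the switch below):
CREATE UNDO TABLESPACE SYS_UNDOTBS_NIGHT
DATAFILE '/ora03/oradata/MYDB/undo_night01.dbf' SIZE 500M;
You can then make it the active undo tablespace: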
ALTER SYSTEM
SET UNDO_TABLESPACE=SYS_UNDOTBS_NIGHT;
The amount of time that undo data is retained for consistent reads is controlled with UNDO_RETENTION, specified in seconds.
Querying UNDO
V$ROLLNAME lists the undo segment number (USN) and name of each online undo segment, for example 0 SYSTEM and the automatic undo segments _SYSSMU1$ through _SYSSMU10$.
Chapter 8. Managing Indexes, Tables and Constraints
Table types:
Temporary - store data specific to a session. Store intermediary results. Use CREATE GLOBAL
TEMPORARY TABLE.
Index organized - stores data in a structured, primary-key-sorted manner. You must define a primary key for each IOT. These tables do not use separate segments for the table and the primary key index; they use the same storage for both. CREATE TABLE ... ORGANIZATION INDEX.
External tables - store data in flat files outside the database (new in 9i). These tables are read only and no indexes are allowed on them. The default access driver is SQL*Loader. CREATE TABLE ... ORGANIZATION EXTERNAL.
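A global temporary table sketch (names are illustrative):
CREATE GLOBAL TEMPORARY TABLE gtt_order_stage (
order_num NUMBER,
order_date DATE)
ON COMMIT PRESERVE ROWS;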
Create table (the table name is illustrative):
CREATE TABLE ORDERS (
ORDER_NUM NUMBER(10,3),
ORDER_DATE DATE);
CHAR - fixed-length character data type. Data is blank-padded to fit the column width. Size defaults to 1; the maximum is 2000 bytes.
VARCHAR2(n) - variable-length character data. The maximum length is given in parentheses; you must specify a size, there is no default. The maximum is 4000 bytes. Unlike CHAR, the characters are not blank-padded.
NCHAR - similar to CHAR, but used to store Unicode (national character set) data. NCHAR is fixed length; the maximum size is 2000 bytes, the default 1 character.
NVARCHAR2(n) - same as VARCHAR2 but stores Unicode variable-length data; the maximum is 4000 bytes.
LONG - stores variable length character data up to 2GB. Use CLOB and NCLOB instead. Provided for
backward compatibility. Can have only one LONG column per table.
NUMBER(p,s) - stores fixed- and floating-point numbers. You can optionally specify a precision and a scale. The default is 38 digits of precision.
DATE - stores date and time data. You can store dates from January 1, 4712 BC to December 31, 9999 AD.
TIMESTAMP - stores date and time with fractional seconds precision up to 9 digits.
TIMESTAMP WITH TIME ZONE - similar to TIMESTAMP, but also stores time zone displacement (the
difference between the local and the Universal time zone in hours and minutes).
TIMESTAMP WITH LOCAL TIME ZONE - similar to TIMESTAMP, but includes the displacement in the
database time zone, but when the user retrieves the data, it is shown in the users local session time zone.
INTERVAL DAY TO SECOND - used to represent a period of time as days, hours, minutes and seconds,
stores the difference between 2 date values.
RAW - variable length type used to store unstructured data. Provided for backward compatibility. Use
BLOB and BFILE instead.
BFILE - stores unstructured binary data in OS files outside the database. The file size can be up to 4GB.
Oracle only stores a pointer to a file.
ROWID - stores binary data representing a physical row address of a row. Occupies 10 bytes.
UROWID - stores binary data representing any type of row address; physical, logical, or foreign. Up to
4000 bytes.
Collection types are used to represent more than one element such as an array - VARRAY and NESTED
TABLES. Elements in VARRAY are ordered and have a maximum limit. Elements in a table type are not
ordered and there is no upper limit to the number of elements.
Specifying Storage
If the table is too large, create a partitioned table, placing each partition in a separate tablespace. Oracle allocates a segment to the table; this segment will have the number of extents specified by MINEXTENTS. The presence of numerous extents affects the performance of truncation and full table scans, causing additional I/O.
A table can contain values of CLOB,BLOB and NCLOB (different storage parameters from tables):
The LOB is given the name PHOTO_LOB. If the LOB column value is larger than approximately 4000 bytes, the data is stored in the LOB segment (out-of-line storage). The DISABLE/ENABLE STORAGE IN ROW clause specifies whether LOB data should be stored inline or out of line (ENABLE is the default). The CHUNK clause specifies the number of bytes of data that will be read during LOB manipulation (it has to be a multiple of the database block size). PCTVERSION specifies the percentage of all used LOB data space that can be occupied by old versions of LOB data pages. If the LOB is read or updated frequently, use the CACHE clause.
CREATE TABLE ... AS SELECT will not work on a table with a LONG column.
Partitioning tables
Partitioning breaks a large table into manageable pieces based on the values in one or more columns. Each partition is allocated its own segment, possibly in a separate tablespace.
Hash partitioning - more appropriate when you do not know how much data will fall in a range or how big the partitions will be. Hash partitioning uses a hash algorithm on the partitioning columns. The number of partitions should be specified as a power of 2 (2, 4, 8, 16, ...). Choose a column with many distinct values.
List partitioning - use this if you know all the values kept in the column and want to create a partition for each value (you can combine several values into the same partition). NULL can be a separate list value. Oracle rejects a row whose value is not included in any list.
Composite partitioning - uses range partitioning to create partitions and hash partitioning to create subpartitions. Only the subpartitions are created on disk; the range partitions are logical representations only.
CREATE TABLE CARS
(MODEL_YEAR NUMBER(4),
MODEL VARCHAR2(30),
MANUFACTR VARCHAR2(50),
QUANTITY NUMBER)
STORAGE (INITIAL 64K NEXT 64K PCTINCREASE 0 MAXEXTENTS 4096)
PARTITION BY RANGE (MODEL_YEAR)
SUBPARTITION BY HASH (MODEL) SUBPARTITIONS 4
STORE IN (TSMK1, TSMK3, TSMK4)
(PARTITION P2001 VALUES LESS THAN (2002),
PARTITION P2002 VALUES LESS THAN (2003),
PARTITION PMAX VALUES LESS THAN (MAXVALUE));
NOLOGGING - no redo is generated for the operation, so media recovery cannot restore the objects; a backup of the entire tablespace afterwards is advised.
CACHE/NOCACHE - CACHE specifies that the blocks for these objects be kept in the buffer cache rather than aged out quickly; NOCACHE is the default.
Altering tables
If you change NEXT, PCTINCREASE, MAXEXTENTS, FREELISTS or FREELIST GROUPS, this will not affect the extents that are already allocated. You cannot change INITIAL and MINEXTENTS with ALTER TABLE.
You can use the UNUSED_SPACE procedure (in DBMS_SPACE) to find the HWM (high water mark) of the segment. The HWM shows the highest point to which the segment has ever been filled.
Oracle deallocates storage on TRUNCATE unless TRUNCATE TABLE ORDERS REUSE STORAGE;
Reorganizing tables
The old segment is dropped only after you create a new segment.
Moving tables (queries are allowed while moving, but not DML, grants retained).
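A sketch (the names are illustrative):
ALTER TABLE ORDERS MOVE TABLESPACE APPL_DATA2;
Indexes on the moved table become UNUSABLE and must be rebuilt.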
Dropping a table
When you drop a table, its indexes, constraints, triggers and privileges are dropped. Views and snapshots defined on the table are not dropped, but are rendered invalid.
Analyzing tables
Validating structure
As a result of hardware problems or software bugs, some blocks can become corrupted. Oracle returns a corruption error only when the corrupted block is accessed. You can use ANALYZE ... VALIDATE STRUCTURE to validate the structure of a table. If any blocks are not readable, it returns an error, and the ROWIDs of the rows in the bad blocks are inserted into a table. You can specify the name of the table into which the ROWIDs are inserted; by default the table name is INVALID_ROWS. You can create the table using
SQL> @c:\oracle\ora90\rdbms\admin\utlvalid.sql
By default Oracle keeps information about chained rows in the CHAINED_ROWS table, created by
SQL> @c:\oracle\ora90\rdbms\admin\utlchain.sql
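A sketch of the listing step (the table name is illustrative):
ANALYZE TABLE ORDERS LIST CHAINED ROWS INTO CHAINED_ROWS;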
3. If there are migrated rows, create a temporary table to hold them.
CREATE TABLE TEMP_ORDERS AS
SELECT * FROM ORDERS
WHERE ROWID IN (SELECT HEAD_ROWID
FROM CHAINED_ROWS
WHERE OWNER_NAME='SCOTT'
AND TABLE_NAME='ORDERS');
Before deleting the row make sure you disable all foreign keys to ORDERS.
Collecting statistics
You can calculate exact statistics (COMPUTE) for a table, or sample a few rows and estimate the statistics (ESTIMATE) for large tables. When analyzing, Oracle collects the total number of rows, the number of chained rows, the number of blocks, the number of unused blocks, the average free space in each block, and the average row length. For example (the table name is illustrative):
ANALYZE TABLE ORDERS ESTIMATE STATISTICS SAMPLE 20 PERCENT;
To remove stats:
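For example (the table name is illustrative):
ANALYZE TABLE ORDERS DELETE STATISTICS;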
Table descriptions
You primarily use DBA_TABLES, USER_TABLES and ALL_TABLES to query info about the tables.
Column descriptions
DBA_TAB_COLUMNS, USER_TAB_COLUMNS and ALL_TAB_COLUMNS.
A row piece (the part of a row stored within one block) has two parts - a row header and the column data. The row header is about 3 bytes and describes the columns, whether the row is chained, and whether it is clustered. After the row header comes the column data. Each column's data has 2 parts - length and data. The length occupies 1 byte for data of up to 250 bytes and 3 bytes for longer data.
Using ROWID
Categories of ROWIDs:
Physical ROWID - identifies each row in a table, partition, subpartition or cluster.
Formats of ROWIDs:
Extended - the format used since Oracle8; base 64, 18 characters, for example:
ROWID ORDER_NUM
AAAFqsAADAAAAfTAAA 5945055
Restricted - the pre-Oracle8 format, base 16. The format is BBBBBBB.RRRR.FFFF,
where BBBBBBB is the data block, RRRR is the row number, and FFFF is the data file.
The DBMS_ROWID package can be used to interpret ROWIDs and convert between the restricted and extended formats.
Managing indexes
Bitmap - does not repeatedly store the index column values; each distinct value is treated as a key, and a bit is set for each corresponding ROWID. Bitmap indexes are for columns with low cardinality (for example a sex or day/night flag).
Reverse key B-tree index - if the key value is 54321, Oracle stores it reversed as 12345. These are useful when inserts to the table are always in ascending order of the indexed column, because reversing the key spreads the inserts across many leaf blocks instead of concentrating them in the last block.
Creating indexes
If you omit index storage parameters, Oracle assigns the tablespace defaults, except for PCTUSED (which cannot be specified for indexes). Consider a higher INITRANS for the index than for the corresponding table, because an index block can hold a larger number of rows than a table block.
Partitioning
Local prefixed - a local index whose leading (leftmost) columns are in the order of the partition key.
Local non-prefixed - a local index whose partition key columns are not the leading columns.
Global prefixed - a global index whose leading columns are the columns on which the index is partitioned.
Before creating function-based indexes you must set QUERY_REWRITE_ENABLED to TRUE, set QUERY_REWRITE_INTEGRITY to TRUSTED, and have COMPATIBLE set to 8.1.0 or higher.
CREATE INDEX IND_ORDERS
ON ORDERS (SUBSTR(PRODUCT_ID,1,2))
TABLESPACE USER_INDEX;
Index-organized tables are useful for tables in which data access is mostly through the primary key (such as lookup tables with descriptions). In an IOT the entire table is stored as part of the index, sorted by the primary key.
Dropping indexes
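For example (the index name is illustrative):
DROP INDEX IND_ORDERS;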
Analyzing indexes
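A sketch; VALIDATE STRUCTURE populates the INDEX_STATS view (the index name is illustrative):
ANALYZE INDEX IND_ORDERS VALIDATE STRUCTURE;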
Views on Indexes: DBA_INDEXES (index definitions and status), DBA_IND_COLUMNS (indexed columns), DBA_IND_EXPRESSIONS (function-based index expressions) and INDEX_STATS (populated by ANALYZE INDEX ... VALIDATE STRUCTURE).
Managing constraints
Types:
NOT NULL
CHECK
CHECK constraints cannot use subqueries, SYSDATE or ROWNUM; a column can have more than one CHECK constraint defined, and a checked column can still be NULL.
UNIQUE
PRIMARY KEY
FOREIGN KEY
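A sketch showing the constraint types (names are illustrative; ORDERS(ORDER_NUM) is assumed to have a primary key):
CREATE TABLE ORDER_ITEMS (
ITEM_ID NUMBER CONSTRAINT PK_ORDER_ITEMS PRIMARY KEY,
ORDER_NUM NUMBER NOT NULL
CONSTRAINT FK_ITEMS_ORDERS REFERENCES ORDERS (ORDER_NUM),
SKU VARCHAR2(20) CONSTRAINT UQ_ITEMS_SKU UNIQUE,
QTY NUMBER CONSTRAINT CK_ITEMS_QTY CHECK (QTY > 0));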
Dropping constraints
You can use the EXCEPTIONS INTO clause to find the rows that violate a referential integrity or uniqueness condition (usually the EXCEPTIONS table, created by SQL> @c:\oracle\ora90\rdbms\admin\utlexcpt.sql).
Validated constraints
Constraint states:
ENABLE VALIDATE - the default for ENABLE; the existing data is checked to confirm it conforms to the constraint.
ENABLE NOVALIDATE - does not validate existing data, but does validate future data.
DISABLE VALIDATE - the constraint is disabled (any index used to enforce it is dropped), but the constraint remains valid. No DML is allowed on the table because new data cannot be verified.
DISABLE NOVALIDATE - the default for DISABLE; the constraint is disabled and neither future nor existing data is checked.
By default, Oracle checks whether the data conforms to the constraint when the statement is executed.
Oracle allows you to change this behavior if the constraint is created using deferrable clause (NOT
DEFERRABLE is default). INITIALLY IMMEDIATE specifies that the constraint be checked for
conformance at the end of each statement. INITIALLY DEFERRED checks for conformance at the end of
transaction. You have to drop and recreate constraint (can't use ALTER TABLE).
If the constraint is created as DEFERRABLE, you can switch its checking behavior by using SET CONSTRAINTS or ALTER SESSION SET CONSTRAINTS.
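For example:
SET CONSTRAINTS ALL DEFERRED;
ALTER SESSION SET CONSTRAINTS = DEFERRED;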
Most resource limits are set at the session level. When a user exceeds a limit, Oracle aborts the current operation, rolls back the changes, and returns an error.
The following parameters control the session:
CPU_PER_SESSION - limits amount of CPU time a session can use (hundredths of a second)
CPU_PER_CALL - limits amount of CPU time a single SQL statement can use (hundredths of a second).
Good for runaway queries, but be careful for batch jobs.
LOGICAL_READS_PER_SESSION - limits number of data blocks read in session, including blocks from
memory and physical reads.
LOGICAL_READS_PER_CALL - limits the number of data blocks read by a single SQL statement,
including blocks from memory and physical reads.
PRIVATE_SGA - limits amount of space allocated in the SGA for private areas, per session. Private areas
for SQL and PL/SQL are created in the multithreaded architecture. The limit does not apply to dedicated
server architecture.
CONNECT_TIME - the maximum number of minutes a session can stay connected (total elapsed time, not CPU time). When exceeded, the transaction is rolled back and the user is disconnected.
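A profile sketch (the limit values are illustrative):
CREATE PROFILE ACCOUNTING_USER LIMIT
CPU_PER_SESSION 10000
CPU_PER_CALL 3000
LOGICAL_READS_PER_CALL 100000
CONNECT_TIME 60;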
Managing passwords
Account locking - number of failed login attempts and number of days the password will be locked
Password expiration - how often password can be changed, whether passwords can be reused, and the
grace period after which the password is to be changed.
Password complexity - should not be the same as user id, simple words, etc.
FAILED_LOGIN_ATTEMPTS - the number of failed login attempts allowed before the account is locked.
PASSWORD_GRACE_TIME - the number of days of grace, during which the user gets a warning, before an expired password must be changed.
PASSWORD_REUSE_TIME - the number of days that must pass before a password can be reused.
Managing profiles
Composite limit
FUNCTION SYS.<FUNCTION_NAME>
( <userid_variable> IN VARCHAR2 (30),
<password_variable> IN VARCHAR2 (30),
<old_PASSWORD_VARIABLE> IN VARCHAR2 (30) )
RETURN BOOLEAN
Altering profiles
ALTER PROFILE ACCOUNTING_USER LIMIT
PASSWORD_VERIFY_FUNCTION VERIFY_FUNCTION
COMPOSITE_LIMIT 1500;
Dropping profiles
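For example (CASCADE reassigns users who have this profile to the DEFAULT profile):
DROP PROFILE ACCOUNTING_USER CASCADE;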
Assigning profiles
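For example (the user name is illustrative):
ALTER USER SCOTT PROFILE ACCOUNTING_USER;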
Users
Authenticating users
The passwords stored in the database are encrypted. By default the password is not encrypted when sent
over network. To encrypt the password you must set the ORA_ENCRYPT_LOGIN to TRUE on the client
machine.
When you use authentication by the OS, Oracle verifies the OS login account and connects to the database (users do not need to specify a password). Oracle does not store the passwords of OS-authenticated users, but they must have a username in the database. The parameter OS_AUTHENT_PREFIX determines the prefix for OS authentication. By default, the value is OPS$ (for OS user ALEX the database username will be OPS$ALEX). When ALEX issues CONNECT without specifying a user name, he connects as OPS$ALEX. You can also set OS_AUTHENT_PREFIX to "" (an empty string).
To create an OS user:
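CREATE USER OPS$ALEX IDENTIFIED EXTERNALLY;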
LICENSE_MAX_SESSIONS - sets the maximum number of concurrent sessions; once the limit is reached, only users with the RESTRICTED SESSION privilege are allowed to connect. The default is 0 (unlimited). Set this parameter if your license is based on concurrent database usage.
LICENSE_SESSIONS_WARNING - a warning threshold on concurrent sessions; when it is reached, Oracle writes a message to the alert log and warns connecting users who have the RESTRICTED SESSION privilege.
LICENSE_MAX_USERS - set this parameter if your license is based on the total number of named users.
ALTER SYSTEM
SET LICENSE_MAX_SESSIONS = 256
LICENSE_SESSIONS_WARNING = 200;
Managing privileges
ON COMMIT REFRESH – grants the privilege to create a refresh-on-commit snapshots on the table.
QUERY REWRITE – grants the privilege to create a materialized view for query rewrite using the
specified table.
WRITE – allows external table agent to write a log file or a bad file to the directory. This is associated only
with external tables.
Any privilege received on a table provides the grantee the privilege to lock the table.
Even if you have the DBA privilege, to grant privileges on objects owned by another user you must have been granted the appropriate privilege WITH GRANT OPTION.
Multiple privileges can be granted to multiple users – GRANT INSERT, UPDATE, SELECT ON
CUSTOMER TO ADMIN_ROLE, JULIE, SCOTT;
The difference between SYSOPER and SYSDBA – SYSDBA can create databases.
To protect the dictionary, Oracle provides the O7_DICTIONARY_ACCESSIBILITY parameter. If it is set to TRUE, any user with a system privilege containing the ANY keyword (such as SELECT ANY TABLE) can access the SYS-owned dictionary tables.
SELECT ANY, INSERT ANY, UPDATE ANY are system privileges, they do not apply to any particular
object.
Some info:
If several users granted the same privilege on an object to a user and only one of them revokes it, the grantee can still perform the action.
Creating roles
When you create a database, Oracle creates 6 predefined roles. These roles are defined in the sql.bsq
script.
CONNECT, RESOURCE, DBA, SELECT_CATALOG_ROLE (the ability to query the dictionary views and tables), EXECUTE_CATALOG_ROLE (the privilege to execute the SYS-owned dictionary packages), and DELETE_CATALOG_ROLE (the ability to delete from the audit trail table SYS.AUD$). When you run catproc.sql, the script executes catexp.sql, which creates two more roles: EXP_FULL_DATABASE and IMP_FULL_DATABASE.
Removing roles
If a role is not a default role for a user, it is not enabled automatically when the user connects.
You enable or disable roles using the SET ROLE command. You can specify the maximum number of
roles that can be enabled in the MAX_ENABLED_ROLES (20 is default).
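For example (the role names are illustrative):
SET ROLE APP_ADMIN, CONNECT;
SET ROLE NONE;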
When you create a database, Oracle creates the SYS.AUD$ table, called the audit trail. To enable auditing, set the AUDIT_TRAIL parameter to DB (or TRUE) or OS.
Statement auditing - for example, AUDIT SELECT TABLE BY SCOTT audits all SELECT statements issued by that user.
You can restrict auditing to specific users with the BY <user> clause.
To audit the connection and disconnection from the database use AUDIT SESSION
To audit only successful logins - AUDIT SESSION WHENEVER SUCCESSFUL;
To audit only failed logins AUDIT SESSION WHENEVER NOT SUCCESSFUL;
To audit successful logins of specific users - AUDIT SESSION BY JOHN, ALEX WHENEVER SUCCESSFUL;
To audit successful updates, deletes on a table
AUDIT UPDATE, DELETE ON JOHN.CUSTOMER
BY ACCESS WHENEVER SUCCESSFUL;
You define the database character set when you create the database, using the CHARACTER SET clause (the default is US7ASCII). Other widely used character sets are WE8ISO8859P1 (the Western European 8-bit ISO 8859 Part 1 standard) and UTF8 (a variable-width Unicode encoding).
You can change the character set only if the new character set is a superset of the old one: ALTER DATABASE CHARACTER SET WE8ISO8859P1;
Unicode is a universal character encoding scheme that allows you to store information using a single
character set, regardless of platform or language.
UTF-16 is the 16-bit encoding of Unicode.
NLS Parameters –
NLS_LANG – only as environment variable. NLS_LANG has 3 parts: the language, the territory, and the
character set. AMERICAN_AMERICA.WE8ISO8859P1.
NLS_LANGUAGE - specified at a session level or as init parameter. Sets the language to be used. This
session param overrides the NLS_LANG.
NLS_TERRITORY - specified at a session level or as init parameter. This session param overrides the
NLS_LANG.
NLS_SORT - at a session level, as an environment var or as init parameter. Specifies language to use for
sorting. ALTER SESSION SET NLS_SORT = GERMAN;
SELECT * FROM CUSTOMERS ORDER BY NAME;