Module 1 - Oracle Architecture
9/11/2014
http://www.siue.edu/~dbock/cmis565/module1-architecture.htm
System users can connect to an Oracle database through SQL*Plus or through an application
program like the Internet Developer Suite (the program becomes the system user). This connection
enables users to execute SQL statements.
The act of connecting creates a communication pathway between a user process and an Oracle
Server. As is shown in the figure above, the User Process communicates with the Oracle Server
through a Server Process. The User Process executes on the client computer. The Server Process
executes on the server computer, and actually executes SQL statements submitted by the system
user.
The figure shows a one-to-one correspondence between the User and Server Processes. This is
called a Dedicated Server connection. An alternative configuration is to use a Shared
Server where more than one User Process shares a Server Process.
Sessions: When a user connects to an Oracle server, this is termed a session. The User Global
Area is session memory and these memory structures are described later in this document. The
session starts when the Oracle server validates the user for connection. The session ends when the
user logs out (disconnects) or if the connection terminates abnormally (network failure or client
computer failure).
A user can typically have more than one concurrent session, e.g., the user may connect using
SQL*Plus and also connect using Internet Developer Suite tools at the same time. The limit of
concurrent session connections is controlled by the DBA.
If a system user attempts to connect and the Oracle Server is not running, the system user
receives the Oracle Not Available error message.
o The DBA can optionally set an aggregate target size for the PGA rather than managing PGA
work areas individually.
Manual memory management:
o Instead of setting the total memory size, the DBA sets many initialization parameters to
manage components of the SGA and instance PGA individually.
If you create a database with Database Configuration Assistant (DBCA) and choose the basic
installation option, then automatic memory management is the default.
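As a sketch of what this looks like in practice, the following commands enable automatic memory management on an Oracle 11g instance. The 2G figures are purely illustrative values, not recommendations:

```sql
-- MEMORY_MAX_TARGET is static, so the change goes to the SPFILE
-- and takes effect at the next instance startup.
ALTER SYSTEM SET MEMORY_MAX_TARGET = 2G SCOPE = SPFILE;
ALTER SYSTEM SET MEMORY_TARGET = 2G SCOPE = SPFILE;
-- Setting these to 0 lets Oracle manage the SGA and PGA
-- entirely within MEMORY_TARGET.
ALTER SYSTEM SET SGA_TARGET = 0 SCOPE = SPFILE;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 0 SCOPE = SPFILE;
```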
The memory structures include three areas of memory:
System Global Area (SGA): allocated when an Oracle Instance starts up.
Program Global Area (PGA): allocated when a Server Process starts up.
User Global Area (UGA): allocated when a user connects to create a session.
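These areas can be inspected from SQL*Plus. A quick illustrative check using two of the standard dynamic performance views:

```sql
-- V$SGA summarizes the SGA allocated at instance startup.
SELECT * FROM V$SGA;

-- V$PGASTAT reports current PGA allocation across all server processes.
SELECT name, value
FROM V$PGASTAT
WHERE name LIKE 'total PGA%';
```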
The content of the PGA varies, but as shown in the figure above, generally includes the following:
Private SQL Area: Stores information for a parsed SQL statement, including bind variable values
and runtime memory allocations. A user session issuing SQL statements has a Private SQL Area
that may be associated with a Shared SQL Area if the same SQL statement is being executed by
more than one system user. This often happens in OLTP environments where many users are
executing the same application program.
o In a Dedicated Server environment, the Private SQL Area is located in the Program Global
Area.
o In a Shared Server environment, the Private SQL Area is located in the System Global Area.
Session Memory: Memory that holds session variables and other session information.
SQL Work Areas: Memory allocated for sort, hash-join, bitmap merge, and bitmap create types
of operations.
o Oracle 9i and later versions enable automatic sizing of the SQL Work Areas by setting
the WORKAREA_SIZE_POLICY = AUTO parameter (this is the default!)
and PGA_AGGREGATE_TARGET = n (where n is some amount of memory established by
the DBA). However, the DBA can let the Oracle DBMS determine the appropriate amount
of memory.
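For example, a DBA might enable automatic work-area sizing as follows; the 256M figure is purely illustrative:

```sql
ALTER SYSTEM SET WORKAREA_SIZE_POLICY = AUTO;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 256M;
```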
A session that loads a PL/SQL package into memory has the package state stored to the
UGA. The package state is the set of values stored in all the package variables at a specific time.
The state changes as program code changes the values of the variables. By default, package
variables are unique to and persist for the life of the session.
The OLAP page pool is also stored in the UGA. This pool manages OLAP data pages, which are
equivalent to data blocks. The page pool is allocated at the start of an OLAP session and released
at the end of the session. An OLAP session opens automatically whenever a user queries a
dimensional object such as a cube.
Note: Oracle OLAP is a multidimensional analytic engine embedded in Oracle Database
11g. Oracle OLAP cubes deliver sophisticated calculations using simple SQL queries, producing
results with speed-of-thought response times.
The UGA must be available to a database session for the life of the session. For this reason, the
UGA cannot be stored in the PGA when using a shared server connection because the PGA is
specific to a single process. Therefore, the UGA is stored in the SGA when using shared server
connections, enabling any shared server process access to it. When using a dedicated
server connection, the UGA is stored in the PGA.
sga_target=1610612736
With automatic SGA memory management, the different SGA components are flexibly sized to
adapt to the SGA available.
Setting a single parameter simplifies the administration task: the DBA specifies only the amount of
SGA memory available to an instance and can ignore the sizes of the individual
components. No out-of-memory errors are generated unless the system has actually run out of
memory. No manual tuning effort is needed.
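For instance, the sga_target value shown above (1610612736 bytes, which is 1536M) could be set dynamically as follows:

```sql
-- 1536M corresponds to the 1610612736-byte figure above.
ALTER SYSTEM SET SGA_TARGET = 1536M SCOPE = BOTH;
```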
The SGA_TARGET initialization parameter reflects the total size of the SGA and includes memory
for the following components:
Fixed SGA and other internal allocations needed by the Oracle Database instance
The log buffer
The shared pool
The Java pool
The buffer cache
The keep and recycle buffer caches (if specified)
Nonstandard block size buffer caches (if specified)
The Streams Pool
If SGA_TARGET is set to a value greater than SGA_MAX_SIZE at startup, then the
SGA_MAX_SIZE value is bumped up to accommodate SGA_TARGET.
When you set a value for SGA_TARGET, Oracle Database 11g automatically sizes the most
commonly configured components, including:
The shared pool (for SQL and PL/SQL execution)
The Java pool (for Java execution state)
The large pool (for large allocations such as RMAN backup buffers)
The buffer cache
There are a few SGA components whose sizes are not automatically adjusted. The DBA must
specify the sizes of these components explicitly, if they are needed by an application. Such
components are:
Keep/Recycle buffer caches (controlled
by DB_KEEP_CACHE_SIZE and DB_RECYCLE_CACHE_SIZE)
Additional buffer caches for non-standard block sizes (controlled by DB_nK_CACHE_SIZE, n =
{2, 4, 8, 16, 32})
Streams Pool (controlled by the new parameter STREAMS_POOL_SIZE)
The granule size that is currently being used for the SGA for each component can be viewed in the
view V$SGAINFO. The size of each component and the time and type of the last resize operation
performed on each component can be viewed in the view V$SGA_DYNAMIC_COMPONENTS.
SQL> SELECT * FROM v$sgainfo;

NAME                                  BYTES RES
-------------------------------- ---------- ---
Fixed SGA Size                      2084296 No
Redo Buffers                       14692352 No
Buffer Cache Size                 587202560 Yes
Shared Pool Size                  956301312 Yes
Large Pool Size                    16777216 Yes
Java Pool Size                     33554432 Yes
Streams Pool Size                         0 Yes
Granule Size                       16777216 No
Maximum SGA Size                 1610612736 No
Startup overhead in Shared Pool    67108864 No
Free SGA Memory Available                 0 Yes

11 rows selected.
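The resize history of the automatically tuned components can be examined with a query such as this sketch against V$SGA_DYNAMIC_COMPONENTS:

```sql
SELECT component, current_size, last_oper_type, last_oper_time
FROM V$SGA_DYNAMIC_COMPONENTS;
```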
Shared Pool
The Shared Pool is a memory structure that is shared by all system users.
It caches various types of program data. For example, the shared pool stores parsed SQL,
PL/SQL code, system parameters, and data dictionary information.
The shared pool is involved in almost every operation that occurs in the database. For
example, if a user executes a SQL statement, then Oracle Database accesses the shared
pool.
It consists of both fixed and variable structures.
The variable component grows and shrinks depending on the demands placed on memory.
Library Cache
Memory is allocated to the Library Cache whenever an SQL statement is parsed or a program unit
is called. This enables storage of the most recently used SQL and PL/SQL statements.
If the Library Cache is too small, the Library Cache must purge statement definitions in order to have
space to load new SQL and PL/SQL statements. Actual management of this memory structure is
through a Least-Recently-Used (LRU) algorithm. This means that the SQL and PL/SQL
statements that are oldest and least recently used are purged when more storage space is needed.
The Library Cache is composed of two memory subcomponents:
Shared SQL: This stores/shares the execution plan and parse tree for SQL statements, as
well as PL/SQL statements such as functions, packages, and triggers. If a system user
executes an identical statement, then the statement does not have to be parsed again in order
to execute the statement.
Private SQL Area: Each session issuing a SQL statement has a private SQL area; with a
dedicated server this is in the session's PGA, while with a shared server it is kept in the SGA.
o Each user that submits the same statement has a private SQL area pointing to the
same shared SQL area.
o Many private SQL areas in separate PGAs can be associated with the same shared
SQL area.
o The figure depicts two different client processes issuing the same SQL statement;
the parsed solution is already in the Shared SQL Area.
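Cursor sharing can be observed in the V$SQL view. In this sketch the EMPLOYEES table name is purely hypothetical; a high EXECUTIONS count relative to PARSE_CALLS suggests the shared cursor is being reused rather than re-parsed:

```sql
-- EMPLOYEES is a hypothetical table name used for illustration.
SELECT sql_text, executions, parse_calls
FROM V$SQL
WHERE sql_text LIKE 'SELECT * FROM employees%';
```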
Buffer Caches
A number of buffer caches are maintained in memory in order to improve system response time.
Because tablespaces that store Oracle tables can use different (non-standard) block sizes, there can
be more than one Database Buffer Cache allocated to match block sizes in the cache with the block
sizes in the non-standard tablespaces.
The size of the Database Buffer Caches can be controlled by the
parameters DB_CACHE_SIZE and DB_nK_CACHE_SIZE to dynamically change the memory
allocated to the caches without restarting the Oracle instance.
You can dynamically change the size of the Database Buffer Cache with the ALTER SYSTEM
command like the one shown here:
ALTER SYSTEM SET DB_CACHE_SIZE = 96M;
You can have the Oracle Server gather statistics about the Database Buffer Cache to help you size
it to achieve an optimal workload for the memory allocation. This information is displayed in
the V$DB_CACHE_ADVICE view. In order for statistics to be gathered, you can dynamically alter
the system by using the ALTER SYSTEM SET DB_CACHE_ADVICE = {OFF | ON |
READY} command. However, gathering statistics on system performance always incurs some
overhead that will slow down system performance.
SQL> ALTER SYSTEM SET db_cache_advice = ON;
System altered.
SQL> DESC V$DB_cache_advice;
 Name                                      Null?    Type
 ----------------------------------------- -------- ------------
 ID                                                 NUMBER
 NAME                                               VARCHAR2(20)
 BLOCK_SIZE                                         NUMBER
 ADVICE_STATUS                                      VARCHAR2(3)
 SIZE_FOR_ESTIMATE                                  NUMBER
 SIZE_FACTOR                                        NUMBER
 BUFFERS_FOR_ESTIMATE                               NUMBER
 ESTD_PHYSICAL_READ_FACTOR                          NUMBER
 ESTD_PHYSICAL_READS                                NUMBER
 ESTD_PHYSICAL_READ_TIME                            NUMBER
 ESTD_PCT_OF_DB_TIME_FOR_READS                      NUMBER
 ESTD_CLUSTER_READS                                 NUMBER
 ESTD_CLUSTER_READ_TIME                             NUMBER
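Once the advice has been gathered, a query along these lines shows the estimated effect of different cache sizes (the DEFAULT pool name and the 8K block size here are assumptions; adjust to your configuration):

```sql
SELECT size_for_estimate, size_factor, estd_physical_read_factor
FROM V$DB_CACHE_ADVICE
WHERE name = 'DEFAULT'
AND block_size = 8192
ORDER BY size_for_estimate;
```

A SIZE_FACTOR of 1 marks the current cache size; candidate sizes whose ESTD_PHYSICAL_READ_FACTOR is well below 1 would reduce physical reads.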
The Redo Log Buffer memory object stores images of all changes made to database blocks.
Database blocks typically store several table rows of organizational data. This means that if a
single column value from one row in a block is changed, the block image is stored. Changes
include INSERT, UPDATE, DELETE, CREATE, ALTER, or DROP.
LGWR writes redo sequentially to disk while DBWn performs scattered writes of data blocks to
disk.
o Scattered writes tend to be much slower than sequential writes.
o Because LGWR enables users to avoid waiting for DBWn to complete its slow writes, the
database delivers better performance.
The Redo Log Buffer is a circular buffer that is reused over and over. As the buffer fills up, copies
of the images are stored to the Redo Log Files that are covered in more detail in a later module.
Large Pool
The Large Pool is an optional memory structure that primarily relieves the memory burden placed
on the Shared Pool. The Large Pool is used for the following tasks if it is allocated:
Allocating space for session memory requirements from the User Global Area where a Shared
Server is in use.
Transactions that interact with more than one database, e.g., a distributed database scenario.
Backup and restore operations by the Recovery Manager (RMAN) process.
o RMAN uses this only if the BACKUP_DISK_IO = n and BACKUP_TAPE_IO_SLAVES =
TRUE parameters are set.
o If the Large Pool is too small, memory allocation for backup will fail and memory will be
allocated from the Shared Pool.
Parallel execution message buffers for parallel server
operations. The PARALLEL_AUTOMATIC_TUNING = TRUE parameter must be set.
The Large Pool size is set with the LARGE_POOL_SIZE parameter; this is not a dynamic
parameter. It does not use an LRU list to manage memory.
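An illustrative sizing (the 64M figure is an example only); because the parameter is not dynamic, the change is recorded in the SPFILE and takes effect at the next instance startup:

```sql
ALTER SYSTEM SET LARGE_POOL_SIZE = 64M SCOPE = SPFILE;
```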
Java Pool
The Java Pool is an optional memory object, but is required if the database has Oracle Java
installed and in use for Oracle JVM (Java Virtual Machine).
The size is set with the JAVA_POOL_SIZE parameter that defaults to 24MB.
The Java Pool is used for memory allocation to parse Java commands and to store data
associated with Java commands.
Storing Java code and data in the Java Pool is analogous to SQL and PL/SQL code cached in
the Shared Pool.
Streams Pool
This pool stores data and control structures to support the Oracle Streams feature of Oracle
Enterprise Edition.
Oracle Streams manages sharing of data and events in a distributed environment.
It is sized with the parameter STREAMS_POOL_SIZE.
If STREAMS_POOL_SIZE is not set or is zero, the size of the pool grows dynamically.
Processes
You need to understand three different types of Processes:
User Process: Starts when a database user requests to connect to an Oracle Server.
Server Process: Starts when a User Process requests a connection and makes the connection
to the Oracle Instance on behalf of the User Process.
Background Processes: These start when an Oracle Instance is started up.
Client Process
In order to use Oracle, you must connect to the database. This must occur whether you're using
SQL*Plus, an Oracle tool such as Designer or Forms, or an application program.
Connecting generates a User Process (a memory object) that makes programmatic calls through your
user interface (SQL*Plus, Integrated Developer Suite, or application program); this creates a session
and causes the generation of a Server Process that is either dedicated or shared.
Server Process
A Server Process is the go-between for a Client Process and the Oracle Instance.
In a Dedicated Server environment, there is a single Server Process to serve each Client
Process.
In a Shared Server environment, a Server Process can serve several User Processes, although
with some performance reduction.
Allocation of server processes in a dedicated environment versus a shared environment is
covered in further detail in the Oracle11g Database Performance Tuning course offered by
Oracle Education.
Background Processes
As is shown here, there are mandatory, optional, and slave background processes that are
started whenever an Oracle Instance starts up. These background processes serve all system
users. We will cover the mandatory processes in detail.
Mandatory Background Processes
Process Monitor Process (PMON)
System Monitor Process (SMON)
Database Writer Process (DBWn)
Log Writer Process (LGWR)
Checkpoint Process (CKPT)
Manageability Monitor Processes (MMON and MMNL)
Recoverer Process (RECO)
Optional Processes
Archiver Process (ARCn)
Coordinator Job Queue (CJQ0)
Dispatcher (number nnn) (Dnnn)
Others
This query will display all background processes running to serve a database:
SELECT PNAME
FROM V$PROCESS
WHERE PNAME IS NOT NULL
ORDER BY PNAME;
PMON
The Process Monitor (PMON) monitors other background processes.
It is a cleanup type of process that cleans up after failed processes.
Examples include the dropping of a user connection due to a network failure or the abnormal
termination (ABEND) of a user application program.
It cleans up the database buffer cache and releases resources that were used by a failed user
process.
It does the tasks shown in the figure below.
SMON
The System Monitor (SMON) does system-level cleanup duties.
It is responsible for instance recovery by applying entries in the online redo log files to the
datafiles.
Other processes can call SMON when it is needed.
It also performs other activities as outlined in the figure shown below.
If an Oracle Instance fails, all information in memory not written to disk is lost. SMON is responsible
for recovering the instance when the database is started up again. It does the following:
Rolls forward to recover data that was recorded in a Redo Log File, but that had not yet been
recorded to a datafile by DBWn. SMON reads the Redo Log Files and applies the changes to
the data blocks. This recovers all transactions that were committed because these were
written to the Redo Log Files prior to system failure.
Opens the database to allow system users to logon.
Rolls back uncommitted transactions.
SMON also does limited space management. It combines (coalesces) adjacent areas of free space
in the database's datafiles for tablespaces that are dictionary managed.
It also deallocates temporary segments to create free space in the datafiles.
LGWR
The Log Writer (LGWR) writes contents from the Redo Log Buffer to the Redo Log File that is in
use.
These are sequential writes since the Redo Log Files record database modifications based on
the actual time that the modification takes place.
LGWR actually writes before the DBWn writes and only confirms that a COMMIT operation
has succeeded when the Redo Log Buffer contents are successfully written to disk.
LGWR can also call the DBWn to write contents of the Database Buffer Cache to disk.
The LGWR writes according to the events illustrated in the figure shown below.
CKPT
The Checkpoint (CKPT) process writes information to update the database control files and headers
of datafiles.
A checkpoint identifies a point in time with regard to the Redo Log Files where instance
recovery is to begin should it be necessary.
It can tell DBWn to write blocks to disk.
A checkpoint is taken at a minimum, once every three seconds.
Think of a checkpoint record as a starting point for recovery. DBWn will have completed writing all
buffers from the Database Buffer Cache to disk prior to the checkpoint, thus those records will not
require recovery. This does the following:
Ensures modified data blocks in memory are regularly written to disk; CKPT can call the
DBWn process in order to ensure this and does so when writing a checkpoint record.
Reduces Instance Recovery time by minimizing the amount of work needed for recovery since
only Redo Log File entries processed since the last checkpoint require recovery.
Causes all committed data to be written to datafiles during database shutdown.
If a Redo Log File fills up and a switch is made to a new Redo Log File (this is covered in more detail
in a later module), the CKPT process also writes checkpoint information into the headers of the
datafiles.
Checkpoint information written to control files includes the system change number (the SCN is a
number stored in the control file and in the headers of the database files that are used to ensure that
all files in the system are synchronized), location of which Redo Log File is to be used for recovery,
and other information.
CKPT does not write data blocks or redo blocks to disk; it calls DBWn and LGWR as necessary.
MMON and MMNL
The Manageability Monitor Lite Process (MMNL) writes statistics from the Active Session History
(ASH) buffer in the SGA to disk. MMNL writes to disk when the ASH buffer is full.
The information stored by these processes is used for performance tuning; we survey performance
tuning in a later module.
RECO
The Recoverer Process (RECO) is used to resolve failures of distributed transactions in a
distributed database.
Consider a database that is distributed on two servers, one in St. Louis and one in Chicago.
Further, the database may be distributed on servers with two different operating systems, e.g.,
Linux and Windows.
The RECO process of a node automatically connects to other databases involved in an
in-doubt distributed transaction.
When RECO reestablishes a connection between the databases, it automatically resolves all
in-doubt transactions, removing from each database's pending transaction table any rows that
correspond to the resolved transactions.
ARCn
While the Archiver (ARCn) is an optional background process, we cover it in more detail because it
is almost always used for production systems storing mission critical information.
The ARCn process must be used to recover from loss of a physical disk drive for systems that
are "busy" with lots of transactions being completed.
It performs the tasks listed below.
When a Redo Log File fills up, Oracle switches to the next Redo Log File.
The DBA creates several of these and the details of creating them are covered in a later
module.
If all Redo Log Files fill up, then Oracle switches back to the first one and uses them in a
round-robin fashion by overwriting ones that have already been used.
Overwritten Redo Log Files have information that, once overwritten, is lost forever.
ARCHIVELOG Mode:
If the database is in what is termed ARCHIVELOG mode, then as the Redo Log Files fill up, they
are individually written to Archived Redo Log Files by ARCn.
LGWR does not overwrite a Redo Log File until archiving has completed.
Committed data is not lost forever and can be recovered in the event of a disk failure.
Only the contents of the SGA will be lost if an Instance fails.
In NOARCHIVELOG Mode:
The Redo Log Files are overwritten and not archived.
Recovery can only be made to the last full backup of the database files.
All committed transactions after the last full backup are lost, and you can see that this could
cost the firm a lot of $$$.
When running in ARCHIVELOG mode, the DBA is responsible for ensuring that the Archived Redo Log
Files do not consume all available disk space! Usually after two complete backups are made, any
Archived Redo Log Files for prior backups are deleted.
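The current mode can be checked, and the database switched into ARCHIVELOG mode, roughly as follows (the switch requires a clean shutdown and a mounted database):

```sql
-- Check the current archiving mode.
SELECT log_mode FROM V$DATABASE;

-- Switch modes: shut down cleanly, mount, change mode, reopen.
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```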
Slave Processes
Slave processes are background processes that perform work on behalf of other processes.
Innn: I/O slave processes -- simulate asynchronous I/O for systems and devices that do not
support it. In asynchronous I/O, there is no timing requirement for transmission, enabling other
processes to start before the transmission has finished.
For example, assume that an application writes 1000 blocks to a disk on an operating system
that does not support asynchronous I/O.
Each write occurs sequentially and waits for a confirmation that the write was successful.
With asynchronous I/O, the application can write the blocks in bulk and perform other work
while waiting for a response from the operating system that all blocks were written.
Parallel Query Slaves -- In parallel execution or parallel processing, multiple processes work
together simultaneously to run a single SQL statement.
By dividing the work among multiple processes, Oracle Database can run the statement more
quickly.
For example, four processes handle four different quarters in a year instead of one process
handling all four quarters by itself.
Parallel execution reduces response time for data-intensive operations on large databases
such as data warehouses. Symmetric multiprocessing (SMP) and clustered systems gain the
largest performance benefits from parallel execution because statement processing can be
split up among multiple CPUs. Parallel execution can also benefit certain types of OLTP and
hybrid systems.
Logical Structure
It is helpful to understand how an Oracle database is organized in terms of a logical structure that is
used to organize physical objects.
Tablespaces can be brought online and taken offline for purposes of backup and
management, except for the SYSTEM tablespace, which must always be online.
Tablespaces can be in either read-only or read-write status.
Datafile: Tablespaces are stored in datafiles, which are physical disk objects.
A datafile can only store objects for a single tablespace, but a tablespace may have more than
one datafile; this happens when a disk drive device fills up and a tablespace needs to be
expanded, so it is expanded onto a new disk drive.
The DBA can change the size of a datafile to make it smaller or larger. The file can also grow
in size dynamically as the tablespace grows.
Segment: When logical storage objects are created within a tablespace, for example, an employee
table, a segment is allocated to the object.
Obviously a tablespace typically has many segments.
A segment cannot span tablespaces but can span datafiles that belong to a single tablespace.
Extent: Each object has one segment which is a physical collection of extents.
Extents are simply collections of contiguous disk storage blocks. A logical storage object
such as a table or index always consists of at least one extent; ideally, the initial extent
allocated to an object will be large enough to store all data that is initially loaded.
As a table or index grows, additional extents are added to the segment.
A DBA can add extents to segments in order to tune performance of the system.
An extent cannot span a datafile.
Block: The Oracle Server manages data at the smallest unit in what is termed a block or data
block. Data are actually stored in blocks.
A physical block is the smallest addressable location on a disk drive for read/write operations.
An Oracle data block consists of one or more physical blocks (operating system blocks), so the data
block, if larger than an operating system block, should be an even multiple of the operating system
block size, e.g., if the Linux operating system block size is 2K or 4K, then the Oracle data block
should be 2K, 4K, 8K, 16K, etc., in size. This optimizes I/O.
The data block size is set at the time the database is created and cannot be changed. It is set with
the DB_BLOCK_SIZE parameter. The maximum data block size depends on the operating system.
Thus, the Oracle database architecture includes both logical and physical structures as follows:
Physical: Control files; Redo Log Files; Datafiles; Operating System Blocks.
Logical: Tablespaces; Segments; Extents; Data Blocks.
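These structures can be explored through the data dictionary. A sketch follows; the USERS tablespace name is an assumption, so substitute any tablespace present in your database:

```sql
-- Tablespaces and their datafiles.
SELECT tablespace_name, status FROM DBA_TABLESPACES;
SELECT file_name, bytes
FROM DBA_DATA_FILES
WHERE tablespace_name = 'USERS';

-- Segments and their extents for objects owned by the current user.
SELECT segment_name, segment_type, extents, blocks
FROM USER_SEGMENTS;

-- The data block size fixed at database creation.
SHOW PARAMETER db_block_size
```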
Processing a query:
Parse:
o Search for identical statement in the Shared SQL Area.
o Check syntax, object names, and privileges.
o Lock objects used during parse.
o Create and store execution plan.
Bind: Obtains values for variables.
Execute: Process statement.
Fetch: Return rows to user process.
Processing a DML statement:
Parse: Same as the parse phase used for processing a query.
Bind: Same as the bind phase used for processing a query.
Execute:
o If the data and undo blocks are not already in the Database Buffer Cache, the server
process reads them from the datafiles into the Database Buffer Cache.
o The server process places locks on the rows that are to be modified. The undo block is
used to store the before image of the data, so that the DML statements can be rolled
back if necessary.
o The data blocks record the new values of the data.
o The server process records the before image to the undo block and updates the data
block. Both of these changes are made in the Database Buffer Cache. Any changed
blocks in the Database Buffer Cache are marked as dirty buffers. That is, buffers that
are not the same as the corresponding blocks on the disk.
o The processing of a DELETE or INSERT command uses similar steps. The before image
for a DELETE contains the column values in the deleted row, and the before image of an
INSERT contains the row location information.
Processing a DDL statement:
The execution of DDL (Data Definition Language) statements differs from the execution of
DML (Data Manipulation Language) statements and queries, because the success of a DDL
statement requires write access to the data dictionary.
For these statements, parsing actually includes parsing, data dictionary lookup, and
execution. Transaction management, session management, and system management SQL
statements are processed using the parse and execute stages. To re-execute them, simply
perform another execute.
END OF NOTES