CS409 Complete Handouts
Data
Data are the raw bits and pieces of information with no context: raw, unprocessed facts that are not necessarily meaningful to a human being on their own. If I told you, “45, 32, 41, 75,” you would not have learned anything.
The numbers 10, 22, and 119 are data: one person may think they are lengths, while another may think they are room numbers. Similarly, 1.5, 2.5, and 31.5 may represent the lengths of some iron rods.
Data can be quantitative or qualitative. Quantitative data is numeric, the result of a measurement, count, or some other
mathematical calculation. Qualitative data is descriptive. “Dark Brown,” the color of hair, is an example of qualitative data. A
number can be qualitative too: if I tell you my favorite number is 5, that is qualitative data because it is descriptive, not the result of
a measurement or mathematical calculation.
Information
Returning to the example above, if I told you that “45, 32, 41, 75” are the numbers of students that had registered for upcoming
classes, that would be information. By adding the context – that the numbers represent the count of students registering for specific
classes – I have converted data into information.
Once we have put our data into context, aggregated and analyzed it, we can use it to make decisions for our organization. We can
say that this consumption of information produces knowledge. This knowledge can be used to make decisions, set policies, and
even spark innovation.
The final step up the information ladder is the step from knowledge to wisdom. We can say that someone has wisdom when they
can combine their knowledge and experience to
produce a deeper understanding of a topic. It
often takes many years to develop wisdom on a
particular topic and requires patience.
What is a Database?
A database is a collection of related data. By data, we mean known facts that can be recorded and that have implicit meaning. For
example, consider the names, telephone numbers, and addresses of the people you know.
You may have recorded this data in an indexed address book or you may have stored it on a hard drive, using a personal computer
and software such as Microsoft Access or Excel. This collection of related data with an
implicit meaning is a database.
A Database Management System (DBMS) is software for storing and retrieving users’ data while applying appropriate security measures. It consists of a group of programs that manipulate the database. The DBMS accepts a request for data from an application and instructs the operating system to provide the specific data. In large systems, a DBMS helps users and other third-party software to store and retrieve data.
A DBMS allows users to create their own databases as per their requirements. The term “DBMS” covers the users of the database and the application programs. It provides an interface between the data and the software application.
Example of DBMS
Oracle
IBM DB2
Ingres
Teradata
MS SQL Server
MS Access
MySQL
Forms
Forms make entering data easier. Working with extensive tables can be confusing, and when you have connected tables, you might
need to work with more than one at a time to enter a set of data. However, with forms it's possible to enter data into multiple tables
at once, all in one place. Database
designers can even set restrictions on individual form components to ensure all of the needed data is entered in the correct format.
All in all, forms help keep data consistent and organized, which is essential for an accurate and powerful database.
Reports
Reports offer you the ability to present your data in print. If you've ever received a computer printout of a class schedule or a
printed invoice of a purchase, you've seen a database report. Reports are useful because they allow you to present components of
your database in an easy-to-read format. You can even customize a report's appearance to make it visually appealing. Access offers
you the ability to create a report from
any table or query.
Queries
Queries are a way of searching for and compiling data from one or more tables. Running a query is like asking a detailed question
of your database. When you build a query in Access, you are defining specific search conditions to find exactly the data you
want.
Queries are far more powerful than the simple searches you might carry out within a table. While a search would be
able to help you find the name of one customer at your business, you could run a query to find the name and phone number of
every customer who's made a purchase within the past week. A well-designed query can give information you might not be able to
find just by looking through the data in your tables.
DBMS Languages
A software package that enables users to define, create, maintain, and control access to the database. Its facilities include:
Data Definition Language (DDL)
Data Manipulation Language (DML)
Control access (DCL): security, integrity, concurrent access, recovery, support for data communication, etc.
Utility services: file import/export, monitoring facilities, code generator, report writer, etc.
Support for ad hoc queries
A software system that is used to create, maintain, and provide controlled access to user databases
DBMS manages data resources like an operating system manages hardware resources
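To make these language categories concrete, here is a brief hedged sketch, with invented table and user names, showing one statement from each class:
-- DDL: define a structure
CREATE TABLE Student (
Roll# CHAR(8) PRIMARY KEY,
Name VARCHAR2(50) NOT NULL
);
-- DML: manipulate the data
INSERT INTO Student (Roll#, Name) VALUES ('2019004', 'Ali');
UPDATE Student SET Name = 'Ali Khan' WHERE Roll# = '2019004';
-- DCL: control access
GRANT SELECT ON Student TO some_user;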
Any user can interact with the database, for example, application programmers and end users.
As the name shows, application programmers are the ones who write application programs that use the database. These application programs are written in programming languages such as COBOL, PL/1 (Programming Language 1), Java, or a fourth-generation language. These programs are built to meet user requirements. Retrieving information, creating new information, and changing existing information are done by these application programs. They interact with the DBMS through DML (Data Manipulation Language) calls, and all these functions are performed by generating a request to the DBMS. Without application programmers, there would be no creativity in the whole database team.
End users are those who access the database from the terminal end. They use the developed applications, and they don’t have any knowledge about the design and working of the database. These are the second class of users, and their main goal is just to get their task done.
Form-processing and report-processing applications are built by using VB, .NET, or PHP programming. Reports can be developed by using the Crystal Reports tool.
Query processing can be managed by using the vendor’s SQL tool or third-party tools such as TOAD, SQL Developer, etc.
Centralized
Processing is performed within the same physical computer. User terminals are typically “dumb”, incapable of functioning on their own, and cabled to the central computer.
File-Server
In a file-server environment, the processing is distributed about the network, typically a local area network (LAN).
File-server is connected to several workstations across a network Database resides on file-server. DBMS and applications run on
each workstation
Client-Server (2-tiers)
In relational database management systems (RDBMSs), many of which started as centralized systems, the system components that
were first moved to the client side were the user interface and application programs. Because SQL provided a standard language
for RDBMSs, this created a logical dividing point between client and server. Hence, the query and transaction functionality related
to
SQL processing remained on the server side. In such an architecture, the server is often called a query server or transaction
server because it provides these two functionalities. In an RDBMS, the server is also often called an SQL server. The user interface
programs, and application programs can run on the client side.
When DBMS access is required, the program establishes a connection to the DBMS (Which is on the server side); once the
connection is created, the client program can communicate with the DBMS. A standard called Open Database Connectivity
(ODBC) provides an application programming interface (API), which allows client-side programs to call the DBMS, as long as
both client and server machines have the necessary software installed. Most DBMS vendors provide ODBC drivers for their
systems. A client program can actually connect to several RDBMSs and send query and transaction requests using the ODBC API,
which are then processed at the server sites. Any query results are sent back to the client program, which can process and display
the results as needed. A related standard for the Java programming language, called JDBC, has also been defined.
This allows Java client programs to access one or more DBMSs through a standard interface.
A different approach to two-tier client/server architecture was taken by some object-oriented DBMSs, where the software
modules of the DBMS were divided between client and server in a more integrated way. For example, the server level may include
the part of the DBMS software responsible for handling data storage on disk pages, local concurrency control and recovery,
buffering and caching of disk pages, and other such functions. Meanwhile, the client level may handle the user interface; data
dictionary functions; DBMS interactions with programming language compilers; global query optimization, concurrency control,
and recovery across multiple servers; structuring of complex objects from the data in the buffers; and other such functions. In this
approach, the client/server interaction is more tightly coupled and is done internally by the DBMS modules—some of which reside
on the client and some on the server—rather than by the users/programmers. The exact division of functionality can vary from
system to system. In such a client/server architecture, the server has been called a data server because it provides data in disk pages
to the client. This data can then be structured into objects for the client programs by the client-side DBMS software.
The architectures described here are called two-tier architectures because the software components are distributed over two
systems: client and server. The advantages of this architecture are its simplicity and seamless compatibility with existing systems.
The emergence of the Web changed the roles of clients and servers, leading to the three-tier architecture.
Client-Server (3-tier)
Many Web applications use an architecture called the three-tier architecture, which adds an intermediate layer between the client and the database server. This intermediate layer, or middle tier, is called the application server or the Web server, depending on the application. This server plays an intermediary role by running application programs and storing business rules (procedures or constraints) that are used to access
data from the database server. It can also improve database security by checking a client’s credentials before forwarding a request
to the database server. Clients contain GUI interfaces and some additional application- specific business rules. The intermediate
server accepts requests from the client, processes the request and sends database queries and commands to the database server, and
then acts as a conduit for passing (partially) processed data from the database server to the clients, where it may be processed
further and filtered to be presented to users in GUI format. Thus, the user interface, application rules, and data access act as the
three tiers. Figure 2.7(b) shows another architecture used by database and other application package vendors. The presentation
layer displays information to the user and allows data entry. The business logic layer handles intermediate rules and constraints
before data is passed up to the user or down to the DBMS. The bottom
Layer includes all data management services. The middle layer can also act as a Web server, which retrieves query results from the
database server and formats them into dynamic Web pages that are viewed by the Web browser at the client side. Other
architectures have also been proposed. It is possible to divide the layers between the user and the stored data further into finer
components, thereby giving rise to n-tier architectures, where n may be four or five tiers. Typically, the business logic layer is
divided into multiple layers. Besides distributing programming and data throughout a network, n-tier applications afford the
advantage that any one tier can run on an appropriate processor or operating system platform and can be handled independently.
Vendors of ERP (enterprise resource planning) and CRM (customer relationship management) packages often use a middleware
layer, which accounts for the front-end modules (clients) communicating with a number of back-end databases (servers).
A conceptual data model is a model that helps to identify the highest-level relationships between the different entities, while a logical data model is a model that describes the data in as much detail as possible, without regard to how they will be physically implemented in the database.
Query Processing is the translation of high-level queries into low-level expressions. It is a stepwise process that can be used at the physical level of the file system, in query optimization, and in the actual execution of the query to get the result. It requires the basic concepts of relational algebra and file structure. It refers to the range of activities that are involved in extracting data from the database. It includes translation of queries in high-level database languages into expressions that can be implemented at the physical level of the file system. In query processing, we will actually understand how these queries are processed and how they are optimized.
Physical database design is the process of transforming logical data models into physical data models. An experienced database
designer will make a physical database design in parallel with conceptual data modeling if they know the type of database
technology that will be used.
1. The internal level has an internal schema, which describes the physical storage structure of the database. The internal schema
uses a physical data model and describes the complete details of data storage and access paths for the database.
2. The conceptual level has a conceptual schema, which describes the structure of the whole database for a community of users.
The conceptual schema hides the details of physical storage structures and concentrates on describing entities, data types,
relationships, user operations, and constraints.
Usually, a representational data model is used to describe the conceptual schema when a database system is implemented. This
implementation
conceptual schema is often based on a conceptual schema design in a high-level data model.
3. The external or view level includes a number of external schemas or user views. Each external schema describes the part of
the database that a particular user group is interested in and hides the rest of the database from that user group. As in the previous
level, each external schema is typically implemented using a representational data model, possibly based
on an external schema design in a high-level data model.
The three-schema architecture is a convenient tool with which the user can visualize the schema levels in a database system. Most
DBMSs do not separate the three levels completely and explicitly but support the three-schema architecture to some extent.
Some older DBMSs may include physical-level details in the conceptual schema.
The three-level ANSI architecture has an important place in database technology development because it clearly separates the
users’ external level, the database’s conceptual level, and the internal storage level for designing a database. It is very much
applicable in the design of DBMSs, even today. In most DBMSs that support user views, external schemas are specified in the
same data model that describes the conceptual-level information (for example, a relational DBMS like Oracle uses SQL for this).
Some DBMSs allow different data models to be used at the
conceptual and external levels. An example is Universal Data Base (UDB), a DBMS from IBM, which uses the relational model to
describe the conceptual schema, but may use an object-oriented model to describe an external schema.
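As a small sketch of an external schema in SQL (table, view, and column names are invented for this example), a view can expose only part of the conceptual schema to one user group:
-- Conceptual-level table
CREATE TABLE Employee (
Empno NUMBER(6) PRIMARY KEY,
Ename VARCHAR2(40),
Sal NUMBER(8,2),
Dno NUMBER(2)
);
-- External view: hides the Sal column from this user group
CREATE VIEW Emp_Public AS
SELECT Empno, Ename, Dno FROM Employee;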
Notice that the three schemas are only descriptions of data; the stored data that actually exists is at the physical level only. In a
DBMS based on the three-schema architecture, each user group refers to its own external schema. Hence, the DBMS must
transform a request specified on an external schema into a request against the conceptual schema, and then into a request on the
internal schema for processing over the stored database. If the request is a database retrieval, the data extracted from the stored
database must be reformatted to match the user’s external view. The processes of transforming requests and results between levels
are called mappings.
These mappings may be time-consuming, so some DBMSs—especially those that are meant to support small databases—do not
support external views. Even in such systems, however, a certain amount of mapping is necessary to transform requests between
the conceptual and internal levels.
Data Independence
The three-schema architecture can be used to further explain the concept of data independence, which can be defined as the
capacity to change the schema at one level of a database system without having to change the schema at the next higher level. We
can define two types of data independence:
Logical data independence is the capacity to change the conceptual schema without having to change external schemas or
application programs. We may change the conceptual schema to expand the database (by adding a record type or data item), to
change constraints, or to reduce the database (by removing a record type or data item).
Physical data independence is the capacity to change the internal schema without having to change the conceptual schema.
Hence, the external schemas need not be changed as well. Changes to the internal schema may be needed because some physical
files were reorganized—for example, by creating additional access structures—to improve the performance of retrieval or update.
If the same data as before remains in the database, we should not have to change the conceptual schema.
Generally, physical data independence exists in most databases and file environments where physical details such as the exact
location of data on disk, and hardware details of storage encoding, placement, compression, splitting, merging of
records, and so on are hidden from the user. Applications remain unaware of these details. On the other hand, logical data
independence is harder to achieve because it allows structural and constraint changes without affecting application programs— a
much stricter requirement.
Whenever we have a multiple-level DBMS, its catalog must be expanded to include information on how to
map requests and data among the various levels. The DBMS uses additional software to accomplish these mappings by referring to
the mapping information in the catalog. Data independence occurs because when the schema is changed at some level, the schema
at the next higher level remains unchanged; only the mapping between the two levels is changed. Hence, application programs
referring to the higher-level schema need not be changed.
The three-schema architecture can make it easier to achieve true data independence, both physical and logical. However, the two
levels of mappings create an overhead during compilation or execution of a query or program, leading to inefficiencies in the
DBMS. Because of this, few DBMSs have implemented the full three-schema architecture.
Database schema
A database schema is a blueprint or architecture of how our data will look. It doesn’t hold data itself, but instead describes the shape of the data and how it might relate to other tables or models. An entry in our database will be an instance of the database schema. It will contain all of the properties described in the schema.
Schema types
There are two main database schema types that define different parts of the schema: logical and physical. A logical database schema represents how the data is organized in terms of tables. It also explains how attributes from tables are linked together. Different schemas use a different syntax to define the logical architecture and constraints.
To create a logical database schema, we use tools to illustrate relationships between the components of our data. This is called entity-relationship modeling (ER modeling). It specifies what the relationships between entity types are.
The physical database schema represents how data is stored on disk storage. In other words, it is the actual code that will be used
to create the structure of your database. In MongoDB with mongoose, for instance, this will take the form of a mongoose model. In
MySQL, you will use SQL to construct a database with tables.
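For instance, a minimal fragment of a physical schema in SQL might look like the following (a hypothetical table, not tied to any example above):
CREATE TABLE Customer (
CustNo INT PRIMARY KEY, -- physical-level definition of the key column
Cname VARCHAR(40) NOT NULL,
City VARCHAR(30)
);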
Schema objects
A schema is a collection of schema objects. Examples of schema objects include tables, views,
sequences, synonyms, indexes, clusters, database links, procedures, and packages. This chapter explains tables, views, sequences,
synonyms, indexes, and clusters.
Schema objects are logical data storage structures. Schema objects do not have a one-to-one correspondence to physical files on
disk that store their information. However, Oracle stores a schema object logically within a tablespace of the database. The data of
each object is physically contained in one or more of the tablespace's data files. For some objects such as tables, indexes, and
clusters, you can specify how much disk space Oracle allocates for the object within the tablespace's data files.
Who is DBA?
A database administrator (DBA) is a person or group of persons who maintains a successful database environment by directing or
performing all related activities to keep the data secure. The top responsibility of a DBA professional is to maintain data integrity.
This means the DBA will ensure that data is secure from unauthorized access but is available to users.
A DBA job requires a high level of expertise by a person or group of persons.
It is rare that a single person can manage all the database system activities, so companies usually have a group of people who take care of the database system.
DBA & Databases
Network Administrator
A network administrator is a person designated in an organization whose responsibility includes maintaining computer
infrastructures with emphasis on local area networks (LANs) up to wide area networks (WANs). Responsibilities may vary
between organizations, but installing new hardware, on- site servers, enforcing licensing agreements, software-network
interactions, as well as network integrity/resilience, are some of the key areas of focus.
The network administrator coordinates with the DBA for database connections and other issues such as storage, OS, and hardware.
Some sites have one or more network administrators. A network administrator, for example, administers Oracle networking
products, such as Oracle Net Services.
Application Developers
DBA’s Tasks
DBA’s Responsibilities
Installing and upgrading the Oracle Database server and application tools
Allocating system storage and planning future storage requirements for the database system
Creating primary database storage structures (tablespaces) after application developers have designed an application
Creating primary objects (tables, views, indexes) once application developers have designed an application
Modifying the database structure, as necessary, from information given by application developers
Enrolling users and maintaining system security
Ensuring compliance with Oracle license agreements
Controlling and monitoring user access to the database
Monitoring and optimizing the performance of the database
Planning for backup and recovery of database information
Maintaining archived data on tape
Backing up and restoring the database
Contacting Oracle for technical support
Purposes
To meet the expectations of the database designer, the following are the two main purposes of physical database design for a DBA.
Before undertaking the physical database design, we must have a good idea of the intended use of the database by defining in a
high-level form the queries and transactions that are expected to run on the database. For each retrieval query, the following
information about the query would be needed:
4. The attributes on which any join conditions or conditions to link multiple tables or objects for the query are specified.
The attributes listed in items 2 and 4 above are candidates for the definition of access structures, such as indexes, hash keys, or
sorting of the file.
For each update operation or update transaction, the following information would be needed:
3. The attributes on which selection conditions for a delete or update are specified.
Again, the attributes listed in item 3 are candidates for access structures on the files, because they would be used to locate the
records that will be updated or deleted. On the other hand, the attributes listed in item 4 are candidates for avoiding an access
structure, since modifying them will require updating the access structures.
Besides identifying the characteristics of expected retrieval queries and update transactions, we must consider their expected rates
of invocation. This frequency information, along with the attribute information collected on each query and transaction, is used to
compile a cumulative list of the expected frequency of use for all queries and transactions. This is expressed as the expected
frequency of using each attribute in each file as a selection attribute or a join attribute, over all the queries and transactions.
Generally, for large volumes of processing, the informal 80–20 rule can be used: approximately 80 percent of the processing is
accounted for by only 20 percent of the queries and transactions. Therefore, in practical situations, it is rarely necessary to collect
exhaustive statistics and invocation rates on all the queries and transactions; it is sufficient to determine the 20 percent or so most
important ones.
Some queries and transactions may have stringent performance constraints. For example, a transaction may have the constraint that
it should terminate within 5 seconds on 95 percent of the occasions when it is invoked, and that it should never take more than 20
seconds. Such timing constraints place further priorities on the attributes that are candidates for access paths. The selection
attributes used by queries and transactions with time constraints become higher-priority candidates for primary access structures for
the files because the primary access structures are generally the most efficient for locating records in a file.
A minimum number of access paths should be specified for a file that is frequently updated, because updating the access paths
themselves slows down the update operations. For example, if a file that has frequent record insertions has 10 indexes on 10
different attributes, each of these indexes must be updated whenever a new record is inserted. The overhead for updating 10
indexes can slow down the insert operations.
Attributes whose values are required in equality or range conditions (selection operation), and attributes that are keys or that participate in join conditions (join operation), require access paths, such as indexes.
The performance of queries largely depends upon what indexes or hashing schemes exist to expedite the processing of selections
and joins. On the other hand, during insert, delete, or update operations, the existence of indexes adds to the overhead. This
overhead must be justified in terms of the gain in efficiency by expediting queries and transactions. The physical design decisions
for indexing fall into the following categories:
1. Whether to index an attribute. The general rules for creating an index on an attribute are that the attribute must either be a
key (unique), or there must be some query that uses that attribute either in a selection condition (equality or range of values) or in a
join condition. One reason for creating multiple indexes is that some operations can be processed by just scanning the indexes,
without having to access the actual data file.
2. What attribute or attributes to index on. An index can be constructed on a single attribute, or on more than one attribute if it is a composite index. If multiple attributes from one relation are involved together in several queries (for example, (Garment_style_#, Color) in a garment inventory database), a multiattribute (composite) index is warranted. The ordering of attributes within a multiattribute index must correspond to the queries. For instance, the above index assumes that queries would be based on an ordering of colors within a Garment_style_# rather than vice versa (a sketch of such an index follows below).
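A hedged sketch of this composite index, writing the style attribute as Garment_style_no since # is not portable in every SQL dialect:
CREATE INDEX idx_style_color
ON GARMENT_INVENTORY (Garment_style_no, Color);
-- Matches queries that select a style and then a color within it:
SELECT * FROM GARMENT_INVENTORY
WHERE Garment_style_no = 1234 AND Color = 'Blue';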
3. Whether to set up a clustered index. At most, one index per table can be a primary or clustering index, because this implies
that the file be physically ordered on that attribute. In most RDBMSs, this is specified by the keyword CLUSTER. (If the attribute
is a key, a primary index is created, whereas a clustering index is created if the attribute is not a key.) If a table requires several
indexes, the decision about which one should be the primary or clustering index depends upon whether keeping the table ordered
on that attribute is needed. Range queries benefit a great deal from clustering. If several attributes require range queries, relative
benefits must be evaluated before deciding which attribute to cluster on. If a query is to be answered by doing an index search only
(without retrieving data records), the corresponding index should not be clustered, since the main benefit of clustering is achieved
when retrieving the records themselves. A clustering index may be set up as a multiattribute index if range retrieval by that
composite key is useful in report creation (for example, an index on Zip_code, Store_id, and Product_id may be a clustering index
for sales data).
4. Whether to use a hash index over a tree index. In general, RDBMSs use B+-trees for indexing. However, ISAM and hash indexes are also provided in some systems (see Chapter 18). B+-trees support both equality and range queries on the attribute used as the search key. Hash indexes work well with equality conditions, particularly during joins to find matching records, but they do not support range queries.
5. Whether to use dynamic hashing for the file. For files that are very volatile—that is, those that grow and shrink continuously—dynamic hashing would be suitable.
Tuning is the process of continually revising/adjusting the physical database design by monitoring resource utilization as well as internal DBMS processing to reveal bottlenecks such as contention for the same data or devices.
Tuning Indexes
The initial choice of indexes may have to be revised for the following reasons:
Certain queries may take too long to run for lack of an index.
Certain indexes may not get utilized at all.
Certain indexes may undergo too much updating because the index is on an attribute that undergoes frequent changes.
Most DBMSs have a command or trace facility, which can be used by the DBA to ask the system to show how a query was
executed—what operations were performed in what order and what secondary access structures (indexes) were used. By analyzing
these execution plans, it is possible to diagnose the causes of the above problems.
Some indexes may be dropped and some new indexes may be created based on the tuning analysis.
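A sketch of acting on such an analysis, with invented index names:
-- An index the trace shows is never used: drop it
DROP INDEX idx_emp_bdate;
-- A slow query selecting on Dno lacks an index: create one
CREATE INDEX idx_emp_dno ON EMPLOYEE (Dno);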
The goal of tuning is to dynamically evaluate the requirements, which sometimes fluctuate seasonally or during different times of
the month or week, and to reorganize the indexes and file organizations to yield the best overall performance. Dropping and
building new indexes is an overhead that can be justified in terms of performance improvements. Updating of a table is generally
suspended while an index is dropped or created; this loss of service must be accounted for. Besides dropping or creating indexes
and changing from a nonclustered to a clustered index and vice versa, rebuilding the index may improve performance. Most
RDBMSs use B+-trees for an index. If there are many deletions on the index key, index pages may contain wasted space, which
can be reclaimed during a rebuild operation. Similarly, too many insertions may cause overflows in a clustered index that affect
performance. Rebuilding a clustered index amounts to reorganizing the entire table ordered on that key.
The available options for indexing and the way they are defined, created, and reorganized varies from system to system. As an
illustration, consider the sparse and dense indexes. A sparse index such as a primary index will have one index pointer for each
page (disk block) in the data file; a dense index such as a unique secondary index will have an index pointer for each record.
Sybase provides clustering indexes as sparse indexes in the form of B+-trees, whereas INGRES provides sparse clustering indexes
as ISAM files and dense clustering indexes as B+-trees. In some versions of Oracle and DB2, the option of setting up a clustering
index is limited to a dense index (with many more index entries), and the DBA has to work with this limitation.
Storage statistics: Data about allocation of storage into tablespaces, index spaces, and buffer pools.
I/O and device performance statistics: Total read/write activity (paging) on disk extents and disk hot spots.
Query/transaction processing statistics: Execution times of queries and transactions, and optimization times during query
optimization.
Locking/logging related statistics: Rates of issuing different types of locks, transaction throughput rates, and log record activity.
Index statistics: Number of levels in an index, number of noncontiguous leaf pages, and so on.
The number of times a particular query or transaction is submitted/executed in an interval of time.
The times required for different phases of query and transaction processing.
Problems in Tuning
How to avoid excessive lock contention, thereby increasing concurrency among transactions.
How to minimize the overhead of logging and unnecessary dumping of data.
How to optimize the buffer size and scheduling of processes.
How to allocate resources such as disks, RAM, and processes for most efficient utilization.
Most of the previously mentioned problems can be solved by the DBA by setting appropriate physical DBMS parameters,
changing configurations of devices, changing operating system parameters, and other similar activities. The solutions tend to be
closely tied to specific systems. The DBAs are typically trained to handle these tuning problems for the specific DBMS.
For a given set of tables, there may be alternative design choices, all of which achieve 3NF or BCNF. We illustrated alternative equivalent designs; one normalized design may be replaced by another.
A relation of the form R(K, A, B, C, D, ...)—with K as a set of key attributes—that is in BCNF can be stored in multiple tables that are also in BCNF—for example, R1(K, A, B), R2(K, C, D), R3(K, ...)—by replicating the key K in each table. Such a process
is known as vertical partitioning. Each table groups sets of attributes that are accessed together. For example, the table
EMPLOYEE(Ssn, Name, Phone, Grade, Salary) may be split into two tables: EMP1(Ssn, Name, Phone) and EMP2(Ssn, Grade,
Salary). If the original table has a large number of rows (say 100,000) and queries about phone numbers and salary information are
totally distinct and occur with very different frequencies, then this separation of tables may work better.
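A minimal sketch of this vertical split in SQL, with assumed column types; in practice both tables would be populated from the original EMPLOYEE table:
CREATE TABLE EMP1 (
Ssn CHAR(9) PRIMARY KEY,
Name VARCHAR2(40),
Phone VARCHAR2(15)
);
CREATE TABLE EMP2 (
Ssn CHAR(9) PRIMARY KEY, -- the key K replicated in each table
Grade NUMBER(2),
Salary NUMBER(8,2)
);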
Attribute(s) from one table may be repeated in another even though this creates redundancy and a potential anomaly. For example, Part_name may be replicated in tables wherever Part# appears (as a foreign key), but there may be one master table called PART_MASTER(Part#, Part_name, ...) where the Part_name is guaranteed to be up to date.
Just as vertical partitioning splits a table vertically into multiple tables, horizontal partitioning takes horizontal slices of a table and
stores them as distinct tables. For example, product sales data may be separated into ten tables based on ten product lines. Each
table has the same set of columns (attributes) but contains a distinct set of products (tuples). If a query or transaction applies to all
product data, it may have to run against all the tables and the results may have to be combined.
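As a sketch, a query over all product data would then combine the slices, shown here for just two of the ten invented product-line tables:
SELECT s.Product_id, SUM(s.Amount)
FROM ( SELECT Product_id, Amount FROM SALES_LINE1
UNION ALL
SELECT Product_id, Amount FROM SALES_LINE2 ) s -- ... and the remaining product-line tables
GROUP BY s.Product_id;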
Tuning Queries
Some typical instances of situations prompting query tuning include the following:
1. Many query optimizers do not use indexes in the presence of arithmetic expressions (such as Salary/365 > 10.50), numerical
comparisons of attributes of different sizes and precision (such as Aqty = Bqty where Aqty is of type
INTEGER and Bqty is of type SMALLINTEGER), NULL comparisons (such as Bdate IS NULL), and substring comparisons
(such as Lname LIKE ‘%mann’).
2. Indexes are often not used for nested queries using IN; for example, the following query:
SELECT Ssn
FROM EMPLOYEE
WHERE Dno IN ( SELECT Dnumber
FROM DEPARTMENT
WHERE Mgr_ssn = ‘333445555’ );
may not use the index on Dno in EMPLOYEE, whereas using Dno = Dnumber in the WHERE-clause with a single-block query may cause the index to be used.
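For reference, a sketch of the single-block form just described, which is more likely to use the index on Dno:
SELECT Ssn
FROM EMPLOYEE, DEPARTMENT
WHERE Dno = Dnumber AND Mgr_ssn = '333445555';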
3. Some DISTINCTs may be redundant and can be avoided without changing the result. A DISTINCT often causes a sort
operation and must be avoided as much as possible.
4. Unnecessary use of temporary result tables can be avoided by collapsing multiple queries into a single query unless the
temporary relation is needed for some intermediate processing.
5. In some situations involving the use of correlated queries, temporaries are useful. Consider the following query, which
retrieves the highest paid employee in each department:
SELECT Ssn
FROM EMPLOYEE E
WHERE Salary = ( SELECT MAX(Salary)
FROM EMPLOYEE M
WHERE M.Dno = E.Dno );
This has the potential danger of searching all of the inner EMPLOYEE table M for each tuple from the outer EMPLOYEE table E.
To make the execution more efficient, the process can be broken into two queries, where the first query just computes the
maximum salary in each department as follows:
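Since the query text is abridged here, the following is a hedged sketch of the two-query version, with MAX_SALARIES as an invented name for the temporary table:
CREATE TABLE MAX_SALARIES AS
SELECT Dno, MAX(Salary) AS Max_sal
FROM EMPLOYEE
GROUP BY Dno;
SELECT E.Ssn
FROM EMPLOYEE E, MAX_SALARIES M
WHERE E.Dno = M.Dno AND E.Salary = M.Max_sal;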
6. If multiple options for a join condition are possible, choose one that is supported by a useful index. For example, when joining EMPLOYEE and STUDENT, it is better to use EMPLOYEE.Ssn = STUDENT.Ssn as a join condition rather than EMPLOYEE.Name = STUDENT.Name if Ssn has a clustering index in one or both tables.
7. One idiosyncrasy with some query optimizers is that the order of tables in the FROM-clause may affect the join processing. If
that is the case, one may have to switch this order so that the smaller of the two relations is scanned and the larger relation is used
with an appropriate index.
8. Some query optimizers perform worse on nested queries compared to their equivalent unnested counterparts. There are four
types of nested queries:
Of the four types above, the first one typically presents no problem, since most query optimizers evaluate the inner query once.
However, for a query of the second type, such as the example in item 2, most query optimizers may not use an index on Dno in
EMPLOYEE. However, the same optimizers may do so if the query is written as an unnested query. Transformation of correlated
subqueries may involve setting temporary tables. Detailed examples are outside our scope here.
9. Finally, many applications are based on views that define the data of interest to those applications. Sometimes, these views
become overkill, because a query may be posed directly against a base table, rather than going through a view that is defined by a
JOIN.
Concepts of Keys
A key is a combination of one or more columns that is used to identify rows in a relation
A key in DBMS is an attribute or a set of attributes that help to uniquely identify a tuple (or row) in a relation (or table). Keys are
also used to establish relationships between the different tables and columns of a relational database. Individual values in a key are
called key values.
A composite key is a key that consists of two or more columns
A record is differentiated from other records based on columns called key columns.
Suppose someone searches for an account balance by CNIC because the account number is not provided. The search returns two account numbers, one for a single account and one for a joint account, raising the question of exactly which balance record is needed.
Super Keys
A super key is a combination of columns that uniquely identifies any row within a relational database management system
(RDBMS) table.
In a real database we don't need values for all of those columns to identify a row
A candidate key is a key that determines all of the other columns in a relation.
Candidate key columns help narrow a search down to fewer duplicated, or unique, records.
Examples
A product can be anything like biscuits, stationery, books, etc. In the PRODUCT relation:
Prod# is a candidate key; Prod_Name is also a candidate key.
In ORDER_PROD:
(OrderNumber, Prod#) is a candidate key. (A candidate key not chosen as the primary key can be called an alternate key.)
Candidate Key Example: Invoice (Buying Items)
Items are sold to customers.
A customer buys items with an invoice.
The same items can be sold on many invoices.
Invoice# and Item# will identify the exact record; what other columns are required?
If someone tells you only Inv# and Qty, can you find the exact product? Therefore, different candidate keys are used in different organizations, e.g.:
For a bank as an enterprise: AccountHolder (or Customer)
ACC#, Fname, Lname, DOB, CNIC#, Addr, City, TelNo, Mobile#, DriveLic#
For NADRA: Citizen (CNIC#, Fname, Lname, FatherName, DOB, OldCNIC#, PAddr, PCity, TAddr, TCity, TelNo, Mobile#)
Candidate key is used for the searching purposes in the logical and conceptual database system.
cname, address, and city can individually be duplicated, so none of them can determine a record. The following combinations distinguish customer records or tuples:
{cname, telno}, {natid}, {natid, cname}
Since {natid} is a proper subset of {natid, cname}, the combination {natid, cname} is not minimal and hence not a candidate key, while {natid} is a candidate key.
Example-3:
Employee(empno, name, birth_date, address, city, telno, citizenship_id)
empno, telno, citizenship_id are possible candidate keys.
Exercises
A primary key is a candidate key selected as the primary means of identifying rows in a relation.
Characteristics of Primary Key:
There is one and only one primary key per relation
The primary key is NOT NULL, UNIQUE
The ideal primary key is short, numeric (or alpha), indexed, fixed-length, and never changes
A key that is owned by the enterprise itself
The primary key may be a composite key
This means that no subset of the primary key is sufficient to provide unique identification of tuples. NULL values are not allowed in primary key attributes.
We will now discuss how to identify the primary key in different examples.
Example-1:
B# is a primary key. Although BName is unique and not null, it is not short.
This relation holds personal details. There is a chance that cname is duplicated, and some customers may have citizenid and telno as null. This forces us to introduce a new attribute, such as cust#, to serve as the primary key.
Customer (cust#, cname, citizenid, address, city, telno)
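A sketch of this relation in SQL, with assumed column types:
CREATE TABLE Customer (
cust# NUMBER(6) PRIMARY KEY, -- introduced key: NOT NULL and UNIQUE by definition
cname VARCHAR2(40) NOT NULL,
citizenid VARCHAR2(15), -- may be NULL, so unusable as the PK
address VARCHAR2(60),
city VARCHAR2(30),
telno VARCHAR2(15)
);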
Example-4:
In this topic we discuss more examples, because we need to analyze further real-life issues about how to decide on a well-formatted and well-organized primary key.
As we all know, different organizations use various primary keys to express their views. For example,
Indexing is a way to optimize the performance of a database by minimizing the number of disk accesses required when a query is processed. It is a data structure technique which is used to quickly locate and access the data in a database. The DBMS automatically sets up an index on the primary key.
In this topic we discuss different formats and styles of the primary key.
Basics of Indexing:
This figure indicates that when indexing is used, data is accessed more quickly than when indexing is not used.
Roll# is issued by the university or college in which the students are studying.
Its format or instance (the shape of Roll#) makes it a meaningful column for the PK. Roll# needs to be introduced by the university or college (the enterprise) in which the students are studying. It is not wise for Roll# to be a plain serial number, because there are many levels, classes, and courses; as a PK, Roll# should have some format or meaning rather than being a serial number.
DBMS supplied
Short, numeric and never changes – an ideal primary key!
Has artificial values that are meaningless to users
Normally hidden in forms and reports
Example
RENTAL_PROPERTY
(Street, City, State/Province, Zip/PostalCode, Country, Rental_Rate)
Needs attributes?
Example 2: InsurancePaid
Normally insurance is paid every year in advance. Therefore, the paid year is a well-defined artificial key, because by standard criteria insurance is managed on a yearly basis.
A continuous visit# across all patients will be hard to remember once it grows beyond 100.
Solving Example-1
Now we must choose whether invoice# should be handled as a surrogate key or not (when we are buying something from a cash-and-carry store).
Let us choose.
A surrogate key is called a factless key, as it is added just for our ease of identifying unique values and contains no relevant fact (or information) that is useful for the table.
It contains a unique value for every record of the table.
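One common way to generate such factless values is a DBMS-supplied sequence, sketched here in Oracle-style SQL with invented names:
CREATE SEQUENCE invoice_seq START WITH 1 INCREMENT BY 1;
-- Each new invoice row gets the next meaningless, unique value
INSERT INTO Invoice (inv#, inv_date)
VALUES (invoice_seq.NEXTVAL, SYSDATE);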
Solving Example-2
Let us discuss
o Collect all the necessary information about an event (booked by, client name, payment method, etc.)
Which columns are required?
Lesson 18: Surrogate Keys Examples - IV
Consider Facebook, blogs, and Twitter: if we want to store their data in a database, how do we decide which ID should be assigned?
Solving Example-2
How do we decide which columns are required to keep track of installments?
o Columns that we need: paid amount, due amount, balance, paid date, due date, penalty, and status.
Whoever took the loan is identified by some Loan#, then paid date, paid amount, due date, due amount, balance.
Comparisons of keys
Let us discuss
The primary key is the minimal set of attributes which uniquely identifies any row of a table. The primary key cannot have
a NULL value. It cannot have duplicate values.
A unique key is a key that has a unique value and is used to prevent duplicate values in a column. A unique key can have a NULL value, which is not allowed in a primary key.
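The difference can be seen in a small sketch (a hypothetical table):
CREATE TABLE Person (
pid NUMBER(6) PRIMARY KEY, -- NOT NULL and UNIQUE are both enforced
cnic VARCHAR2(15) UNIQUE -- unique when present, but NULL is allowed
);
INSERT INTO Person VALUES (1, '35201-1234567-1'); -- ok
INSERT INTO Person VALUES (2, NULL); -- ok: a UNIQUE column may be NULL
-- INSERT INTO Person VALUES (NULL, '35201-0000000-1'); -- rejected: a PK cannot be NULL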
1.
a. System date
b. time stamp
c. Random alphanumeric string
Definition:
A foreign key is an attribute that refers to a primary key of the same or a different relation to form a link (constraint) between the relations:
Example-1
In this example, DeptID in the Employee table is the foreign key; it refers to the primary key in the Department table.
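A sketch of this link in SQL, with assumed column types:
CREATE TABLE Department (
DeptID NUMBER(2) PRIMARY KEY,
Dname VARCHAR2(30)
);
CREATE TABLE Employee (
EmpNo NUMBER(6) PRIMARY KEY,
Ename VARCHAR2(40),
DeptID NUMBER(2) REFERENCES Department (DeptID) -- the foreign key
);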
In this topic, we discuss more examples of foreign keys. In the last topic, we covered the characteristics of a foreign key.
Relationship details:
A relationship between two entities of a similar entity type is called a recursive relationship. Here the same entity type participates more than once in a relationship type, with a different role for each instance. In other words, a relationship is usually between occurrences of two different entity types; however, the same entity type can participate on both sides of the relationship. This is termed a recursive relationship.
The figure represents the significance of a recursive relationship.
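A sketch of mapping a recursive relationship to SQL, where an employee's supervisor is another employee (column names invented):
CREATE TABLE Employee (
EmpNo NUMBER(6) PRIMARY KEY,
Ename VARCHAR2(40),
MgrNo NUMBER(6) REFERENCES Employee (EmpNo) -- same entity type, in the supervisor role
);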
A referential integrity constraint is a statement that limits the values of the foreign key to those already existing as primary key
values in the corresponding relation
3. Recursive Relationship
Integrity rules
Referential with cascade
Integrity Example
Detail:
The entity integrity constraint states that no primary key value can be NULL. This is because the primary key value is used to
identify individual tuples in a relation. Having NULL values for the primary key implies that we cannot identify some tuples. For
example, if two or more tuples had NULL for their primary keys, we may not be able to distinguish them if we try to reference
them from other relations. Key constraints and entity integrity constraints are specified on individual relations. (Reference:
Database Systems (FDS), by Ramez Elmasri and Shamkant Navathe, Addison Wesley, 6th Edition.)
Referential integrity
Foreign keys must match candidate key of source table Foreign keys in some cases can be null
The database must not contain any unmatched foreign key values. If B references A, A must exist.
Detail:
The referential integrity constraint is specified between two relations and is used to maintain the consistency among tuples in the
two relations. Informally, the referential integrity constraint states that a tuple in one relation that refers to another relation must
refer to an existing tuple in that relation. (Reference: Database Systems (FDS), by Ramez Elmasri and Shamkant Navathe,
Addison Wesley, 6th Edition.)
When we update or delete a tuple in a table, say S, that is referenced by another table, say SP, there are similar choices of referential actions for update and delete:
ON UPDATE CASCADE
ON UPDATE RESTRICT
There could be other choices besides these, e.g.:
ON UPDATE SET DEFAULT
ON UPDATE SET NULL
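A hedged sketch of declaring such actions on the S and SP tables mentioned above, following the SQL standard (products differ; Oracle, for instance, supports ON DELETE CASCADE and ON DELETE SET NULL but no ON UPDATE actions):
CREATE TABLE SP (
Sno CHAR(5),
Pno CHAR(5),
Qty INTEGER,
PRIMARY KEY (Sno, Pno),
FOREIGN KEY (Sno) REFERENCES S (Sno)
ON UPDATE CASCADE -- propagate key changes in S to SP
ON DELETE CASCADE -- alternatives: RESTRICT, SET NULL, SET DEFAULT
);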
Integrity Example
Composite Keys
A combination of selected attributes acting together as the primary key of a relation gives the concept of a composite key. A composite key is an extended form of a primary key; all the characteristics of a PK apply to a composite key comprising more than one column.
During an ATM transaction, amounts can be drawn several times on one ATM card. We need to consider the following issues:
How much amount will be withdrawn? When was the amount drawn?
Which machine has been used? What is the type of transaction?
Card#, Amount, DrawDate, Machine#, TransType are the attributes for a transaction.
Can we say Card# will be a primary key? NO. Can DrawDate be a key with Card#? NO.
Then what to do? Add a surrogate key, which is TransNo:
ATMTransaction(Card#, TransNo, Amount, DrawDate, Machine#, TransType)
Date includes time in seconds as well.
Card(Card#, CardHolderName, ExpiryDate, IssueDate, Acc#)
ATMTransaction(Card#, TransNo, Amount, DrawDate, Machine#, TransType)
Answer the following questions.
Do we need the most closely related table? What are the PK and FK?
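A hedged sketch of one answer, with assumed column types: Card is the closely related table we need, (Card#, TransNo) is the composite PK, and Card# is the FK:
CREATE TABLE Card (
Card# CHAR(16) PRIMARY KEY,
CardHolderName VARCHAR2(40),
ExpiryDate DATE,
IssueDate DATE,
Acc# CHAR(12)
);
CREATE TABLE ATMTransaction (
Card# CHAR(16) REFERENCES Card (Card#), -- FK to the parent Card table
TransNo NUMBER(6),
Amount NUMBER(10,2),
DrawDate DATE, -- Oracle DATE includes the time
Machine# CHAR(6),
TransType CHAR(1),
PRIMARY KEY (Card#, TransNo) -- composite key
);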
Hospital Related Example
Card(Card#, CardHolderName, ExpiryDate, IssueDate, Acc#)
ATMTransaction(ID, Card#, TransNo, Amount, DrawDate, Machine#, TransType)
Answer the following questions.
What is a drawback of using ID as a surrogate key in ATMTransaction? What are the PK and FK?
CrsOffer(SemID, CrsID, Sec, InstrName, B#, R#) What to do for keys: PK or composite key?
CrsOffer(SemID, CrsID, Sec, InstrName, B#, R#)
CrsReg(SemID, Roll#, CrsID, Sec, TotMarks, Grade)
Preparing Dataset for Composite Keys
CrsOffer(SemID, CrsID, Sec, InstrName, Building#, Room#)
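A sketch of these composite keys in SQL, with assumed column types: CrsOffer takes (SemID, CrsID, Sec) as its composite PK, and CrsReg references it with a composite FK:
CREATE TABLE CrsOffer (
SemID CHAR(5),
CrsID CHAR(6),
Sec CHAR(1),
InstrName VARCHAR2(40),
Building# CHAR(3),
Room# CHAR(4),
PRIMARY KEY (SemID, CrsID, Sec)
);
CREATE TABLE CrsReg (
SemID CHAR(5),
Roll# CHAR(8),
CrsID CHAR(6),
Sec CHAR(1),
TotMarks NUMBER(3),
Grade CHAR(2),
PRIMARY KEY (SemID, Roll#, CrsID),
FOREIGN KEY (SemID, CrsID, Sec) REFERENCES CrsOffer (SemID, CrsID, Sec)
);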
Meaningful learning-1
Draw dataset instance values
Meaningful learning-2
Indexing columns
Lesson 27: Additional or Other Constraints
Other constraints
• What are additional or other constraints?
• Null
• Domain constraint
CHECK constraints
The check clause in SQL permits domains to be restricted:
Use check clause to ensure that an hourly-wage domain allows only values greater than a specified value.
create domain hourly_wage numeric(5,2)
constraint value_test check (value >= 4.00);
The domain hourly_wage is declared to be a decimal number with 5 digits, 2 of which are after the decimal point.
create table account (
branch_name varchar(15),
acc_number char(10) not null,
balance integer
……);
alter table account
add constraint ck1 check (balance > 0);
alter table emp
add constraint ck1_deptno check (deptno in (10,20,30));
UNIQUE & Default constraints
create table Dependent (
empno CHAR(6),
SNO Number(2),
DepName varchar2(25) NOT NULL,
DOB Date,
BloodGroup VARCHAR2(5),
RelType CHAR(1) Default ‘W’);
alter table dependent
add constraint pk_dep_empnoSNO primary key (empno, sno);
alter table dependent
add constraint uniq_dependent unique (empno, DepName);
Composite UNIQUE constraints
• Super Key
• Candidate Key
• Primary Key (PK)
• Surrogate Key
• Foreign Key (FK)
• Composite Key
• Additional constraints are
A super key describes concepts to find unique records; it can be null and indexed.
A candidate key gives fewer records in a search. Candidate keys can be used for searching purposes.
{CNIC}, {Roll#} and {MedName} are candidate keys
Primary Key
Gives a unique record, with the conditions NOT NULL, UNIQUE, and INDEXED.
Fixed-length, owned by the enterprise, and not frequently changed; a descriptive/well-defined key, e.g., Roll#, Empno, Item#, PatientID, etc.
2019004, 16K-1005 etc.
Surrogated Key
Just like a PK, but it is an artificial key, decided by the designer from the set of attributes and meaningless to the user.
Foreign Key
A referencing key, also called a referential integrity constraint:
Dept(DNO, Dname, Loc)
Emp(Empno, Ename, Sal, DOB, DNO)
Additional Constraints
Item Table
Create table Item (
ItemID CHAR(5),
ItemName VARCHAR2(50) NOT NULL,
PPrice NUMBER(10,2),
TotQty NUMBER(5),
Status CHAR(1)
);
Alter table Item
add constraint PK_ItemID primary key (ItemID);
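Following the same pattern, hedged examples of further constraints on this table (the status code 'A' is an assumption for this sketch) might be:
Alter table Item
add constraint uq_item_name unique (ItemName);
Alter table Item
add constraint ck_item_qty check (TotQty >= 0);
Alter table Item
modify Status default 'A'; -- assumed default status code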
Semantic Modeling
• A class of data models
• Conveys semantic meaning
• Implements databases more intelligently
• Support more sophisticated user interfaces (SQL queries, forms & reports)
• Research started in 1970s and early 1980s
• Other names are data modeling, E/R modeling, E-modeling
Chen ER Model
• Chen in 1976 introduced the E/R model
• The E/R model allows us to sketch the design of a database informally.
• Designs are pictures called entity-relationship diagrams.
• Fairly mechanical ways to convert E/R diagrams to real implementations like relational databases exist.
System Development & DBA
Database Development
🞭 Requirements Analysis
🞤 Collect and Analyze the requirements of the users, e.g., Functional Specification, Prototyping.
🞭 Conceptual Design
🞤 Design a conceptual model (schema), e.g., ER model, ER Mapping.
🞭 Logical Design
🞤 Translate the ER model into a relational model (schema), DDL Scripts.
🞤 Normalization
🞭 Database Building
🞤 Build the database and write application programs for specific DBMS & S/W Appls/ Tools.
🞭 Operation, Maintenance & Tuning
🞤 Use/ Training, Installations, maintain and “tune” the database.
Conceptual Model
Conceptual Design
Lesson 34: ER Diagram & Attributes displays
Attributes Display
• For Logical Design
• Symbols/ Notations
• Motivational Example
• Entity or Entity Type
• Simple or Single Attributes
• Composite Attributes
• Multi valued Attributes
• Derived or Stored Attributes
• Complex Attributes
Normalization
• To ensure you have a “good” design (Testing of your model).
Motivational Example
Consider building a course management system (CMS):
Students
Courses
Professors
Symbols/ Notations
Entity Employee
Underlined are Key Identifiers
PKs and FKs are not shown in an ER diagram
Entity Department
• Underlined are Key Identifiers
• PKs and FKs are not shown in an ER diagram
• Double oval represents multi-valued
Entity Project
An entity type PROJECT has attributes:
Department
Employee
Manager/ Boss??
Example-1 (one-many)
Any employee has his/her own department, where he/she works.
One department has many employees, or
an employee belongs to a department.
Example-2 (one-many)
Any employee has his/her own department, where he/she works.
Any one department has at least one or many employees.
An employee belongs to a department.
Example-3 (one-many)
Example-4 (one-one)
Exercise
Combine the ER diagrams of both one-to-one and one-to-many relationships; the entities are Department & Employee.
Lesson 37: ER Diagrams with Relationships Many to Many
Attributes Display
Example-1 (M-N)
Example-2 (M-N)
Example-3 (M-N)
Example-4 (M-N)
Exercise
Example-1 (many-many)
Any employee has his/her own department, where he/she works as a full-time employee.
Additionally, an employee works in a project with a number of hours and a rate per hour.
Example-2 (many-many)
Example-3 (many-many)
Every course is offered with books as its recommended and reference books.
Any book can be referenced in many courses.
Example-4 (many-many)
The diagram below is no longer valid if a student re-enrolls in the course again.
Enrollment requires the semester as well.
Exercise
Exercise-4 needs to be drawn correctly, using a semester entity after course offering. Such a diagram is called a ternary ER relationship.
An entity whose existence depends on some other entity type is called a weak entity type. Otherwise it is called a strong entity
type.
There is a chance of duplicate records on non-key attributes in weak entity types.
A child entity depends on its parent entity; the parent entity identifies the child entity.
Weakness can only be reduced, not eliminated.
Example-1
EMPLOYEE(ENO, Name, Gender, DOB, Addr, City, CNIC) Dependent(ENO, SNO, Name, Gender, DOB, RelType)
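A minimal DDL sketch of this mapping (data types and sizes are illustrative assumptions, not part of the handout):
CREATE TABLE Employee (
  ENO    NUMBER(6)     PRIMARY KEY,
  Name   VARCHAR2(50)  NOT NULL,
  Gender CHAR(1),
  DOB    DATE,
  Addr   VARCHAR2(100),
  City   VARCHAR2(30),
  CNIC   VARCHAR2(15)
);
-- Weak entity: its key combines the parent key (ENO) with the partial key (SNO)
CREATE TABLE Dependent (
  ENO     NUMBER(6),
  SNO     NUMBER(2),
  Name    VARCHAR2(50),
  Gender  CHAR(1),
  DOB     DATE,
  RelType VARCHAR2(10),
  CONSTRAINT PK_Dependent PRIMARY KEY (ENO, SNO),
  CONSTRAINT FK_Dep_Emp FOREIGN KEY (ENO) REFERENCES Employee(ENO)
);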
Exercise
Our ER model gives relational schema tables similar to the above. Applying knowledge yields doable artifacts (implementation). Each mapped table will have fewer records.
Records should be referenced with the possible variations in tables.
Mapping of Schema
Lesson 40: Oracle DBMS
Oracle DBMS
Definition
Relational Model
Relational DB Elements
How is data organized?
RDBMS Operations
Oracle DBMS Quick History
Definition
Database Management System (DBMS) is a software package that controls the storage, organization, and retrieval of data. A DBMS has the following elements:
Relational Model
E. F. Codd defined the relational model based on mathematical theory. The relational model has the following major aspects:
• Structures (Well-defined objects store or access the data)
• Operations (data manipulation)
• Integrity rules (constraints or restrictions)
RDBMS Operations
• Logical operations
An application specifies what content is required. For example, an application requests an employee name or adds an employee record to a table. This is the execution of DDL in an application or against the schema of the DBMS.
• Physical operations
RDBMS determines how things should be done and carries out the operation. For example, after an application queries a table,
the database may use an index to find the requested rows and read the data into memory etc. before returning result.
Both operations are independent of each other.
Oracle DBMS Quick History
Lesson 41: Oracle Schema Objects & Data Access
Oracle Schema Objects & Data Access
Schema Objects
Types of Schema Objects
Tables
Indexes
Tables & Indexes in a Schema
Views, Sequences & Synonym
Data Access
Schema Objects
Physical data storage is independent from logical data structures
Schema is a collection of data structures, or schema objects
A schema is owned by a database user and has the same name as that user.
Schema objects such as tables, views, indexes and sequences etc. are user-created structures that directly refer to the data (files)
in the database.
DBMS-created objects are data dictionary views, users' roles, etc.
Tables
• Basic storage unit or schema object for storing data in structured form
• A table consists of rows and columns, with specific data types for the columns
• A table can have a maximum of 255 columns and an unlimited number of rows (tuples)
• Every table is uniquely identified in a schema
• Integrity constraints such as NOT NULL, CHECK and PKs etc. are applied on table's columns
Indexes
• An index is an optional data structure that can be created on one or more columns of a table.
• Index can increase performance of data retrieval
• Index can be created on searching columns
• Indexes are physically and logically independent
Views, Sequences & Synonyms
• Stored queries are called views. Views do not keep separate data; rather, they extract data from tables
• A sequence is a user-created object that can be shared by multiple users. It can be used for a PK column
• A synonym is an alias for another schema object and does not require separate storage
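Hedged examples of creating these schema objects (assuming the classic EMP table from the SCOTT sample schema):
CREATE INDEX emp_ename_idx ON emp(ename);                -- index on a searching column
CREATE VIEW emp_dept10 AS
  SELECT empno, ename FROM emp WHERE deptno = 10;        -- stored query; no separate data kept
CREATE SEQUENCE emp_seq START WITH 1000 INCREMENT BY 1;  -- can feed a PK column
CREATE SYNONYM staff FOR emp;                            -- alias; no separate storage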
Transaction Management
Multi-user Environment
Oracle Database is designed as a multiuser database. The database must ensure that multiple users can work concurrently without corrupting one another's data.
Transactions
SQL is initiated at the user level (the client) and executes on the server (the central database), which may be at a distant location, perhaps in another city.
A transaction is a logical, atomic unit of work (block) that contains one or more SQL statements.
Either transaction is done (committed) or not done (rolled back). No partial execution of transaction is allowed.
Example of Funds Transfer
A transaction is a funds transfer from a saving account to a checking account. The transfer consists of the following separate
operations:
1. Decrease the saving account
2. Increase the checking account
3. Record the transaction in the transaction journal
Oracle Database guarantees that all three operations succeed or fail as a unit. In case of failure, all three operations will be rolled
back.
All or Nothing Transactions
If you perform a non-atomic operation that updates several files, and if the system fails halfway through, then the files will not be consistent. In contrast, a transaction moves an Oracle Database from one consistent state to another.
All or nothing, basic principle of atomic execution of a transaction
PL/SQL Transactions Example
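A minimal PL/SQL sketch of the funds-transfer transaction above (the accounts and journal tables, column names, and account numbers are assumptions for illustration):
BEGIN
  UPDATE accounts SET balance = balance - 500 WHERE acct_no = 3209;  -- decrease saving
  UPDATE accounts SET balance = balance + 500 WHERE acct_no = 3208;  -- increase checking
  INSERT INTO journal (acct_no, action, amount) VALUES (3209, 'TRANSFER', 500);
  COMMIT;  -- all three changes become permanent together
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;  -- any failure undoes all three changes (all or nothing)
    RAISE;
END;
/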
Database Architecture
A database server is the key to information management
In general, a server reliably manages a large amount of data in a multiuser environment so that users can concurrently access the
same data
A database server also prevents unauthorized access and provides efficient solutions for failure recovery.
Database & Instance
A database is a set of files, located on disk, that store data. These files can exist independently of a database instance.
An instance is a set of memory structures that manage database files. The instance consists of a shared memory area, called the
system global area (SGA), and a set of background processes. An instance can exist independently of database files.
Each client process is associated with its own server process. The server process has its own private session memory, known as
the program global area (PGA).
Instance DB Configuration
Each database instance is associated with one and only one database. If there are multiple databases on the same server, there is a
separate and distinct database instance for each database. A database instance cannot be shared. A Real Application Clusters
(RAC) database usually has multiple instances on separate servers for
the same shared database. In this model, the same database is associated with each RAC instance, which preserves the requirement that at most one database be associated with an instance.
Database Physical Storage Structure
The files that constitute an Oracle database are organized into the following:
• Control files: Contain data about the database itself (that is, physical database structure information). These files are critical to
the database. Without them, you cannot open data files to access the data in the database. It can also contain metadata related to
backups.
• Data files: Contain the user or application data of the database, as well as metadata and the data dictionary
• Online redo log files: Allow for instance recovery of the database. If the database server crashes and does not lose any data
files, the instance can recover the database with the information in these files.
Installation Startup
Before installing Oracle, you must install a complete Windows OS with all patches and activations, and install Java JDK 1.5 (e.g., jre-1_5_0-windows-i586.exe). Run SETUP.EXE to start the Oracle Universal Installer.
You may get a pop-up if you have a previous Oracle installation; otherwise, proceed with the next choice.
Installation
Lesson 45: MS SQL Server Installation
Installation Steps
Editions of MS SQL Server 2017
Installation …
Installation by Giving Password
Installation in Progress
Installation Complete
Installation …
Once SQL Server 2017 Developer edition is downloaded, double click on it to launch.
Specify the Media Location where we want to save the SQL Server media. The drive should have at minimum free space of 9240
MB in order to extract the media.
Click on New SQL Server stand-alone installation and you will get this message.
Click on SQL*Plus icon and login with your user id and password with database name (SID) that you have given during
installation.
SQL> CONNECT username@"hostname[:port][/DBname]"
Metadata
Repository of information (metadata) describing the data in the database. Data about data is called Metadata
Oracle metadata is managed by Data Dictionary (Read only access)
Data Dictionary is managed by using SQL
Catalogue and Repository are alternate names.
SQL> DESC DICT
Data Dictionary
• Schema objects in the database including default values for columns and integrity constraints
• Shows the amount of space or storage allocated to objects
• Shows database users, and the privileges and roles of users
• Base Tables
These underlying tables store information about the database. Only Oracle database should write to and read these tables.
• Views
These views decode the base table data into useful information, such as user or table names, using joins and WHERE clauses to
simplify the information. These views contain the names and description of all objects in the data dictionary.
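A few hedged sample queries against standard dictionary views:
SELECT table_name FROM user_tables;                  -- tables you own
SELECT username, default_tablespace FROM dba_users;  -- requires DBA privileges
SELECT table_name, comments FROM dictionary
WHERE table_name LIKE 'USER_%';                      -- browse the dictionary itself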
V$PGASTAT
select value from v$pgastat
where name='maximum PGA allocated';
For example, if SGA_TARGET is 272M and PGA_AGGREGATE_TARGET is 90M as shown above, and if the maximum PGA
allocated is determined to be 120M, then MEMORY_TARGET should be at least 392M (272M + 120M).
Lesson 51: Memory Architecture Overview (SGA)
Memory Management & Architecture
• Instance & Database
• Overview of SGA
• Overview of SGA & Database
Memory Architecture Overview
Control Files: Information about where the files are located; manages the overall structure of the database
Overview of SGA & Database
Shared Pool: Holds the executable SQL and PL/SQL code and libraries, the data dictionary cache, and structures for processing queries
Buffer Cache: SQL statements bring data from the data files into the buffer cache
Entries are made into the Redo Log Buffer, where data changes are also managed
There are many background processes, but these are the main five:
DBWR writes any changed data block in the buffer cache (a dirty buffer) to the data files
LGWR manages the log of changes made in the buffer cache and is responsible for writing to the redo log files
CKPT synchronizes the data files and redo log files
SMON: used for recovery
PMON: process monitoring of all activities
Lesson 52: Introduction to Database Administration
Configuring Memory with SGA
• Fundamentals of Configuration
• Granules
• Granule Size
• SGA Memory Size
• SGA_MAX_SIZE
• Setting SGA Target Size
• Manual Shared Memory Management
Fundamentals of Configuration
With automatic shared memory management, you disable automatic memory management and set target and maximum sizes for the SGA.
The database then sets the total size of the SGA to your designated target, and dynamically tunes the sizes of many SGA components.
With manual shared memory management, you set the sizes of several individual SGA components, thereby determining the overall SGA size. You then manually tune these individual SGA components on an ongoing basis.
Similarly, for the instance PGA, there is automatic/ manual PGA memory management, in which you set a target size for the
instance PGA.
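A sketch of the corresponding commands (sizes are illustrative; assumes an SPFILE and SYSDBA privileges):
ALTER SYSTEM SET SGA_MAX_SIZE = 2G SCOPE=SPFILE;  -- static parameter: takes effect after restart
ALTER SYSTEM SET SGA_TARGET = 1536M SCOPE=BOTH;   -- enables automatic shared memory management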
Granules
SGA memory components include
• the shared pool (used to allocate memory for SQL and PL/SQL execution),
• the java pool (used for java objects and other java execution memory), and
• the buffer cache (used for caching disk blocks).
All SGA components allocate and deallocate space in units of granules.
The granule size is based on the value of the SGA_MAX_SIZE initialization parameter.
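One way to check the granule size of a running instance (a hedged example; V$SGAINFO exposes a 'Granule Size' row):
SELECT name, bytes FROM v$sgainfo WHERE name = 'Granule Size';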
Table Size
Login with scott/tiger
DESC dba_segments
select sum(bytes)/1024/1024 Table_size_MB from dba_segments
where segment_name ='EMP';
Lesson 55: What are Control Files & Creating Control Files
Control Files
• What are Control Files?
• Managing Control Files
• Back up Control Files
• Creating Control Files
• Multiplex Copying of Control Files
What are Control Files?
Control file contains
The database name
Names and locations of associated data files and redo log files
The timestamp of the database creation
The current log sequence number
Checkpoint information
Managing Control Files
• DB server writes into control file when DB is open.
• DB is mounted for recovery with control file.
• More control files can be created
Backing up Control Files
• Adding, dropping, or renaming datafiles
• Adding or dropping a tablespace, or altering the read/write state of the tablespace
• Adding or dropping redo log files or groups
Creating Control Files
• Creating Initial Control Files
• Creating Additional Copies, Renaming, and Relocating Control Files
• Creating New Control Files
• Initial Control Files are
CONTROL_FILES = (/u01/oracle/prod/control01.ctl,
/u02/oracle/prod/control02.ctl,
/u03/oracle/prod/control03.ctl)
SQL of V$CONTROLFILE
With a default installation, Oracle contains only two control files.
SQL of V$PARAMETER
E:\APP\HAIDER\ORADATA\ORCL\CONTROL01.CTL E:\APP\HAIDER\FLASH_RECOVERY_AREA\ORCL\CONTROL02.CTL
E:\APP\HAIDER\ORADATA\ORCL\CONTROL03.CTL
Executing steps in creating new control files:
Login to SQL*Plus with user: sys / as sysdba
SELECT name FROM v$controlfile;
ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS 'E:\APP\HAIDER\ORADATA\ORCL\control.bkp';
ALTER SYSTEM SET control_files='E:\APP\HAIDER\ORADATA\ORCL\control01.ctl', 'E:\APP\HAIDER\FLASH_RECOVERY_AREA\ORCL\control02.ctl', 'E:\APP\HAIDER\ORADATA\ORCL\control03.ctl' SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
EXIT
Lesson 58: Recover & Dropping Control Files
• Checking for Missing or Extra Files
• Handling Errors During CREATE CONTROLFILE
• ORA Error - A Sample
• Different Locations of Control Files
• Executing Commands on SQL
• Backing up Control Files
• Viewing Alert Log using Data Dictionary
• Viewing Alert Log using Enterprise Manager
• Finding Trace Files
• Recovery from Permanent Media Failure
• Dropping Control Files
Checking for Missing or Extra Files
• Dropped control files are still shown through data dictionary.
• MISSINGnnnn is a flag in the control file; it gives the status of a file that is seen by the data dictionary but does not actually exist on disk.
• If the actual data file corresponding to MISSINGnnnn is read-only or offline normal, then you can make the data file accessible by
renaming MISSINGnnnn to the name of the actual data file.
• If MISSINGnnnn corresponds to a data file that was not read-only or offline normal, then you cannot use the rename operation to
make the data file accessible, because the data file requires media recovery that is precluded by the results of RESETLOGS.
• In this case, you must drop the tablespace containing the data file.
• Conversely, if a data file listed in the control file is not present in the data dictionary, then the database removes references to it from the new control file. Refer to the alert log in any case.
Handling Errors During CREATE CONTROLFILE
If Oracle Database sends you an error (usually error ORA-01173, ORA-01176, ORA-01177, ORA-01215, or ORA-01216) when
you attempt to mount and open the database after creating a new control file, the most likely cause is that you omitted a file from
the CREATE CONTROLFILE statement or included one that should not have been listed.
ORA Error – A Sample
Oracle Error: ORA-01173
Error Description: Data dictionary indicates a missing data file from the system tablespace
Error Cause: Either the database has been recovered to a point in time in the future of the control file, or a datafile from the system tablespace was omitted from the previously issued create control file command.
Action: For the former problem, you need to recover the database from a more recent control file. For the latter problem, simply recreate the control file, checking to be sure that you include all the datafiles in the system tablespace.
Different Locations of Control Files
Executing Commands on SQL
Example of Multiplexing
First LGWR writes concurrently to both A_LOG1 and B_LOG1. Then it writes concurrently to both A_LOG2 and B_LOG2, and so on. LGWR never writes concurrently to members of different groups (for example, to A_LOG1 and B_LOG2).
Legal & Illegal Multiplexed Redo Log Configuration
Placing Redo Log on Different Locations?
• When setting up a multiplexed redo log, place members of a group on different physical disks. If a single disk fails, then only one
member of a group becomes unavailable to LGWR and other members remain accessible to LGWR, so the instance can continue to
function.
• Minimum size permitted to Redo Log File is 4MB.
When a redo log group is dropped from the database, and you are not using the Oracle Managed Files feature, the operating system files are not deleted from disk. Rather, the control files of the associated database are updated to drop the members of the group from the database structure. After dropping a redo log group, ensure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped redo log files.
ALTER DATABASE DROP LOGFILE GROUP 3;
Consider the following restrictions and precautions before dropping individual redo log members:
• It is permissible to drop redo log files so that a multiplexed redo log becomes temporarily asymmetric. For example, if you use
duplexed groups of redo log files, you can drop one member of one group, even though all other groups have two members each.
• However, you should rectify this situation immediately so that all groups have at least two members, and thereby eliminate the
single point of failure possible for the redo log.
• An instance always requires at least two valid groups of redo log files, regardless of the number of members in the groups. (A
group comprises one or more members.) If the member you want to drop is the last valid member of the group, you cannot drop the
member until the other members become valid.
• To see a redo log file's status, use the V$LOGFILE view.
• You can drop a redo log member only if it is not part of an active or current group. To drop a member of an active group, first force a log switch to occur.
• Make sure the group to which a redo log member belongs is archived (if archiving is enabled) before dropping the member. To see
whether this has happened, use the V$LOG view.
• To drop specific inactive redo log members, use the following statement: ALTER DATABASE DROP LOGFILE
MEMBER '/oracle/dbs/log3c.rdo';
Lesson 64: Forcing Log Switches Between Redo Log Groups & Members
Forcing Log Switches
• Log Switching
• Log Switching – An Example
• Verifying Blocks in Redo Log Files
Log Switching
• A log switch occurs when LGWR stops writing to one redo log group and starts writing to another. By default, a log switch occurs
automatically when the current redo log file group fills.
• You can force a log switch to make the currently active group inactive and available for redo log maintenance operations. For
example, you want to drop the currently active group, but are not able to do so until the group is inactive.
• You may also want to force a log switch if the currently active group must be archived at a specific time before the members of the group are completely filled. This option is useful in configurations with large redo log files that take a long time to fill.
• The following statement forces a log switch: ALTER SYSTEM SWITCH LOGFILE;
Log Switching – An Example
In the 3rd step, if archive mode is disabled, the log will be overwritten and no backup of the redo log will be maintained. Otherwise (when archive mode is enabled), a backup of logA will be taken into ARCn files before switching back to logA.
Verifying Blocks in Redo Log Files
• You can configure the database to use checksums to verify blocks in the redo log files.
• If you set the initialization parameter DB_BLOCK_CHECKSUM to TYPICAL (the default), the database computes a checksum for each database block when it is written to disk, including each redo log block as it is being written to the current log.
• The checksum is stored in the block's header.
• Oracle Database uses the checksum to detect corruption in a redo log block.
• The database verifies the redo log block when the block is read from an archived log during recovery and when it writes the block
to an archive log file. An error is raised and written to the alert log if corruption is detected.
• If corruption is detected in a redo log block while trying to archive it, the system attempts to read the block from another member
in the group.
• If the block is corrupted in all members of the redo log group, then archiving cannot proceed.
• The value of the DB_BLOCK_CHECKSUM parameter can be changed dynamically using the ALTER SYSTEM statement.
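For example (a minimal sketch; TYPICAL, FULL, and OFF are the documented values):
ALTER SYSTEM SET DB_BLOCK_CHECKSUM = TYPICAL;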
Lesson 65: Clearing Redo Log Files
• How to clear Redo Log Files?
• Error in clearing Redo Log Files
• Dropping Redo Log File
• Creating Redo Log File
How to clear Redo Log Files?
A redo log file might become corrupted while the database is open, and ultimately stop database activity because archiving cannot
continue. In this situation the ALTER DATABASE CLEAR LOGFILE statement can be used to reinitialize the file without
shutting down the database.
The following statement clears the log files in redo log group number 3:
o ALTER DATABASE CLEAR LOGFILE GROUP 3;
If you clear a log file that is needed for recovery of a backup, then you can no longer recover from that backup. The database writes
a message in the alert log describing the backups from which you cannot recover.
To clear an unarchived redo log that is needed to bring an offline tablespace online, use the UNRECOVERABLE DATAFILE
clause in the ALTER DATABASE CLEAR LOGFILE statement.
Error in clearing Redo Log Files
• SQL> alter database drop logfile group 1;
• alter database drop logfile group 1
• *
• ERROR at line 1:
• ORA-01624: log 1 needed for crash recovery of instance SBIP18DB (thread 1)
• ORA-00312: online log 1 thread 1: '/SIBIP16/SBIP18DB/SBIP18DB/redo01.log'
Error in clearing Redo Log Files
select l.group#, l.thread#, substr(f.member,1,50) f_name, l.archived,
       l.status, (bytes/1024/1024) fsize
from v$log l, v$logfile f
where f.group# = l.group#
order by 1,2;
Significance of Views
Use of member with status:
select l.group#, l.thread#, substr(f.member,1,50) f_name, l.archived,
       l.status, (bytes/1024/1024) fsize
from v$log l, v$logfile f
where f.group# = l.group#
order by 1,2;
Lesson 67: Archive & NoArchive of Redo Log Files
• Archived Redo Log
• Archived Redo Log Usage
• Archived Redo Log Mode
• Archived Redo Log Disabled to Enabled
• Archived Redo Log Enabled to Disabled
• Redo Logs Use in ARCHIVELOG Mode
Archived Redo Log
• Oracle Database lets you save filled groups of redo log files to one or more offline destinations, known collectively as the archived
redo log.
• The process of turning redo log files into archived redo log files is called archiving. This process is only possible if the database is
running in ARCHIVELOG mode.
• You can choose automatic or manual archiving
• An archived redo log file is a copy of one of the filled members of a redo log group, with its log sequence number.
• For example, if you are multiplexing your redo log, and if group 1 contains identical member files a_log1 and b_log1, then the archiver process (ARCn) will archive one of these member files.
• Should a_log1 become corrupted, then ARCn can still archive the identical b_log1. The archived redo log contains a copy of every
group created since you enabled archiving.
• When the database is running in ARCHIVELOG mode, LGWR cannot reuse and hence overwrite a redo log group until it has been
archived.
• The background process ARCn automates archiving operations when automatic archiving is enabled. The database starts multiple archiver processes as needed to ensure that the archiving of filled redo logs does not fall behind.
Archived Redo Log Usage
You can use archived redo logs to:
• Recover a database
• Update a standby database
• Get information about the history of a database using the LogMiner utility
Archived Redo Log Mode
Display whether DB archive mode is disabled or enabled. Login with sys / as sysdba.
We will discuss archival file backups later.
Archived Redo Log Disabled to Enabled
Use SQL as sys to convert archive logging from disabled to enabled and vice versa. During recovery, Oracle asks you to use
ALTER DATABASE OPEN instead of STARTUP OPEN.
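A sketch of the usual command sequence for switching the mode (run as sys / as sysdba):
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;   -- or NOARCHIVELOG to disable
ALTER DATABASE OPEN;
ARCHIVE LOG LIST             -- verify the current mode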
Archived Redo Log Enabled to Disabled
Redo Logs Use in ARCHIVELOG Mode
Archiving: After a switch, a copy of the Redo Log file is sent to Archive Destination
Setting the Initial Database Archiving Mode
• Usually, you can use the default of NOARCHIVELOG mode at database creation because there is no need to archive the redo
information generated by that process.
• After creating the database, decide whether to change the initial archiving mode.
• If you specify ARCHIVELOG mode, you must have initialization parameters set that specify the destinations for the archived redo
log files
STARTUP MOUNT
• To enable or disable archiving, the database must be mounted but not open.
• Change the database archiving mode. Then open the database for normal operations.
SHUTDOWN IMMEDIATE
• Back up the database.
• Valid/Invalid: indicates whether the disk location or service name information is specified and valid
• Enabled/Disabled: indicates the availability state of the location and whether the database can use the destination
• Active/Inactive: indicates whether there was a problem accessing the destination
Lesson 70: Managing Archive Destination Failure & Controlling Trace
• Rearchiving to a Failed Destination
• Controlling Trace Output
• Initialization Parameter
• Archived Redo Log Views
Rearchiving to a Failed Destination
• Use the REOPEN attribute of the LOG_ARCHIVE_DEST_n parameter to specify whether and when ARCn should attempt to
rearchive to a failed destination following an error. REOPEN applies to all errors, not just OPEN errors.
• REOPEN=n sets the minimum number of seconds before ARCn should try to reopen a failed destination. The default value for n is
300 seconds. A value of 0 is the same as turning off the REOPEN attribute; ARCn will not attempt to archive after a failure.
• If you do not specify the REOPEN keyword, ARCn will never reopen a destination following an error.
• You cannot use REOPEN to specify the number of attempts ARCn should make to reconnect and transfer archived logs. The REOPEN attempt either succeeds or fails.
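A hedged example of setting REOPEN on an archive destination (the path and destination number are illustrative):
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'LOCATION=/u02/archive OPTIONAL REOPEN=60' SCOPE=BOTH;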
When you specify REOPEN for an OPTIONAL destination, the database can overwrite online logs if there is an error. If you
specify REOPEN for a MANDATORY destination, the database stalls the production database when it cannot successfully archive.
In this situation, consider the following options:
• Archive manually to the failed destination.
• Change the destination by deferring the destination, specifying the destination as optional, or changing the service.
• Drop the destination.
Controlling Trace Output
Background processes always write to a trace file when appropriate. In the case of the archive log process, you can control the
output that is generated to the trace file. You do this by setting the LOG_ARCHIVE_TRACE initialization parameter to specify a
trace level.
The following values can be specified on next slide:
Initialization Parameter
You can combine tracing levels by specifying a value equal to the sum of the individual levels that you would like to trace. For
example, setting LOG_ARCHIVE_TRACE=12, will generate trace level 8 and 4 output.
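For example (a minimal sketch):
ALTER SYSTEM SET LOG_ARCHIVE_TRACE = 12;  -- combines trace levels 8 and 4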
Archived Redo Log Views
An extent is a logical unit of database storage space allocation made up of a number of contiguous data blocks. One or more
extents in turn make up a segment. When the existing space in a segment is completely used, Oracle allocates a new extent for the
segment.
Benefits of using Multiple Tablespaces
• Separate user data from data dictionary data to reduce I/O contention.
• Separate the data of one application from the data of another, to prevent multiple applications from being affected when a tablespace must be taken offline.
• Store the data files of different tablespaces on different disk drives to reduce I/O contention.
• Take individual tablespaces offline while others remain online, providing better overall availability.
• In one database one can put off line data of Sales tablespaces when other activities of Purchasing tablespaces are going on
• Optimizing tablespace use by reserving a tablespace for a particular type of database use, such as high update activity, read-only
activity, or temporary segment storage.
• Back up individual tablespaces.
Note: Some operating systems can limit the number of tablespaces that can be simultaneously online. Data file sizes for tablespaces must be reasonable.
Review your data in light of these factors and decide how many tablespaces you need for your database design.
Assigning Tablespace Quotas to Users
Grant to users who will be creating tables, clusters, materialized views, indexes, and other objects the privilege to create the object and a quota (space allowance or limit) in the tablespace intended to hold the object segment. For PL/SQL objects such as packages, procedures, and functions, users only need the privileges to create the objects; no explicit tablespace quota is required to create these PL/SQL objects.
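A hedged example, reusing the user ejaz and the WORK tablespace from the later lesson on tablespaces:
GRANT CREATE TABLE TO ejaz;
ALTER USER ejaz QUOTA 100M ON work;        -- space allowance in WORK
ALTER USER ejaz QUOTA UNLIMITED ON work;   -- or remove the limit entirely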
Types of Tablespaces
SYSTEM tablespace
• Created with the database
• Contains the data dictionary
• Contains the SYSTEM undo segment
Non-SYSTEM tablespace
• Separate segments
• Eases space administration
• Controls amount of space of a user
SYSTEM is a default built-in tablespace. Medcare and MedTest are the enterprise's data tablespaces.
Lesson 73: Creating Tablespaces
• Creating Tablespace Syntax
• Creating Tablespace using SQL
• How a user uses Tablespace?
• Space Management in Tablespace
• Locally-Managed Tablespaces
• Dictionary-Managed Tablespaces
• Create UNDO Tablespace
• Create Temporary Tablespace
• Creating a default Temporary Tablespace
• Restrictions
Creating Tablespace Syntax
READ ONLY, ONLINE, and OFFLINE can also be used.
Creating Tablespace using SQL
Windows Platform:
CREATE TABLESPACE WORK
datafile 'C:\Oracle11g\ordata\wrk01.dbf' size 500M online;
Unix/ Linux Platform:
CREATE TABLESPACE WORK
datafile '/u01/ordata/wrk01.dbf' size 500M online;
How a user uses Tablespace?
Create user ejaz identified by pwd
  default tablespace WORK
  temporary tablespace TEMP;
-- Assigning default roles to user (owner)
Grant connect, resource to ejaz;
Grant create table to ejaz;
Grant create view to ejaz;
Space Management in Tablespace
Tablespaces allocate space in extents. Tablespaces can be created to use one of the following two different methods of keeping
track of free and used space:
Locally managed tablespaces:
The extents are managed within the tablespace via bitmaps. Each bit in the bitmap corresponds to a block or a group of blocks.
Space Management in Tablespace
When an extent is allocated or freed for reuse, the Oracle server changes the bitmap values to show the new status of the blocks.
Locally managed is the default beginning with Oracle9i.
Dictionary-managed tablespaces:
The extents are managed by the data dictionary. The Oracle server updates the appropriate tables in the data dictionary whenever
an extent is allocated or deallocated.
Locally-Managed Tablespaces
• Reduced contention on data dictionary tables
• No undo generated when space allocation or deallocation occurs
• No coalescing required
CREATE TABLESPACE userdata
  datafile 'C:\Ora11g\ordata\usr01.dbf' size 500M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;
Dictionary-Managed Tablespaces
• Extents are managed in the data dictionary
• Each segment stored in the tablespace can have a different storage clause.
• Coalescing required
CREATE TABLESPACE userdata
  datafile 'C:\Ora11g\ordata\usr01.dbf' size 500M
  EXTENT MANAGEMENT DICTIONARY
  DEFAULT STORAGE (INITIAL 1M NEXT 1M PCTINCREASE 0);
Create UNDO Tablespace
• Used to store undo segments
• Cannot contain any other objects
• Extents are locally managed
• Can only use the DATAFILE and EXTENT MANAGEMENT clauses
CREATE UNDO TABLESPACE undo01
  datafile 'C:\Oracle11g\ordata\undo01.dbf' size 40M;
Create Temporary Tablespace
• Used for sort operations
• Can be shared by multiple users
• Cannot contain any permanent objects
• Locally managed extents recommended
CREATE TEMPORARY TABLESPACE temp1
  datafile 'C:\Oracle11g\ordata\temp101.dbf' size 20M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 4M;
When a big data sort cannot be accommodated in memory, the temp tablespace is used.
Creating a default Temporary Tablespace
• After database creation:
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE default_temp1;
• To see the default temporary tablespace, use:
Select property_name from database_properties;
Restrictions
Default temporary tablespace cannot be:
• Dropped until after a new default is made available
• Taken offline
• Altered to a permanent tablespace
Renaming Tablespaces
Using the RENAME TO clause of the ALTER TABLESPACE, you can rename a permanent or temporary tablespace. For
example, the following statement renames the users tablespace:
ALTER TABLESPACE users RENAME TO usersts;
When you rename a tablespace, the database updates all references to the tablespace name in the data dictionary, control file, and (online) data file headers.
To confirm, query DBA_USERS.
Effects of Renaming Tablespaces
• The COMPATIBLE parameter must be set to 10.0.0 or higher.
• If the tablespace being renamed is the SYSTEM tablespace or the SYSAUX tablespace, then it will not be renamed.
• If any data file or the tablespace is offline then the tablespace is not renamed.
• If the tablespace is read only, then data file headers are not updated. This should not be regarded as corruption; instead, it causes a
message to be written to the alert log indicating that data file headers have not been renamed. The data dictionary and control file
are updated.
• If the tablespace is the default temporary tablespace, then the corresponding entry in the database properties table is updated and
the DATABASE_PROPERTIES view shows the new name.
• If the tablespace is an undo tablespace and if the following conditions are met, then the tablespace name is changed to the new
tablespace name in the server parameter file (SPFILE).
The server parameter file was used to start up the database.
The tablespace name is specified as the UNDO_TABLESPACE for any instance.
Dropping Tablespaces
• You can drop a tablespace and its contents (the segments contained in the tablespace) from the database if the tablespace and its
contents are no longer required. You must have the DROP TABLESPACE system privilege to drop a tablespace.
• When you drop a tablespace, the file pointers in the control file of the associated database are removed. You can optionally direct
Oracle Database to delete the operating system files (data files) that constituted the dropped tablespace.
• Off line tablespace before dropping it.
• You cannot drop a tablespace that contains any active segments.
• For example, if a table in the tablespace is currently being used or the tablespace contains undo data needed to roll back
uncommitted transactions, you cannot drop the tablespace.
• To drop a tablespace, use the DROP TABLESPACE statement. The following statements drop the users tablespace, including the segments in the tablespace:
Alter tablespace users offline;
DROP TABLESPACE users INCLUDING CONTENTS;
To delete the data files associated with a tablespace at the same time that the tablespace is dropped, use the INCLUDING
CONTENTS AND DATAFILES clause. The following statement drops the users tablespace and its associated data files:
DROP TABLESPACE users INCLUDING CONTENTS AND DATAFILES;
Dropping Tablespaces - Example
create tablespace test
  datafile 'E:\app\Haider\oradata\tst01.dbf' size 100M;
Tablespace created.
create user usr1 identified by xyz
  default tablespace test
  temporary tablespace temp;
grant connect, resource to usr1;
grant create table to usr1;
alter tablespace test offline;
shutdown immediate
startup open
DROP TABLESPACE test including contents;
conn usr1/xyz
SQL> create table AA
  2  (id number);
create table
*
ERROR at line 1:
ORA-00959: tablespace 'TEST' does not exist
Segments in a Tablespace
Scenario-2
Dropping a corrupted segment:
In this scenario, perform the following tasks:
1. Call the SEGMENT_VERIFY procedure with the SEGMENT_VERIFY_EXTENTS_GLOBAL option. If no overlaps
are reported, then proceed with steps 2 through 5.
2. Call the SEGMENT_DUMP procedure to dump the DBA ranges allocated to the segment.
3. For each range, call TABLESPACE_FIX_BITMAPS with the TABLESPACE_EXTENT_MAKE_FREE option to
mark the space as free.
4. Call SEGMENT_DROP_CORRUPT to drop the SEG$ entry.
5. Call TABLESPACE_REBUILD_QUOTAS to rebuild quotas
Scenario-3
Fixing Bitmap Where Overlap is Reported:
The TABLESPACE_VERIFY procedure reports some overlapping. Some of the real data must be sacrificed based on previous
internal errors.
After choosing the object to be sacrificed, in this case say, table t1, perform the following tasks:
1. Make a list of all objects that t1 overlaps.
2. Drop table t1. If necessary, follow up by calling the SEGMENT_DROP_CORRUPT procedure.
3. Call the SEGMENT_VERIFY procedure on all objects that t1 overlapped. If necessary, call the TABLESPACE_FIX_BITMAPS procedure to mark appropriate bitmap blocks as used.
4. Rerun the TABLESPACE_VERIFY procedure to verify that the problem is resolved
Scenario-4
Correcting Media Corruption of Bitmap Blocks A set of bitmap blocks has media corruption.
In this scenario, perform the following tasks:
1. Call the TABLESPACE_REBUILD_BITMAPS procedure, either on all bitmap blocks, or on a single block if only one is corrupt.
2. Call the TABLESPACE_REBUILD_QUOTAS procedure to rebuild quotas.
3. Call the TABLESPACE_VERIFY procedure to verify that the bitmaps are consistent.
Scenario-5
Assume that the database block size is 2K and the existing extent sizes in tablespace tbs_1 are 10, 50, and 10,000 blocks (used,
used, and free). The MINIMUM EXTENT value is 20K (10 blocks). Allow the system to choose the bitmap allocation unit. The
value of 10 blocks is chosen, because it is the highest common denominator and does not exceed MINIMUM EXTENT.
The statement to convert tbs_1 to a locally managed tablespace is as follows:
EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('tbs_1');
If you choose to specify an allocation unit size, it must be a factor of the unit size calculated by the system.
Lesson 81: Migrating the System Tablespace
• Introduction
• Other tablespaces
• Pre-requisites of migration
• Command for migration
• Importing and exporting data of tablespaces
Introduction
What is the SYSTEM tablespace?
• It is required when the database is created and whenever the database starts being used.
• It is a dictionary-managed tablespace containing pre-bundled code and the data dictionary.
• It contains system objects, your objects, and users/privileges.
• It contains the AWR: Automatic Workload Repository.
Other Tablespaces
Following tablespaces are required with the SYSTEM tablespace
• UNDO TS
  o Gives users the ability to roll back
  o Holds the old values of DML changes
  o Provides read consistency of data
• Temporary TS
  o Contents are temporarily stored in this TS
  o Used for ORDER BY, DISTINCT, and index builds when the PGA overflows
  o A kind of rough work area
• USERS TS
  o Created for users; records which users will use which TS
Pre-requisites of Migration
Before performing the migration the following conditions must be met:
• The database has a default temporary tablespace that is not SYSTEM.
• There are no rollback segments in the dictionary-managed tablespace.
• There is at least one online rollback segment in a locally managed tablespace, or if using automatic undo management, an undo
tablespace is online.
• All tablespaces other than the tablespace containing the undo space (that is, the tablespace containing the rollback segment or the
undo tablespace) are in read-only mode.
• The SYSAUX tablespace is offline.
• The system is in restricted mode.
• There is a cold backup of the database.
Command for Migration
The following statement performs the migration:
SQL> EXECUTE DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('SYSTEM');
Importing & Exporting Data of Tablespaces
Use the following export command on the source database:
C:\>exp usr/pwd file=c:\test\f1.dmp
Multiple dump files with a size limit can also be specified:
exp scott/tiger FILE = dat1.dmp, dat2.dmp, dat3.dmp FILESIZE=2048
Shut down and start up the database, then run a complicated query to see whether the new tablespace is getting filled; see the alert.log file for more details.
To check whether the default tablespace has been created: select * from database_properties;
• A tablespace must be empty before dropping it:
Drop tablespace temp including contents and datafiles;
• Shutdown and startup the database again
• Drop related files from the folder, such as TEMP01.DBF
Verifying TEMP Tablespace
V$Tablespaces
To display the columns of V$Tablespaces:
Select column names from V$Tablespaces:
DBA_DATA_FILES
Display name and data file size in MB:
DBA_TEMP_FILES
Display name and data file size in MB:
Logical units of database space allocation are data blocks, extents, segments, and tablespaces. The data in the data files is stored in operating system blocks.
Tablespace
• Tablespace stores space logically to create logical objects such as Tables, Indexes and other objects.
• Tablespace’s physically space stored in associated files.
Database Block
• Logical minimum storage unit is called database block.
• Its size is decided during database creation.
• 5 default sizes: 2K, 4K, 8K, 16K, 32K (K: Kilobytes)
Database Block, Extents and Segments
One of these block sizes is chosen.
At the finest level of granularity, Oracle Database stores data in data blocks. One logical data block corresponds to a specific
number of bytes of physical disk space, for example, 2 KB. Data blocks are the smallest units of storage that Oracle Database can
use or allocate.
An extent is a set of logically contiguous data blocks allocated for storing a specific type of information. In Figure 12-2, the 24 KB
extent has 12 data blocks, while the 72 KB extent has 36 data blocks.
A segment is a set of extents allocated for a specific database object, such as a table. For example, the data for the employees table
is stored in its own data segment, whereas each index for employees is stored in its own index segment. Every database object that
consumes storage consists of a single segment.
Each segment belongs to one and only one tablespace. Thus, all extents for a segment are stored in the same tablespace. Within a
tablespace, a segment can include extents from multiple data files, as shown in Figure 12-2. For example, one extent for a segment
may be stored in users01.dbf, while another is stored in users02.dbf. A single extent can never span data files
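To see this layout for an actual segment, one can query DBA_EXTENTS (a hedged example; requires DBA privileges and assumes SCOTT's EMP table):
SELECT segment_name, extent_id, file_id, blocks
FROM dba_extents
WHERE owner = 'SCOTT' AND segment_name = 'EMP'
ORDER BY extent_id;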
Database Block, Extents and Segments within Database
• At the finest level of granularity, Oracle Database stores data in data blocks.
• One logical data block corresponds to a specific number of bytes of physical disk space. Block sizes can be 2 KB, 4KB or more.
• Data block is the smallest unit of storage that Oracle Database can use or allocate.
Lesson 85: Overview of Database Blocks
• What is Data Block?
• Data Blocks & OS Blocks
• Interaction of Data Blocks & OS Blocks
• Database Block Size
• Data Block Format
• Row Format
• RowID Pseudocolumn
What is Data Block?
Oracle Database manages the logical storage space in the data files of a database in units called data blocks, also called Oracle
blocks or pages. A data block is the minimum unit of database I/O.
Data Blocks & OS Blocks
• At the physical level, database data is stored in disk files made up of operating system blocks.
• An operating system block is the minimum unit of data that the operating system can read or write.
• In contrast, an Oracle block is a logical storage structure whose size and structure are not known to the operating system.
Figure shows that operating system blocks may differ in size from data blocks. The database requests data in multiples of data
blocks, not operating system blocks.
Interaction of Data Blocks & OS Blocks
When the database requests a data block, the operating system translates this operation into a request for data in permanent storage. The logical separation of data blocks from operating system blocks has the following implications:
• Applications do not need to determine the physical addresses of data on disk.
• Database data can be striped or mirrored on multiple physical disks.
Database Block Size
• Every database has a database block size. The DB_BLOCK_SIZE initialization parameter sets the data block size for a database
when it is created.
• The size is set for the SYSTEM and SYSAUX tablespaces and is the default for all other tablespaces.
• The database block size cannot be changed except by re-creating the database.
• If DB_BLOCK_SIZE is not set, then the default data block size is operating system-specific. The standard data block size for a database is 4 KB or 8 KB.
• If the size differs for data blocks and operating system blocks, then the data block size must be a multiple of the
operating system block size
• You can create individual tablespaces whose block size differs from the DB_BLOCK_SIZE setting.
• A nonstandard block size can be useful when moving a transportable tablespace to a different platform.
Data Block Format
Every data block has a format or internal structure that enables the database to track the data and free space in the block. This
format is similar whether the data block contains table, index, or table cluster data
Block header
This part contains general information about the block, including disk address and segment type. For blocks that are transaction-
managed, the block header contains active and historical transaction information.
A transaction entry is required for every transaction that updates the block. Oracle
Database initially reserves space in the block header for transaction entries. In data blocks allocated to segments that support
transactional changes, free space can also hold transaction entries when the header space is depleted. The space required for
transaction entries is operating system dependent. However, transaction entries in most operating systems require approximately 23
bytes.
■ Table directory
For a heap-organized table, this directory contains metadata about tables whose rows are stored in this block. Multiple tables can
store rows in the same block.
■ Row directory
For a heap-organized table, this directory describes the location of rows in the data portion of the block.
After space has been allocated in the row directory, the database does not reclaim this space after row deletion. Thus, a block that is
currently empty but formerly had up to 50 rows continues to have 100 bytes allocated for the row directory. The database reuses
this space only when new rows are inserted in the block.
Row Format
The row data part of the block contains the actual data, such as table rows or index key entries. Just as every data block has an
internal format, every row has a row format that enables the database to track the data in the row.
RowID Pseudocolumn
Query the ROWID pseudocolumn to show the extended rowid of the row in the employees table for employee 100.
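A sketch of such a query (assumes the HR sample schema's employees table):
SELECT ROWID, last_name FROM employees WHERE employee_id = 100;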
Lesson 86: Percentage of Free Space in Blocks (PCTFREE)
Percentage of Free Space Blocks
• What is Free Space in Data Block?
• Data Blocks & OS Blocks
• Percentage of Free Space in Data Blocks
• PCTFREE Space in Data Block
• Optimizing of Free Space
• Reuse of Index Space
• Row Migration
What is Free Space in Data Block?
• As the database fills a data block from the bottom up, the amount of free space between the row data and the block header
decreases. This free space can also shrink during updates, as when changing a trailing null to a non-null value.
• The database manages free space in the data block to optimize performance and avoid wasted space (Tuning Feature).
Data Blocks & OS Blocks
• At the physical level, database data is stored in disk files made up of operating system blocks.
• An operating system block is the minimum unit of data that the operating system can read or write.
• In contrast, an Oracle block is a logical storage structure whose size and structure are not known to the operating system.
Percentage of Free Space in Data Blocks
• The PCTFREE storage parameter is essential to how the database manages free space. This SQL parameter sets the minimum
percentage of a data block reserved as free space for updates to existing rows.
• PCTFREE is important for preventing row migration and avoiding wasted space.
PCTFREE Space in Data Block
Shows how a PCTFREE setting of 20 affects space management. The database adds rows to the block over time, causing the row data to grow upwards toward the block header, which is itself expanding downward toward the row data.
The PCTFREE setting ensures that at least 20% of the data block is free. For example, the database prevents an INSERT statement from filling the block so that the row data and header occupy a combined 90% of the total block space, leaving only 10% free.
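A minimal DDL sketch setting PCTFREE (the table name and columns are illustrative assumptions):
CREATE TABLE patient_notes (
  note_id  NUMBER,
  note_txt VARCHAR2(2000)
) PCTFREE 20;   -- reserve at least 20% of each block for updates to existing rows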
Optimizing of Free Space
• While the percentage of free space cannot be less than PCTFREE, the amount of free space can be greater. For example, a
PCTFREE setting of 20% prevents the total amount of free space from dropping to 5% of the block, but permits 50% of the block
to be free space.
The following SQL statements can increase free space:
• DELETE statements
• UPDATE statements that either update existing values to smaller values or increase existing values and force a row to migrate
• INSERT statements on a table that uses OLTP compression. If inserts fill a block with data, then the database invokes block
compression, which may result in the block having more free space.
• The space released is available for INSERT under the following conditions:
• If the INSERT statement is in the same transaction and after the statement that frees space, then the statement can use the space.
• If the INSERT statement is in a separate transaction from the statement that frees space (perhaps run by another user), then the
statement can use the space made available only after the other transaction commits and only if the space is needed.
Reuse of Index Space
The database can reuse space within an index block.
For example, if you insert a value into a column and delete it, and if an index exists on this column, then the database can reuse the
index slot when a row requires it
Row Migration
The left block contains a row that is updated so that the row is now too large for the block. The database moves the entire row to the right block and leaves a pointer to the migrated row in the left block.
Lesson 87: Overview of Extents
• What is Extent?
• Incremental Extents of Segment
• Deallocation of Extents
• Manually Deallocate Space
• Sample SQL DDL using Extent
What is Extent?
An extent is a logical unit of database storage space allocation made up of contiguous data blocks. Data blocks in an extent are
logically contiguous but can be physically spread out on disk because of RAID striping and file system implementations.
Data Files
What is an Extent?
By default, the database allocates an initial extent for a data segment when the segment is created. An extent is always contained in
one data file. Although no data has been added to the segment, the data blocks in the initial extent are reserved for this segment
exclusively. The first data block of every segment contains a directory of the extents in the segment.
Figure shows the initial extent in a segment in a data file that previously contained no data.
Incremental Extents of Segment
If the initial extent becomes full, and if more space is required, then the database automatically allocates an incremental extent for
this segment. An incremental extent is a subsequent extent created for the segment. The allocation algorithm depends on whether
the tablespace is locally managed or dictionary-managed. In the locally managed case, the database searches the bitmap of a data
file for adjacent free blocks. If the data file has insufficient space, then the database looks in another data file. Extents for a
segment are always in the same tablespace but may be in different data files. Figure shows that the database can allocate extents for
a segment in any data file in the tablespace. For example, the segment can allocate the initial extent in users01.dbf, allocate the first
incremental extent in users02.dbf, and allocate the next extent in users01.dbf.
Deallocation of Extents
In general, the extents of a user segment do not return to the tablespace unless you drop the object using a DROP command. In
Oracle Database 11g Release 2, you can also drop the segment using the DBMS_SPACE_ADMIN package. For example, if you
delete all rows in a table, then the database does not reclaim the data blocks for use by other objects in the tablespace.
Manually Deallocate Space
Following techniques can deallocate extents
• You can use an online segment shrink to reclaim fragmented space in a segment. Segment shrink is an online, in-place operation.
• You can move the data of a non partitioned table or table partition into a new segment, and optionally into a different tablespace for
which you have quota.
• You can rebuild or coalesce the index
• You can truncate a table or table cluster, which removes all rows. By default, Oracle Database deallocates all space used by the
removed rows except that specified by the MINEXTENTS storage parameter.
• You can deallocate unused space, which frees the unused space at the high water mark end of the database segment and makes the
space available for other segments in the tablespace
Sample SQL DDL using Extent
CREATE TABLESPACE MCare_indx
DATAFILE 'D:\oracle11g\ora11g\oradata\MCare_indx01.dbf' SIZE 100M
MINIMUM EXTENT 64K
DEFAULT STORAGE (INITIAL 64K NEXT 64K);
Data Files
For example, the Patient table's segment consists of three extents from two data files.
A collection of extents, possibly from multiple data files, is called a segment. A collection of segments is called a tablespace.
User Segments
A single data segment in a database stores the data for one user object. There are different types of segments. Examples of user
segments include:
• Table, table partition, or table cluster
• LOB or LOB partition
• Index or index partition
Each non partitioned object and object partition is stored in its own segment. For example, if an index has five partitions, then five
segments contain the index data.
User Segments Creation
By default, the database uses deferred segment creation to update only database metadata when creating tables and indexes.
When a user inserts the first row into a table or partition, the database creates segments for the table or partition, its LOB columns,
and its indexes.
Deferred segment creation enables you to avoid using database resources unnecessarily.
For example, installation of an application can create thousands of objects, consuming significant disk space. Many of these objects may never be used.
You can use the DBMS_SPACE_ADMIN package to manage segments for empty objects.
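A hedged sketch of deferred segment creation (11g Release 2 syntax; the table name is illustrative):
CREATE TABLE t (id NUMBER) SEGMENT CREATION DEFERRED;
SELECT segment_created FROM user_tables WHERE table_name = 'T';  -- NO until the first row arrives
INSERT INTO t VALUES (1);   -- the first insert triggers segment creation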
Creation of Multiple Segments
Cluster Key
• The cluster key is the column or columns that the clustered tables have in common.
• For example, the employees and departments tables share the department_id column.
• You specify the cluster key when creating the table cluster and when creating every table added to the table cluster.
Cluster Key Value
• The cluster key value is the value of the cluster key columns for a particular set of rows. All data that contains the same cluster key
value, such as department_id=20, is physically stored together. Each cluster key value is stored only once in the cluster and the
cluster index, no matter how many rows of different tables contain the value.
What is a Table?
• Tables are the basic unit of data storage in an Oracle Database. Data is stored in rows and columns. You define a table with a table
name, such as employees, and a set of columns.
• You give each column a column name, such as emp_id, last_name, and job_id; a data type, such as VARCHAR2, DATE, or
NUMBER; and a width. The width can be predetermined by the data type, as in DATE.
• If columns are of the NUMBER data type, define precision and scale instead of width. A row is a collection of column information
corresponding to a single record.
• You can specify rules for each column of a table. These rules are called integrity constraints. One example is a NOT NULL
integrity constraint. This constraint forces the column to contain a value in every row.
• Some column types, such as LOBs, varrays, and nested tables, are stored in their own segments. LOBs and varrays are stored in
LOB segments, while nested tables are stored in storage tables.
Heap-Organized Table
• This is the basic, general purpose type of table which is the primary subject of this chapter. Its data is stored as an unordered collection (heap).
Clustered Table
• A clustered table is a table that is part of a cluster. A cluster is a group of tables that share the same data blocks because they share
common columns and are often used together
Index-Organized Table
• Unlike an ordinary (heap-organized) table, data for an index-organized table is stored in a B-tree index structure in a primary key
sorted manner.
• Besides storing the primary key column values of an index-organized table row, each index entry in the B-tree stores the non-key
column values as well
Partitioned Table
• Partitioned tables enable your data to be broken down into smaller, more manageable pieces called partitions, or even subpartitions.
Each partition can have separate physical attributes, such as compression enabled or disabled, type of compression, physical
storage settings, and tablespace, thus providing a structure that can be better tuned for availability and performance.
• In addition, each partition can be managed individually, which can simplify and reduce the time required for backup and
administration.
Alter Table
You have the option of rebuilding the index online. Rebuilding online enables you to update base tables at the same time that you
are rebuilding.
The following statement rebuilds the emp_name index online:
ALTER INDEX emp_name REBUILD ONLINE;
Making Index Visible/ Invisible
An invisible index is ignored by the optimizer unless you explicitly set the OPTIMIZER_USE_INVISIBLE_INDEXES
initialization parameter to TRUE at the session or system level. Making an index invisible is an alternative to making it unusable or
dropping it.
ALTER INDEX index INVISIBLE;
ALTER INDEX index VISIBLE;
Making an Index Usable
Query the data dictionary to determine whether an existing index or index partition is usable or unusable:
SELECT INDEX_NAME AS "INDEX OR PART NAME", STATUS, SEGMENT_CREATED
FROM USER_INDEXES
UNION ALL
SELECT PARTITION_NAME AS "INDEX OR PART NAME", STATUS, SEGMENT_CREATED
FROM USER_IND_PARTITIONS;
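If the query reports an UNUSABLE index, rebuilding makes it usable again; a minimal sketch, with the index and partition names assumed:

ALTER INDEX emp_name REBUILD;
-- or, for a single unusable partition of a partitioned index:
ALTER INDEX emp_name REBUILD PARTITION p1;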
Lesson 101: Recycle Bin
Schema Recycle Bin
• What is the Recycle Bin?
• Object Naming in the Recycle Bin
• Recycle Bin Tables
• Enable or Disable Recycle Bin
• Purging Objects in the Recycle Bin
• Restoring Objects From Recycle Bin
What is the Recycle Bin?
• The recycle bin is actually a data dictionary table containing information about dropped objects.
• Dropped tables and any associated objects such as indexes, constraints, nested tables, and the likes are not removed and still
occupy space.
• They continue to count against user space quotas, until specifically purged from the recycle bin or the unlikely situation where they
must be purged by the database because of tablespace space constraints.
• Each user can be thought of as having his own recycle bin because, unless a user has the SYSDBA privilege, the only objects the user can access in the recycle bin are those that the user owns.
SELECT * FROM RECYCLEBIN;
Object Naming in the Recycle Bin
When a dropped table is moved to the recycle bin, the table and its associated objects are given system-generated names. This is necessary to avoid name conflicts that may arise if multiple tables have the same name. Example circumstances are:
• A user drops a table, re-creates it with the same name, then drops it again.
• Two users have tables with the same name, and both users drop their tables.
The system-generated name has the form BIN$unique_id$version.
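A minimal sketch of the recycle bin in action, assuming a disposable table T1:

ALTER SESSION SET recyclebin = ON;   -- enable (OFF disables) for this session
DROP TABLE t1;

-- The dropped table now appears under a BIN$unique_id$version name:
SELECT object_name, original_name FROM recyclebin;

FLASHBACK TABLE t1 TO BEFORE DROP;   -- restore it
-- PURGE TABLE t1;                   -- or remove it permanently instead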
Recycle Bin Tables
Data Dictionary Views for VIEWS
GRANT CREATE VIEW TO scott;

CREATE VIEW v_emp AS
SELECT job, COUNT(*) count_emp
FROM emp
GROUP BY job;
Data Dictionary Views for VIEWS
The following views can be used to see views and synonyms:
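A hedged sketch of the relevant dictionary queries:

SELECT view_name, text FROM user_views;
SELECT synonym_name, table_owner, table_name FROM user_synonyms;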
Lesson 117: Introduction to Database Server Programming using PL/SQL
Introduction to PL/SQL
• What is PL/SQL
• PL/SQL & Other Languages
What is PL/SQL
• Embedded structured programming
• Performed on the server machine
• Centrally accessible from a single database
• Executed on the server without any network overhead
PL/SQL & Other Languages
• Embedded SQL (PL/SQL, JAVA/ VB & DB)
• Database Server Level Programming(PL/SQL, Transact-SQL, IBM DB2-Cobol, ProC, ProCobol)
• (Front end programming can be changed without affecting Server programming)
• Database Client Programming (Developer 9i, JDeveloper 9i, Java (J2EE), VB, .Net)
Lesson 118: PL/SQL Constructs
• SQL & PL/SQL
• PL/SQL Data Types
• PL/SQL Contents
SQL & PL/SQL
SQL & PL/SQL on Application & Server Sides
PL/SQL Contents
• Benefits
• Basic Constructs
• Anonymous blocks
• Procedures
• Functions
• Packages
• Triggers
• Cursors
• Dynamic SQL
Lesson 119: PL/SQL Benefits & Basic Constructs
• PL/SQL Benefits
• PL/SQL Basic Constructs
PL/SQL Benefits
More powerful than pure SQL because it combines the power of SQL and
• Iteration (loops)
• Selection (Ifs)
• Cursors
• Block Structures
• Stored Procedures
• etc.
PL/SQL Basic Constructs
• Basic structure
• Running a program
• Variables
• SELECT INTO
• Comments
• IFs
• LOOPs
• Output
Program Compilation
An anonymous block is compiled and executed in a single step, so the following program's error surfaces only when the block is run.
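A minimal sketch of an anonymous block (run SET SERVEROUTPUT ON first in SQL*Plus):

DECLARE
  v_msg VARCHAR2(30) := 'Hello, PL/SQL';
BEGIN
  DBMS_OUTPUT.PUT_LINE(v_msg);
END;
/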
Lesson 122: Basic Programs Block Structure with more examples
Basic Program Structure
• Program execution with an error
• Program fixing of an error
• Program with SELECT INTO
Program execution with an error
The following program executes with an error during compilation/execution:
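As one hedged illustration, a block using SELECT INTO raises NO_DATA_FOUND at run time when the query returns no row (EMP table assumed):

DECLARE
  v_sal emp.sal%TYPE;
BEGIN
  SELECT sal INTO v_sal FROM emp WHERE empno = 9999;  -- no such employee
  DBMS_OUTPUT.PUT_LINE('Salary: ' || v_sal);
END;
/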
FOR Loop
Following is its Syntax:
FOR index IN lower_bound .. upper_bound LOOP
  statements;
END LOOP;
FOR Loop Syntax
Following is its generic syntax:
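A minimal working example of the FOR loop:

BEGIN
  FOR i IN 1 .. 5 LOOP
    DBMS_OUTPUT.PUT_LINE('Iteration ' || i);
  END LOOP;
END;
/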
Other Exceptions
Examples are
NO_DATA_FOUND, ZERO_DIVIDE, OTHERS
Raise an exception from a handler, for example:
WHEN NO_DATA_FOUND THEN
  raise_application_error(-20011, 'Invalid FK value');
  -- or re-raise a user-defined exception:
  RAISE my_excep;
To display the details of an Oracle standard error message:
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('Error detail is: ' || SQLERRM);
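Putting these pieces together, a minimal sketch of a block with an exception section (EMP table assumed):

DECLARE
  v_ename emp.ename%TYPE;
BEGIN
  SELECT ename INTO v_ename FROM emp WHERE empno = 9999;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    DBMS_OUTPUT.PUT_LINE('No such employee.');
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('Error detail is: ' || SQLERRM);
END;
/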
Lesson 131: PL/SQL Procedure
PL/SQL Procedure
What is a Procedure in PL/SQL?
Procedure - Example
Compiling & Executing Procedure
Invoking a Procedure in a Block
Executing a Procedure & Show Errors
Data Dictionary for Procedures
Managing Procedure using SQL Developer
What is a Procedure in PL/SQL?
• Is a named block
• The DECLARE keyword is not used
• Parameters can be IN, OUT, or IN OUT
• Is stored in the data dictionary (USER_SOURCE)
Procedure – Example
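A minimal sketch of a stored procedure; raise_salary and the EMP table are assumed for illustration:

CREATE OR REPLACE PROCEDURE raise_salary (
  p_empno IN  emp.empno%TYPE,
  p_pct   IN  NUMBER,
  p_new   OUT emp.sal%TYPE
) AS
BEGIN
  UPDATE emp
  SET sal = sal * (1 + p_pct / 100)
  WHERE empno = p_empno
  RETURNING sal INTO p_new;
END raise_salary;
/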
Calling Function
After compilation, a function can be called from another function, procedure, block, or package, for example by assigning its return value to a variable.
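A minimal sketch of a function and a call that assigns its result to a variable (get_sal is an assumed name; EMP table assumed):

CREATE OR REPLACE FUNCTION get_sal (p_empno IN emp.empno%TYPE)
  RETURN NUMBER
AS
  v_sal emp.sal%TYPE;
BEGIN
  SELECT sal INTO v_sal FROM emp WHERE empno = p_empno;
  RETURN v_sal;
END get_sal;
/

DECLARE
  v NUMBER;
BEGIN
  v := get_sal(7369);
  DBMS_OUTPUT.PUT_LINE('Salary: ' || v);
END;
/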
• Function - Exercise-1
• Function - Exercise-2
Function – Exercise-2
Lesson 135: Exercises of Procedures
Exercises of Procedures
• Compiling & Executing Function
• Procedure Exercise-1
Compiling & Executing Procedure
DECLARE
  ...
BEGIN
  proc_test('23');
END;
/
Or:
SQL> EXEC proc_test('1123')
Procedure – Exercise-1
Lesson 136: PL/SQL Package-1
PL/SQL Package-1
Purpose of PL/SQL Package
PL/SQL Package contains …
Schematic Diagram of Package
PL/SQL Package Specification
PL/SQL Package Body
Package Specification & its Body
Purpose of PL/SQL Package
In PL/SQL, a package is a schema object that contains definitions for a group of related functionalities.
A package includes variables, constants, cursors, exceptions, procedures, functions, and subprograms.
It is compiled and stored in the Oracle Database.
PL/SQL Package contains …
• Typically, a package has a specification and a body. A package specification is mandatory while the package body can be required
or optional, depending on the package specification.
Schematic Diagram of Package
https://www.oracletutorial.com/plsql-tutorial/plsql-package/
PL/SQL Package Specification
The package specification declares the public objects that are accessible from outside the package.
If a package specification declares cursors or subprograms, then it must have a body that defines the queries for the cursors and the code for the subprograms.
PL/SQL Package Body
A package body contains the implementation of the cursors or subprograms declared in the package specification. In the package
body, you can declare or define private variables, cursors, etc., used only by package body itself.
A package body can have an initialization part whose statements initialize variables or perform other one-time setups for the whole
package.
A package body can also have an exception-handling part used to handle exceptions.
Package Specification & its Body
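A minimal sketch of a specification and its matching body (emp_api is an assumed name; EMP table assumed):

CREATE OR REPLACE PACKAGE emp_api AS
  FUNCTION emp_count RETURN NUMBER;
END emp_api;
/

CREATE OR REPLACE PACKAGE BODY emp_api AS
  FUNCTION emp_count RETURN NUMBER IS
    v_n NUMBER;
  BEGIN
    SELECT COUNT(*) INTO v_n FROM emp;
    RETURN v_n;
  END emp_count;
END emp_api;
/

-- Invoking the package subprogram:
BEGIN
  DBMS_OUTPUT.PUT_LINE(emp_api.emp_count);
END;
/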
Lesson 137: PL/SQL Package-2
• What is a Package in PL/SQL?
• Package Specification
• Package Body
• Adding to a package body while hiding items from the user's specification
• Invoking Package Subprogram
What is a Package in PL/SQL?
A package manages a selected, purposeful collection of procedures, functions, and variables as a single unit.
• Package Specification
• Package Body
• Invoking a package subprogram
• Supports abstract programming
• Avoids scattering related procedures & functions
• Is stored in the data dictionary (USER_SOURCE)
Package Specification
Fired Triggers
You may need to see which triggers are fired on commands such as UPDATE and DELETE on a table with specific columns.
Cursor’s usage
• Declare a cursor
CURSOR cursor_name IS query;
• Open a cursor
OPEN cursor_name;
• Fetch from a cursor
The FETCH statement places the contents of the current row into the variables:
FETCH cursor_name INTO variable_list;
To retrieve all rows in a result set, you need to fetch each row till the last one.
• Close a cursor
CLOSE cursor_name;
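A minimal end-to-end sketch of these four steps (EMP table assumed):

DECLARE
  CURSOR c_emp IS
    SELECT empno, ename FROM emp WHERE deptno = 20;
  v_empno emp.empno%TYPE;
  v_ename emp.ename%TYPE;
BEGIN
  OPEN c_emp;
  LOOP
    FETCH c_emp INTO v_empno, v_ename;
    EXIT WHEN c_emp%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(v_empno || ' ' || v_ename);
  END LOOP;
  CLOSE c_emp;
END;
/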
Explicit Cursor Attributes
• %ISOPEN
• %FOUND
• %NOTFOUND
• %ROWCOUNT
Lesson 143: Cursor Programming-2
Cursor Programming-2
• Explicit Cursor Attributes
• Implicit Cursor - Record Type
• ROWTYPE Example - 1
• ROWCOUNT Example
• How to Declare Cursor?
• %ISOPEN, %FOUND, FETCH Example
Explicit Cursor Attributes
• %ISOPEN
• %FOUND
• %NOTFOUND
• %ROWCOUNT
Implicit Cursor – Record Type
• A record is a composite data structure, which means that it is composed of one or more elements.
• Records are very much like a row of a database table, but each element of the record does not stand on its own. PL/SQL supports
three kinds of records: table-based, cursor-based, and programmer-defined.
v_EmpRec emp%ROWTYPE;

CURSOR c_emp IS
  SELECT empno, ename, job FROM emp
  WHERE deptno = 20;

vr_emp c_emp%ROWTYPE;
ROWTYPE Example - 1
ROWTYPE Example - 2
ROWCOUNT Example
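A minimal sketch combining %ROWTYPE and %ROWCOUNT (EMP table assumed):

DECLARE
  CURSOR c_emp IS
    SELECT empno, ename, job FROM emp WHERE deptno = 20;
  vr_emp c_emp%ROWTYPE;
BEGIN
  OPEN c_emp;
  LOOP
    FETCH c_emp INTO vr_emp;
    EXIT WHEN c_emp%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(c_emp%ROWCOUNT || ': ' || vr_emp.ename);
  END LOOP;
  CLOSE c_emp;
END;
/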
https://www.tutorialspoint.com/plsql/plsql_collections.htm
Declaration of an Array
A varray type is created with the CREATE TYPE statement. You must specify the maximum size and the type of elements stored
in the varray.
CREATE OR REPLACE TYPE namearray AS VARRAY(3) OF VARCHAR2(10);
/
Basic Example of an Array
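A minimal sketch using the namearray type declared above:

DECLARE
  names namearray := namearray('Ali', 'Sara', 'Omar');
BEGIN
  FOR i IN 1 .. names.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(names(i));
  END LOOP;
END;
/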
Associative Array
An index-by table is a set of key-value pairs. Each key is unique and is used to locate the corresponding value. The key can be
either an integer or a string.
TYPE type_name IS TABLE OF element_type [NOT NULL] INDEX BY subscript_type;
table_name type_name;
Associative Array – An Example
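A minimal sketch of an associative array keyed by a string (names and values assumed):

DECLARE
  TYPE salary_tab IS TABLE OF NUMBER INDEX BY VARCHAR2(20);
  salaries salary_tab;
  k VARCHAR2(20);
BEGIN
  salaries('SMITH') := 800;
  salaries('KING')  := 5000;
  k := salaries.FIRST;
  WHILE k IS NOT NULL LOOP
    DBMS_OUTPUT.PUT_LINE(k || ' -> ' || salaries(k));
    k := salaries.NEXT(k);
  END LOOP;
END;
/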
Instance Structure
When an instance is started, Oracle Database allocates a memory area called the system global area (SGA) and starts one or more
background processes. The SGA serves various purposes, including the following:
• Maintaining internal data structures that are accessed by many processes and threads concurrently
• Caching data blocks read from disk
• Buffering redo data before writing it to the online redo log files
• Storing SQL execution plans
The SGA is shared by the Oracle processes, which include server processes and background processes, running on a single computer. The way in which Oracle processes are associated with the SGA varies according to the operating system.
A database instance includes background processes. Server processes, and the process memory allocated in these processes, also
exist in the instance. The instance continues to function when server processes terminate.
Database Instance Framework
Lesson 148: Concepts of Oracle Instance-2
Database Instance Configuration
Database Single & RAC Instance Configuration
Database Instance Configuration
You can run Oracle Database in either of the following mutually exclusive configurations:
• Single-instance configuration: a one-to-one relationship exists between the database and an instance.
• Oracle Real Application Clusters (Oracle RAC) configuration: a one-to-many relationship exists between the database and instances.
Database files include control files, data files, temp files, etc.; non-database files include log files.
Files involve both physical and logical storage concepts.
Database Single & RAC Instance Configuration
This query shows the time that the current instance was started:
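A hedged sketch of such a query against V$INSTANCE:

SELECT TO_CHAR(startup_time, 'DD-MON-YYYY HH24:MI:SS') AS started
FROM v$instance;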
At this stage, no database is associated with the instance. Scenarios that require a NOMOUNT state include database creation and certain backup and recovery operations.
Instance Startup
• Log in with user sys / as sysdba
• After shutting down the database, run the following:
In this state, database creation and certain backup & recovery operations are performed.
Lesson 151: Instance Startup & Shutdown-2
• Connection with Administrator Privileges
• Instance Started to perform steps
• How Database is Mounted?
• How Database is opened?
• Read-only Opened Database
• Progress from Shutdown & Startup OPEN
Connection with Administrator Privileges
• Database startup and shutdown are powerful administrative options that are restricted to users who connect to Oracle Database with
administrator privileges.
• Normal users do not have control over the current status of an Oracle database.
• Depending on the operating system, one of the following conditions establishes administrator privileges for a user:
• The operating system privileges of the user enable him or her to connect using administrator privileges.
• The user is granted the SYSDBA or SYSOPER system privileges and the database uses password files to authenticate database
administrators over the network.
• When you connect with the SYSDBA system privilege, you are in the schema owned by SYS. When you connect as SYSOPER,
you are in the public schema.
Instance Started to perform steps
• Searches for a server parameter file in a platform-specific default location and, if not found, for a text initialization parameter file (specifying STARTUP with the SPFILE or PFILE parameters overrides the default behavior)
• Reads the parameter file to determine the values of initialization parameters
• Allocates the SGA based on the initialization parameter settings
• Starts the Oracle background processes
• Opens the alert log and trace files and writes all explicit parameter settings to the alert log in valid parameter syntax
How Database is Mounted?
• The instance mounts a database to associate the database with this instance. To mount the database, the instance obtains the names
of the database control files specified in the CONTROL_FILES initialization parameter and opens the files.
• Oracle Database reads the control files to find the names of the data files and the online redo log files that it will attempt to access
when opening the database.
• In a mounted database, the database is closed and accessible only to database administrators.
• Administrators can keep the database closed while completing specific maintenance operations. However, the database is not
available for normal operations
How Database is opened?
• Opening a mounted database makes it available for normal database operations
• Opens the online data files in tablespaces other than undo tablespaces
• Acquires an undo tablespace
• Opens the online redo log files
• By default, the database opens in read/write mode.
Read-only Opened Database
• Data files can be taken offline and online. However, you cannot take permanent tablespaces offline.
• Offline data files and tablespaces can be recovered.
• The control file remains available
• Temporary tablespaces created with the CREATE TEMPORARY TABLESPACE statement are read/write.
• Writes to operating system audit trails, trace files, and alert logs can continue.
Progress from Shutdown & Startup OPEN
Lesson 152: Instance Startup & Shutdown-3
• Database with Shutdown Instance
• Progress from OPEN to Shutdown
• Steps to perform Database Shutdown
• Shutdown ABORT
• Shutdown IMMEDIATE
• Shutdown TRANSACTIONAL
• Shutdown NORMAL
• Shutdown Modes Summary
Database with Shutdown Instance
• In a typical use case, you manually shut down the database, making it unavailable for users while you perform maintenance or
other administrative tasks.
• You can use the SQL*Plus SHUTDOWN command or Enterprise Manager to perform these steps.
Progress from OPEN to Shutdown
Static parameters include DB_BLOCK_SIZE, DB_NAME, and COMPATIBLE. Dynamic parameters are grouped into session-
level parameters, which affect only the current user session, and system-level parameters, which affect the database and all
sessions.
For example, MEMORY_TARGET is a system-level parameter and NLS_DATE_FORMAT is a session-level parameter.
-- scott, the owner of the EMP table, grants SELECT on it:
GRANT SELECT ON emp TO jef;
Object Privileges using Oracle EM
Lesson 169: System Privileges Commands
• System Privileges
• System Privileges with ADMIN OPTION
• System Privileges - Examples
• System Privileges SYSOPER & SYSDBA
• System Privileges using Oracle EM
System Privileges
• More than 100 distinct system privileges
• ANY keyword in privileges signifies that users have the privilege in any schema
• GRANT command adds a privilege to a user or a group of users
• REVOKE command deletes the privileges
System Privileges with ADMIN OPTION
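A minimal sketch, with user and privilege names assumed:

-- Grant system privileges; ADMIN OPTION lets the grantee grant them onward:
GRANT CREATE SESSION, CREATE TABLE TO u1 WITH ADMIN OPTION;

-- u1 can now grant the same privilege to others:
GRANT CREATE SESSION TO u2;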
Alter Role – EM
Lesson 178: Default Roles
• Default Roles
• Assigning & Disable Roles - Sample SQLs
• Establishing Default Roles
Assigning & Disable Roles – Sample SQLs
• GRANT oe_clerk TO scott;
• GRANT hr_clerk TO hr_manager;
• GRANT hr_manager TO scott WITH ADMIN OPTION;
• Disable role during session of user u1
SQL> SET ROLE NONE;
Establishing Default Roles
where (in the REVOKE syntax):
• role: Is the role to be revoked or the role from which roles are revoked.
• user: is the user from which the system privileges or roles are revoked.
• PUBLIC: Revokes the privilege or role from all users
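A minimal sketch of establishing default roles and revoking a role (names assumed):

ALTER USER scott DEFAULT ROLE hr_clerk;
ALTER USER scott DEFAULT ROLE ALL EXCEPT oe_clerk;

REVOKE hr_clerk FROM scott;
REVOKE oe_clerk FROM PUBLIC;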
Revoke Roles using OEM
Using Oracle Enterprise Manager to Revoke a Role from a User From the OEM Console:
• Navigate to Security>Users.
• Highlight the user for whom a role is to be revoked.
• Navigate to Roles Granted.
• Select the role to be revoked.
• Select Revoke from the right-mouse menu.
• Select Yes to confirm revocation.
Removing Roles
• To remove a role from the database, use the following syntax:
• SQL> DROP ROLE role;
• When you drop a role, the Oracle server revokes it from all users and roles to whom it has been granted and removes it from the
database.
• In order to drop the role, you must have been granted the role with ADMIN OPTION or have the DROP ANY ROLE system
privilege.
Lesson 182: Guidelines for Creating Roles
• Guidelines
• Guidelines for an Example
• Guidelines for Assigning Roles
• Guidelines for using Password and Default Roles
• Roles with Password
Guidelines
• Because a role includes the privileges that are necessary to perform a task, the role name is usually an application task or a job title.
• Our example uses both application tasks and job titles for role names.
Guidelines for an Example
Use the following steps to create, assign, and grant users roles:
• Create a role for each application task. The name of the application role corresponds to a task in the application, such as
PAYROLL.
• Assign the privileges necessary to perform the task to the application role.
• Create a role for each type of user. The name of the user role corresponds to a job title, such as PAY_CLERK.
• Grant application roles to user roles.
• Grant user roles to users.
• If a modification to the application requires that the new privileges are needed to perform the payroll task, then the DBA only
needs to assign the new privileges to the PAYROLL application role. All of the users that are currently performing this task will
receive the new privileges.
Guidelines for Assigning Roles
Lesson 187: SQL*Loader
• Introduction
• Files used by SQL*Loader
• Using SQL*Loader
Introduction
SQL*Loader loads data from external files into tables in an Oracle database. SQL*Loader has the following features:
• SQL*Loader can use one or more input files.
• Several input records can be combined into one logical record for loading.
• Input fields can be of fixed or variable lengths.
• Input data can be in any format: character, binary, packed decimal, date, and zoned decimal.
• Data can be loaded from different types of media such as disk, tape, or named pipes.
• Data can be loaded into several tables in one run.
• Options are available to replace or to append to existing data in the tables.
• SQL functions can be applied on the input data before the row is stored in the database.
• Column values can be auto generated based on rules. For example, a sequential key value can be generated and stored in a column.
• Data can be loaded directly into the table, bypassing the database buffer cache.
Files used by SQL*Loader
SQL*Loader uses the following files:
• Loader control file: Specifies the input format, output tables, and optional conditions that can be used to load only part of the
records found in the input data files
• Input data files: Contain the data in the format defined in the control file
• Parameter file: Is an optional file that can be used to define the command line parameters for the load
• Log file: Is created by SQL*Loader and contains a record of the load
• Bad file: Is used by the utility to write the records that are rejected during the load.
• Discard file: Is a file that can be created, if necessary, to store all records that did not satisfy the selection criteria
Using SQL*Loader
When you invoke SQL*Loader, you can specify parameters that establish session characteristics. Parameters can be entered in any
order, optionally separated by commas.
You can specify values for parameters, or in some cases, you can accept the default without entering a value.
If you invoke SQL*Loader without specifying any parameters, SQL*Loader displays a Help screen that lists the available
parameters and their default values.
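A minimal sketch of an invocation, with file names assumed:

sqlldr scott/tiger control=emp.ctl data=emp.dat log=emp.log bad=emp.bad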
Using SQL*Loader
Lesson 188: SQL*Loader Control Files
• Introduction
• Three Sections of Control Files
• Sample Control File
• Sample Control File Explanation
Introduction
The loader control file tells SQL*Loader:
• Where to find the load data
• The data format
• Configuration details:
• Memory management
• Record rejection
• Interrupted load handling details
• How to manipulate the data
The SQL*Loader control file is a text file that contains DDL instructions. DDL is used to control the following aspects of a
SQL*Loader session:
• Where SQL*Loader finds the data to load
• How SQL*Loader expects that data to be formatted
• How SQL*Loader configures (memory management, rejecting records, interrupted load handling, and so on) as it loads the data
• How SQL*Loader manipulates the data being loaded
Three Sections of Control Files
• The first section contains session-wide information, for example:
- Global options such as bind size, rows, records to skip, and so on
- INFILE clauses to specify where the input data is located
- How data is to be loaded
• The second section consists of one or more INTO TABLE blocks. Each of these blocks contain information about the table into
which the data is to be loaded, such as the table name and the columns of the table.
• The third section is optional and, if present, contains input data.
Sample Control File
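A minimal sketch of a control file, assuming a comma-delimited data file loading the DEPT table:

LOAD DATA
INFILE 'dept.dat'
APPEND
INTO TABLE dept
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(deptno, dname, loc)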
Logging Changes
Conventional path loading generates redo entries just as any DML statement. When using a Direct path load, redo entries are not
generated if:
• The database is in NOARCHIVELOG mode
• The database is in ARCHIVELOG mode, but logging is disabled. Logging can be disabled by setting the NOLOGGING attribute for the table or by using the UNRECOVERABLE clause in the control file
Enforcing Constraints
During Direct path loads, the constraints are handled as follows:
• NOT NULL constraints are checked when arrays are built.
• FK and CHECK constraints are disabled
• PK and unique constraints are checked during and at the end of the run and may be disabled
Parallel Direct Paths
• Multiple SQL*Loader sessions improve the performance of a Direct path load. Three models of concurrency can be used to minimize the time required for data loading:
• Parallel conventional path loads: used when triggers or integrity constraints pose a problem
• Intersegment concurrency with the Direct path load method: loading of different objects, such as different tables, or partitions of the same table
• Intrasegment concurrency with the Direct path load method: multiple Direct path load sessions run concurrently into the same table or partition
Enforcing Constraints
Lesson 192: Data Conversion
• Introduction
• SQL*Loader Rejects
• Discarded or Rejected Records
Introduction
During a conventional path load, data fields in the data file are converted into columns in the database in two steps:
• The field specifications in the control file are used to interpret the format of the data file and convert it to a SQL INSERT statement
using that data.
• The Oracle database server accepts the data and executes the INSERT statement to store the data in the database.
SQL*Loader Rejects
• A record is rejected when the input format is invalid, e.g., when a delimited field exceeds its maximum length.
• All bad records are saved in the bad file.
• Oracle then determines whether each row is valid or not.
• If a row or record is rejected (e.g., it violates a unique or NOT NULL constraint), SQL*Loader puts it in the bad file.
Discarded or Rejected Records
Lesson 193: Log Files & Steps to Adopt
Log File Contents
• Header Information
• Global Information
• Table Information
• Data File Information
• Table Load Information
• Summary Statistics
• Additional statistics for Direct path loads and multithreading
Header Information
• Date of the run
• Software version number
Global Information
• Name of all input/ output file
• Echo of command-line arguments
• Continuation character specification
Table Information
• Table Name
• Load Conditions
• INSERT, APPEND or REPLACE specifications
• Columns, length, data type, and delimiter
Data File Information
• SQL*Loader and Oracle data record errors
Table Load Information
• Number of rows loaded, rejected, discarded, Null fields discarded
Summary Statistics
• Amount of space used for bind array
• Cumulative load statistics for data files, skipped, read and rejected records
Additional statistics for Direct path loads and multithreading:
• Direct path load of a partitioned table reports per-partition statistics
• Conventional-path load cannot report per-partition statistics
Lesson 194: Sample Example to Load Data-1
Example’s Contents
• Data Loading
• Data Migration – From File to Database
• Example of Data Loading
Data Loading
• Loading data from files
• Files can be comma-delimited
• Files can be in XML format
• Binary files (Oracle to Oracle)
Data Migration – Files to Database
[Video demonstration]
Lesson 196: Sample Example to Load Data-3
Example’s Contents
• Executing Data Loading for Material (for Slide#195)
• Data Migration - Files to Database
[Video demonstration]
Lesson 197: Sample Example to Load Data-4
Data Loading Tasks in Schema
• Loading data from TEMP to actual Material Table
• Tasks to load data from TEMP to actual Material Table
Loading data from TEMP to actual Material Table
• You have data in TMP_Material and you want to move or copy data into Material Table
Tasks to load data from TEMP to actual Material Table
• Use INSERT INTO … SELECT command
• Understand an error
• See where data discrepancies are found
• Run SQL UPDATE statements to fix discrepancies
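A minimal sketch of these tasks (TMP_Material and Material column lists assumed):

INSERT INTO material (mat_id, mat_name, qty)
SELECT mat_id, mat_name, qty
FROM tmp_material;

-- Fix discrepancies found during the load, for example:
UPDATE material SET qty = 0 WHERE qty IS NULL;
COMMIT;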
[Video demonstration]
Lesson 199: Sample Example to Load Data-6
Data Loading Tasks for Schema to Schema/ Files using Text Files
• Preparing Spool File of Schema’s Table using SQL
• Data Load from Oracle Schema into Files
Preparing Spool File of Schema’s Table using SQL
• In the Scott schema:
SPOOL c:\test\emp.txt
SELECT … FROM emp;
SPOOL OFF
• The spool file contains comma-delimited rows
• The file can be used to load data into any DBMS
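A minimal sketch of producing comma-delimited output with SQL*Plus (column list assumed):

SET HEADING OFF FEEDBACK OFF PAGESIZE 0
SPOOL c:\test\emp.txt
SELECT empno || ',' || ename || ',' || sal FROM emp;
SPOOL OFF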
Data Load from Oracle Schema into Files
[Video demonstration]
Lesson 200: Sample Example to Load Data-7
Data Preparation for Tasks using XML data
• Preparing Spool File of Schema’s Table using XML SQL
• Sample SQL using XML Elements
• Data Load from Oracle Schema into Files
• Output Results of Sample XML SQLs
Preparing Spool File of Schema’s Table using XML SQL
• In Scott schema
• Spool c:\test\emp.txt
• XML: Select … from EMP;
• Spool off
• The spool file contains XML-tagged rows
• File can be used to load data into any DBMS
Sample SQL using XML Elements
SELECT XMLELEMENT(NAME "emp",
         XMLELEMENT(NAME "eno", empno),
         XMLELEMENT(NAME "names", ename)) AS xml_output
FROM emp
/

CREATE TABLE xml1 AS
SELECT XMLELEMENT("EMP",
         XMLELEMENT("empno", e.empno),
         XMLELEMENT("ename", e.ename),
         XMLELEMENT("job", e.job),
         XMLELEMENT("mgr", e.mgr),
         XMLELEMENT("mgr1", e.mgr)) AS xml_desc
FROM emp e
WHERE ROWNUM <= 1;
Data Load from Oracle Schema into Files
[Video demonstration]
Lesson 201: Backup & Recovery Strategies
• Backup & Recovery Concepts
• Physical & Logical Backup
• Backup and recovery is the set of concepts, procedures, and strategies involved in protecting the database against data loss caused by media failure or user errors.
• In general, the purpose of a backup and recovery strategy is to protect the database against data loss and reconstruct lost data.
• A backup is a copy of data. A backup can include crucial parts of the database such as data files, the server parameter file, and
control file. A sample backup and recovery scenario is a failed disk drive that causes the loss of a data file.
• If a backup of the lost file exists, then you can restore and recover it. Media recovery refers to the operations involved in restoring
data to its state before the loss occurred.
[Video demonstration]
Lesson 204: Logical Backup of a Schema/ User
• Logical Backup of Schema
• Export utility for Logical Backup
• Benefits of Export Utility for Logical Backup
Logical Backup of Schema
• A logical backup copies the data in the database but does not record the location of the data.
• It can copy the data of schema, database and tables
Export utility for Logical Backup
• The export utilities (exp and the Data Pump export, expdp) offered by Oracle can be used to take a logical backup.
• The export utility copies the data and database definitions and saves them in a binary OS file in Oracle internal format.
Benefits of Export Utility for Logical Backup
• Data block corruption can be detected while exporting data, and the export procedure will fail.
• If a user drops a table, it can be recovered with the import command.
• What data and definitions/structures can be exported?
• Portable backup: it can be imported into the same or a higher version of the Oracle database.
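A minimal sketch of a schema-level export and its import (file names assumed):

exp scott/tiger OWNER=scott FILE=scott.dmp LOG=scott_exp.log
imp scott/tiger FILE=scott.dmp FULL=y LOG=scott_imp.log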
[Video demonstration]
Lesson 206: Cold Physical Backup
• Cold Physical Backup
• Cold Physical Backup Files
• Performing Cold Backup
Cold Physical Backup
• Offline or cold backups are performed when the database is completely shut down.
• The disadvantage of an offline backup is that it cannot be done if the database needs to be run 24/7.
• Additionally, you can only recover the database up to the point when the last backup was made unless the database is running in
ARCHIVELOG mode.
Cold Physical Backup Files
An offline backup consists of physically copying the following files:
• Data files
• Control files
• Init.ora and config.ora files
Warning: If you make a cold backup in ARCHIVELOG mode, do not back up the redo log files.
Performing Cold Backup
• Before performing a cold backup, you need to know the location of the files that need to be backed up.
• Because the database structure changes day to day as more files get added or moved between directories, it is always better to
query the database to get the physical structure of database before making a cold backup.
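A hedged sketch of queries that list the files to copy:

SELECT name FROM v$datafile;
SELECT name FROM v$controlfile;
SELECT member FROM v$logfile;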
[Video demonstration]
Lesson 208: Hot Backup Concepts
• Hot Backup Concepts
• Startup Archive Mode
Hot Backup Concepts
• A hot backup is taken when the database needs to run all the time.
• It is an online backup.
• All files of the database are copied and there may be changes to the database during the copy.
• An online backup or hot backup is also referred to as ARCHIVE LOG backup. An online backup can only be done when the
database is running in ARCHIVELOG mode and the database is open.
• When the database is running in ARCHIVELOG mode, the archiver (ARCH) background process will make a copy of the online
redo log file to archive backup location.
An online backup consists of backing up the following files. But, because the database is open while performing a backup, you
have to follow the procedure to backup the following files:
• Data files of each tablespace
• Archived redo log files
• Control file
• Init.ora and config.ora files
Startup Archive Mode
Shut down the database, then start it up to switch the database from NOARCHIVELOG to ARCHIVELOG mode.
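A minimal sketch of the switch, run as SYSDBA:

SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST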
[Video demonstration]
Lesson 209: Hot Backup Demo
Hot Backup Demo
• Hot Backup Demo
• Advise for Hot Backup
Hot Backup Demo
The following steps are required for a hot backup:
• Put the tablespace in backup mode and copy the data files.
• Back up the control and Init.ora files.
• Stop archiving.
• Back up the archive files.
• Restart the archive process.
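A minimal sketch of the first two steps for the USERS tablespace (backup paths assumed):

ALTER TABLESPACE users BEGIN BACKUP;
-- copy the USERS data files with OS commands here
ALTER TABLESPACE users END BACKUP;

ALTER DATABASE BACKUP CONTROLFILE TO 'E:\backup\control.bkp';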
Advise for Hot Backup
An online backup of a database will keep the database open and functional for 24/7 operations. It is advised to schedule online
backups when there is the least user activity on the database, because backing up the database is very I/O intensive and users can
see slow response during the backup period. Additionally, if the user activity is very high, the archive destination might fill up very
fast.
[Video demonstration]
Demo outline (data files under E:\app\Haider\oradata\orcl):
SELECT TABLESPACE_NAME, FILE_NAME
FROM SYS.DBA_DATA_FILES
WHERE TABLESPACE_NAME = 'USERS';
• STARTUP NOMOUNT
• Log in with: rman target /
• RESTORE DATABASE
• RECOVER DATABASE
• RESETLOGS & recovered files
[Video demonstration]
Lesson 211: Partial Restore Logical Backup
Partial Restore Logical Backup
• Partial Restore Logical Backup
• Steps Partial Restore Logical Backup
• Partial Restore Tablespace
• Restore Archive Logs
Partial Restore Logical Backup
• A partial database backup includes a subset of the database: individual tablespaces or data files.
• A tablespace backup is a backup of all the data files in a tablespace or in multiple tablespaces.
• Tablespace backups, whether consistent or inconsistent, are valid only if the database is operating in ARCHIVELOG mode
because redo is required to make the restored tablespace consistent with the rest of the database.
Steps of Partial Restore Logical Backup
• Before beginning a backup of a tablespace, identify all of the datafiles in the tablespace with the DBA_DATA_FILES data
dictionary view:
SELECT TABLESPACE_NAME, FILE_NAME FROM SYS.DBA_DATA_FILES
WHERE TABLESPACE_NAME = 'USERS';
• Mark the beginning of the online tablespace backup (see the sketch after this list):
• Back up the online datafiles of the online tablespace with operating system commands
• After backing up the datafiles of the online tablespace, run the SQL statement ALTER TABLESPACE with the END BACKUP
option
• Archive the unarchived redo logs so that the redo required to recover the tablespace backup is archived
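A minimal sketch of the begin/end backup bracket for the USERS tablespace:

ALTER TABLESPACE users BEGIN BACKUP;
-- copy the USERS data files with OS commands here
ALTER TABLESPACE users END BACKUP;

-- archive the current redo so the backup is recoverable:
ALTER SYSTEM ARCHIVE LOG CURRENT;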
Partial Restore Tablespace
RMAN> restore tablespace users;
DBPITR Commands
RMAN> RECOVER DATABASE TO SCN 1500;
RMAN> RESET DATABASE TO INCARNATION 1;
RMAN> RECOVER DATABASE TO SCN 1500;
[Video demonstration]
Export Commands
exp PARFILE=filename
exp username/password PARFILE=filename FULL=y FILE=dba.imp GRANTS=y INDEXES=y CONSISTENT=y
exp username/password PARFILE=params.dat INDEXES=n
exp \'username/password AS SYSDBA\'
Import Utility
• The Import utility reads the object definitions and table data from an Export dump file. It inserts the data objects into an Oracle
database.
Import Ordering
The export file contains objects in the following order
1. Type definitions
2. Table definitions
3. Table data
4. Table indexes
5. Integrity constraints, views, procedures, and triggers
6. Bitmap, functional, and domain indexes
Import Commands
imp username/password PARAMETER=value
or
imp username/password PARAMETER=(value1,value2,...,valuen)
imp PARFILE=filename
imp username/password PARFILE=filename FULL=y FILE=dbay INDEXES=y CONSISTENT=y
https://docs.oracle.com/cd/E16340_01/portal.1111/e10239/cg_imex.htm
Export and import functions only within the same release of Oracle Portal and the same patch release, for example, release 10.1.4
to release 10.1.4 or release 11.1.1 to release 11.1.1. You cannot export and import between two different releases, such as release
10.1.2 to release 10.1.4 or release 10.1.4 to release 11.1.1.
For successful migration of objects, the version of the portal repository should be the same in the target and the source. Any
difference in the versions of the middle tiers does not impact migration.
When exporting or importing large data sets, check that there is sufficient space in the TEMP tablespace. This ensures that the
export or import process does not fail due to insufficient memory.
For exporting large page groups from the command line, use the opeasst.csh script. See Section 11.4.1.1.3, "Exporting Large Page
Groups from the Command Line" for more information.
For importing large page groups from the command line, use the import script with the -
automatic_merge option. See Section 11.7.3, "Importing the Transport Set Tables to the Target System" for more information.
Pre-requisite Portal Export & Import
• Privileges for Exporting and Importing Content
• Portal schema name.
• Portal schema password.
• Portal connect string information.
• Portal user name.
• Portal user password.
• Company name (used only for hosted portal installations). In most cases, this should be left blank.
Oracle Export & Import with Character Sets
• The Oracle Database exp utility always exports user data, including Unicode data, in the character sets of the export server. The
character sets are specified when the database is created.
• The Oracle Database imp utility automatically converts the data to the character sets of the import server.
Command-line Access
• If you are using the Oracle Portal Export and Import command-line scripts to move the transport sets from one system to another,
you must have command-line access to run the shell or command utilities generated by the export import process. The command-
line utilities, in turn, access the Oracle Database exp and imp utilities, and the Oracle Portal instance.
Oracle Export & Import with Character Sets
• Some 8-bit characters can be lost (that is, converted to 7-bit equivalents) when you import an 8-bit character set export file. This
occurs if the client system has a native 7-bit character set or if the NLS_LANG operating system environment variable is set to a 7-
bit character set. Most often, you notice that accented characters lose their accent marks.
• Both the Oracle Database exp and imp utilities alert you of any required character set conversion before exporting or importing the
data.
https://docs.oracle.com/cd/E16340_01/portal.1111/e10239/cg_imex.htm
Role of Character Sets in Export & Import
Lesson 219: General Overview of Database Manager
General Overview of Oracle Enterprise Manager (OEM)
• Starting OEM
• Oracle Database Control Login
• OEM Home Page
• OEM - Grant Management Privileges
• OEM - Users Credentials
Starting OEM
At the cmd (DOS) prompt, you may run:
set oracle_sid=orcl
emctl stop dbconsole
emctl start dbconsole
In the browser, you can open the URL https://localhost:1158/em
Oracle Database Control Login
https://www.oracle.com/webfolder/technetwork/tutorials/obe/db/11g/r2/2day_dba/gettingstarted/gettingstarted.h tm
What is a Transaction?
SQL is initiated at the user level (client) with a start and an end, and it executes on the server (central database) at a certain distance.
A transaction is a logical, atomic unit of work (block) that contains one or more SQL statements.
Either transaction is done (committed) or not done (rolled back). No partial execution of transaction is allowed.
Processing of Transactions
A transaction generally reflects activities in the real-world system:
• Depositing some amount in an account will be a transaction.
• Transferring an amount from one account to another: if it succeeds, the DB is in a consistent state; otherwise it is inconsistent.
• One transaction can be calculating a GPA in our exam system.
Processing of Transactions in Oracle
In Oracle Database, each user must see a consistent view of the data, including visible changes made by a user's own transactions
and committed transactions of other users.
• Single-user vs. multi-user systems
A DBMS is single-user if at most one user can use the system at a time. A DBMS is multi-user if many users can use the system concurrently.
• Problem
How to make the simultaneous interactions of multiple users with the database safe, consistent, correct, and efficient?
Lesson 221: Defining Transaction
• What is a Transaction?
• Transaction with SQL
• Converting SQL into Transaction
What is a Transaction?
A transaction T is a logical unit of database processing that includes one or more database access operations. It can be:
• Embedded within an application program
• Specified interactively (e.g., via SQL)
Transaction boundaries: begin/end transaction.
Types of transactions: read transactions and write transactions.
Transaction with SQL
For example, the following SQL update will change the salary of some employee:
UPDATE emp
SET sal = 850
WHERE empno = 7369;
Is it a statement or a transaction?
Converting SQL into Transaction
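A minimal sketch of wrapping the statement in an explicit transaction:

UPDATE emp
SET sal = 850
WHERE empno = 7369;

COMMIT;      -- make the change permanent
-- ROLLBACK; -- or undo it instead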
[Video demonstration]
Lesson 222:Role of Transaction
A transaction travels over the network, so there is a chance it may be lost; transaction logs are therefore required for tracing.
Lesson 223: Example of Transactions
• Usage of ATM - An Example
• Funds Transfer Concept
• Writing a Transaction
Usage of ATM – An Example
An ATM is used to draw or transfer money and to collect currency from the dispenser.
Concurrency Problems
Several problems can occur when concurrent transactions are run in an uncontrolled manner; such problems are known as concurrency problems.
• At time t1, transaction T1 reads the value of A, i.e., 100.
• At t2, T1 adds 20 to the value of A.
• At t3, T1 writes the value of A (120) to the database.
• At t4, transaction T2 reads the value of data item A, i.e., 120.
• At t5, T2 adds 30 to the value of data item A.
• At t6, T2 writes the value of A (150) to the database.
• At t7, transaction T1 fails due to a power failure and is rolled back according to the atomicity property of transactions (all or none).
So transaction T2 at time t4 holds a value that was never committed to the database. The value read by transaction T2 is known as a dirty read.
Lesson 230: Incorrect Summary Problem
Incorrect Summary Problem
• Incorrect Summary Problem
• Incorrect Summary Problem - An Example
It is also known as an inconsistent retrieval problem. A transaction T1 reads the value of a data item twice, and the data item is changed by another transaction T2 between the two read operations. Hence T1 accesses two different values for its two read operations of the same data item.
Incorrect Summary Problem – An Example
• At time t1, transaction T1 reads the value of A, i.e., 100.
• At t2, transaction T2 reads the value of A, i.e., 100.
• At t3, T2 adds 30 to the value of data item A.
• At t4, T2 writes the value of A (130) to the database.
Transaction T2 updates the value of A. Thus, when another read statement is performed by transaction T1, it accesses the new value of A, which was updated by T2. Such a conflict is known as an R-W (read-write) conflict.
Incorrect Summary Problem – An Example
Example-1 (Schedules A, B)
Creating Synonyms
Drop Synonyms
Drop Tables
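A minimal sketch of these operations (object names assumed):

CREATE SYNONYM emp_syn FOR scott.emp;
DROP SYNONYM emp_syn;
DROP TABLE emp_copy;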
Lesson 250: Revision of Course Contents
Revision of DBA Course
• Introduction to Database & Architecture
• Database Design (Development)
• DBA
• PL/SQL
• Project’s Preparations & Scripts
• Logical & Physical Storage
Introduction to Database & Architecture
• What is Database?
• What is DBMS?
• ANSI-SPARC 3-Level Architecture
• 2 & 3 Tiers Client Server Architecture
• Keys & Constraints
Database Design (Development)
• What is ER Model?
• ER Model Artifacts
• ER Mapping & Implementation
DBA
• DBA tasks & responsibilities
• Data Dictionary
• Oracle Installation & Configuration
• Logical & Physical Storage
• Backup & Recovery
• Database Tuning & Performance
• Support & Services
• Transaction Processing
PL/SQL
• Procedure programming using Blocks
• Programming exceptions
• Procedures, functions & packages
• Cursor and loops
• Advanced topics using arrays and dynamic SQL
Project’s Preparation and Scripts
Database Mapping or Implementation
Scripts for DDL
Scripts for logical tablespaces
Scripts for Security
Data Migration
Logical & Physical Storage
Introduction to Database Administration
Data, Information
From Data to Information
For Knowledge
Examples of Data
Someone can think these are lengths and others may think room numbers
Similarly, 1.5, 2.5, 31.5, … MAY represent lengths of some iron rods
Defining Data, Information & Database
Information
Data Processed to reveal its meaning
• Information is meaningful
• In today’s world, accurate, relevant and timely information is the key to good decision making
• Good decision making is key to survival in today’s competitive and global environment
Defining Data, Information & Database
[Diagram: Data → Information]
Defining Data, Information & Database
Database & DBMS
A database is a collection of stored operational data used by the application systems of some particular enterprise (C. J. Date).
[Diagram: a DBMS managing a central database of employee, order, inventory, and customer data.]
Introduction to DBMS & its environment for DBA
DBMS Language
Examples of DBMS
Oracle
IBM DB2
Ingress
Teradata
MS SQL Server
MS Access
MySQL etc.
Introduction to DBMS & its environment for DBA
Data Accessing using DBMS
A DBMS is a software system that is used to create, maintain, and provide controlled access to user databases.
[Diagram: Order Filing, Invoicing, and Payroll systems access a central database, containing employee, order, inventory, pricing, and customer data, through the DBMS and its subschemas.]
DBMS manages data resources like an operating system manages hardware resources.
Introduction to DBMS & its environment for DBA
Applications Programmer
Database Developer
Database Analyst
Teleprocessing
Form processing and report processing applications are built using VB, .NET, or PHP programming. Reports can be developed using the Crystal Reports tool.
Query processing can be managed by using the vendor's SQL tool or 3rd-party tools such as TOAD, SQL Developer, etc.
Introduction to DBMS Components & Architecture
• Teleprocessing
• File-Server
Teleprocessing
• Traditional architecture
• Query Processing
• Data Mining
END
• Which questions should we ask our data warehouse (OLAP)?
Chapter 4
DBMS Three Levels Architecture
Data Independence
Schema Objects
Database State
Introduction to DBMS Three Levels Architecture
DBMS Three Levels Architecture
External Level
The user's view of the database. Describes that part of the database that is relevant to a particular user.
Conceptual Level
The community view of the database. Describes what data is stored in the database and the relationships among the data.
Internal Level
The physical representation of the database on the computer. Describes how the data is stored in the database.
Introduction to DBMS Three Levels Architecture
ANSI-SPARC Three-Level Architecture
Introduction to DBMS Three Levels Architecture
Data Independence
Logical Data Independence
Schema Objects
Tables
Indexes
Constraints
Sequences
Views
Clusters
Triggers
Links
Introduction to DBMS Three Levels Architecture
Schema Objects
Network Administrator
Application Developers
DBA’s Tasks
DBA’s Responsibilities
Introduction to DBA Responsibilities & Tasks
Who is DBA?
A Database Administrator (DBA) is a person or a group of persons responsible for managing all the activities related to the database system.
The DBA job requires a high level of expertise from a person or group of persons.
It is rare that a single person can manage all the database system activities, so companies usually have a group of people who take care of the database system.
Introduction to DBA Responsibilities & Tasks
Network Administrator
Network administrator coordinates with the DBA for database connections and other issues such as storage, OS and hardware.
Some sites have one or more network administrators. A network administrator, for example, administers Oracle networking products, such as Oracle Net Services.
Introduction to DBA Responsibilities & Tasks
Application Developers
Designing and developing the database application
Designing the database structure for an application
Estimating storage requirements for an application
Specifying modifications of the database structure for an application
Introduction to DBA Responsibilities & Tasks
Application Developers
Relaying this information to a database administrator
Tuning the application during development
Establishing security measures for an application during development
Database Server Programming using Oracle PL/SQL
Introduction to DBA Responsibilities & Tasks
DBA's Tasks
Task 1: Evaluate the Database Server Hardware
Task 2: Install the Oracle Database Software
Task 3: Plan the Database
Task 4: Create and Open the Database
Task 5: Back Up the Database
Task 6: Enroll System Users
Task 7: Implement the Database Design
Introduction to DBA Responsibilities & Tasks
DBA's Tasks
Task 8: Back Up the Fully Functional Database
Task 9: Tune Database Performance
Task 10: Download and Install Patches
Task 11: Roll Out to Additional Hosts
Introduction to DBA Responsibilities & Tasks
DBA’s Responsibilities
Installing and upgrading the Oracle Database server and application tools
Allocating system storage and planning future storage requirements for the database system
Creating primary database storage structures (tablespaces) after application developers have designed an application
Introduction to DBA Responsibilities & Tasks
DBA’s Responsibilities
Creating primary objects (tables, views, indexes) once application developers have designed an application
Modifying the database structure, as necessary, from information given by application developers
Introduction to DBA Responsibilities & Tasks
DBA’s Responsibilities
Enrolling users and maintaining system security
Ensuring compliance with Oracle license agreements
Controlling and monitoring user access to the database
Monitoring and optimizing the performance of the database
Introduction to DBA Responsibilities & Tasks
DBA’s Responsibilities
Planning for backup and recovery of database information
Maintaining archived data on tape
Backing up and restoring the database
Contacting Oracle for technical support
END
Overview of Physical Database Design
Purposes
To meet the database designer's expectations for the database, the following are the two main purposes of physical database design for a DBA:
Managing Storage Structure for database or DBMS
Performance & Tuning
Overview of Physical Database Design
The selection attributes used by queries and transactions with time constraints become higher-priority candidates for primary access structure.
Overview of Physical Database Design
or set of attributes
END
Introduction to Database Tuning
Database Tuning
Chapter 7
Tuning & goals
Tuning indexes
Problems in tuning
Tuning queries
Introduction to Database Tuning
The process of continuing to revise/adjust the physical database design by monitoring resource utilization as well as internal DBMS processing to reveal bottlenecks such as contention for the same data or devices.
Introduction to Database Tuning
Tuning Indexes
Reasons for tuning indexes
Certain queries may take too long to run for lack of an index;
Certain indexes may be causing excessive overhead because the index is on an attribute that undergoes frequent changes
Introduction to Database Tuning
Tuning Indexes
Options for tuning indexes
Problems in Tuning
How to avoid excessive lock contention?
How to minimize overhead of logging and unnecessary dumping of data?
How to optimize buffer size and scheduling of processes?
How to allocate resources such as disks, RAM and processes for most efficient utilization?
Introduction to Database Tuning
Tuning Queries
Indications for tuning queries
A query issues too many disk accesses
The query plan shows that relevant indexes are not being used.
Introduction to Database Tuning
Tuning Queries
Typical instances for query tuning
Many query optimizers do not use indexes in the presence of arithmetic expressions, numerical comparisons of attributes of different sizes and precision, NULL comparisons, and sub-string comparisons.
Indexes are often not used for nested queries using IN;
Introduction to Database Tuning
Tuning Queries
Some DISTINCTs may be redundant and can be avoided without changing the result.
Unnecessary use of temporary result tables can be avoided by collapsing multiple queries into a single query unless the temporary relation is needed for some intermediate processing.
If multiple options for join condition are possible, choose one that uses a clustering index and avoid those that contain string comparisons.
Introduction to Database Tuning
Tuning Queries
7. The order of tables in the FROM clause may affect the join processing.
8. Some query optimizers perform worse on nested queries compared to their equivalent un-nested counterparts.
9. Many applications are based on views that define the data of interest to those applications. Sometimes these views become an overkill.
END
Introduction to Database Administration
Concepts of Keys
Super Keys
Example – 1
Example – 2
Concepts of Keys & Super Keys
Concepts of Keys
A key is a combination of one or more columns that is used to identify rows in a relation
Super Keys
A super key is a combination of columns that uniquely identifies any row within a relational database management system (RDBMS) table.
END
Introduction to Database Administration
Candidate Keys
For NADRA: Citizen (CNIC#, Fname, Lname, FatherName, DOB, OldCNIC#, PAddr, PCity, TAddr, TCity, TelNo, Mobile#)
END
Introduction to Database Administration
Exercises
Course (cid, cname, deptno)
Semester (sid, syear, startdate)
ClassAllocation (sid, cid, sec#, building#, room#)
Identify candidate keys in each of the above relations.
Candidate Keys with Examples
Primary Keys
Primary key
An Alternate Definition
This means that no subset of the primary key is sufficient to provide unique identification of tuples.
NULL values are not allowed in primary key attributes.
Primary Key
Primary Keys
Example-1:
STUDENT(StuID, FirstName, FamilyName, DOB, …)
Example-4:
BankBranch(Branch-Name, City, TotalAsset)
• There may be many branches in one city; finalize this relation with the possible constraints.
End
Introduction to Database Administration
Primary Keys
Serial# starts with 1 and is incremented by 1. A company has many branch codes (e.g., 005); invoice numbers such as 005384 and 007004 are issued from branches.
Primary Keys Examples
Primary key - Fixed Length
Primary key - Indexing
Introduction to Database Administration
Primary Keys
Roll# is issued by some university or college; its format (or an instance of it) makes it a meaningful column for a PK.
Introduction to Database Administration
Surrogate Keys
Definition
Examples
Exercises
Surrogate Keys
Surrogate Key Definition
A surrogate key is an artificial column added to a relation to serve as a primary key:
• DBMS supplied
• Short, numeric and never changes – an ideal primary key!
• Has artificial values that are meaningless to users
• Needs attributes?
End
Introduction to Database Administration
More Examples
Interesting example
Surrogate Keys Examples
More Examples
ATMTransaction
(Card#, Serial#, Amount, DrawDate)
More Examples
InsurancePaid
(Policy#, PaidYear, PaidDate, Amount)
Surrogate Keys
Solving example-1
Solving example-2
Surrogate Keys Examples
Solving Example-1
Let us choose an example: how will the staff of a hall store an event's details? Which columns are required?
Introduction to Database Administration
Surrogate Keys
Solving example-1
Solving example-2
Comparisons of Keys
Surrogate Keys Examples
Solving Example-1 (Saving blogs)
Surrogate Keys Examples
Solving Example-2
Let us assume a loan of Rs. 50,000. How do we decide which columns are required to keep track of the installments?
Surrogate Keys Examples
Comparisons of keys
Let us discuss
End
Introduction to Database Administration
Foreign Keys
Definition
Example-1
Example-2
Foreign Keys
Definition
A foreign key is an attribute that refers to a primary key of the same or a different relation to form a link (constraint) between the relations:
• The term refers to the fact that key values are foreign to the relation in which they appear as foreign key values
Example-2
CUSTOMER (CustId, Name, Address, TelNo, Email)
ORDERS (Invoice#, InvType, InvDate, CustomerId)
Does Invoice# always contain serial numbers?
End
Introduction to Database Administration
Relationship details
Recursive Relationship
Foreign Keys Examples
FK is a part of Composite keys
NOTE: PK column is underlined
ITEM (Item#, Item_Name, Department, Buyer)
ORDER_ITEM (OrderNumber, Item#, Quantity, Price, ExtendedPrice)
Integrity rules
Integrity Example
Foreign Keys Examples
Integrity Rules
• Entity integrity: no two rows with the same primary key value
• Referential integrity actions, for example:
• ON UPDATE CASCADE
• ON UPDATE RESTRICT
There could be other choices besides these, e.g., SET NULL.
Composite Keys
Definition
Basic examples
Other Examples
Composite Keys
Composite Key
LabTest has FKs (PatID, VisitSNO) referring to the corresponding composite key in PatientVisit.
Preparing datasets
Composite Keys Examples
Course Offering Example
Let us assume there are 1000 courses in a table or list.
Students want to register in courses.
Can they register by looking at 1000 courses? No.
What to do?
CrsOffer(SemID, CrsID, Sec, InstrName, B#, R#)
What to do for keys: a PK or a composite key?
CrsReg(SemID, Roll#, CrsID, Sec, TotMarks, Grade)
Composite Keys Examples
Preparing Dataset for Composite Keys
CrsOffer(SemID, CrsID, Sec, InstrName, Building#, Room#)
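A minimal sketch of implementing these composite keys (data types and key choices assumed):

CREATE TABLE crs_offer (
  sem_id     VARCHAR2(6),
  crs_id     VARCHAR2(8),
  sec        NUMBER(2),
  instr_name VARCHAR2(30),
  building#  NUMBER(3),
  room#      NUMBER(4),
  CONSTRAINT pk_crs_offer PRIMARY KEY (sem_id, crs_id, sec)
);

CREATE TABLE crs_reg (
  sem_id    VARCHAR2(6),
  roll#     VARCHAR2(10),
  crs_id    VARCHAR2(8),
  sec       NUMBER(2),
  tot_marks NUMBER(5,1),
  grade     VARCHAR2(2),
  CONSTRAINT pk_crs_reg PRIMARY KEY (sem_id, roll#, crs_id),
  CONSTRAINT fk_crs_reg FOREIGN KEY (sem_id, crs_id, sec)
    REFERENCES crs_offer (sem_id, crs_id, sec)
);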