A single query can be executed in many ways. Query optimization helps choose the most efficient plan by comparing different execution methods to find the one with the lowest cost.
Importance: The goal of query optimization is to reduce the system resources required to fulfill a query, and ultimately provide the user with the correct result set faster.
- First, it provides the user with faster results, making the application feel more responsive.
- Second, it allows the system to service more queries in the same amount of time, because each request takes less time than an unoptimized query.
- Third, it reduces wear on the hardware (e.g. disk drives) and allows the server to run more efficiently (e.g. lower power consumption, less memory usage).
Optimizing Queries
To optimize a query, we use equivalence rules to rewrite it into simpler, equivalent relational algebra expressions. Below are some of the ways to optimize queries:
1. Conjunctive selection operations can be written as a sequence of individual selections. This is called a sigma-cascade.
\sigma_{\theta_{1} \wedge \theta_{2}}(E) = \sigma_{\theta_{1}}(\sigma_{\theta_{2}}(E))
Explanation: Evaluating the conjunction \theta_{1} \wedge \theta_{2} in one pass can be expensive. Instead, first filter out the tuples satisfying condition \theta_{2} (inner selection) and then apply condition \theta_{1} (outer selection) to the smaller intermediate result. This leaves fewer tuples to process in the second pass. The rule extends to any number of conjoined conditions. Since a single condition is broken into a series of selections, it is called a "cascade".
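The sigma-cascade can be sketched in Python with a toy relation of dicts (the relation and the two conditions below are illustrative, not from the article):

```python
E = [
    {"id": 1, "salary": 900},
    {"id": 2, "salary": 1500},
    {"id": 3, "salary": 2500},
]

def select(pred, rel):
    # sigma: keep only the tuples that satisfy the predicate
    return [t for t in rel if pred(t)]

theta1 = lambda t: t["salary"] > 1000  # outer condition
theta2 = lambda t: t["id"] < 3         # inner condition

combined = select(lambda t: theta1(t) and theta2(t), E)  # sigma_{theta1 AND theta2}(E)
cascade = select(theta1, select(theta2, E))              # sigma_{theta1}(sigma_{theta2}(E))

print(combined == cascade)  # True
```

Both plans return the same tuples; the cascade simply gives the outer selection fewer tuples to scan.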
2. Selection is commutative.
\sigma_{\theta_{1}}(\sigma_{\theta_{2}}(E))=\sigma_{\theta_{2}}(\sigma_{\theta_{1}}(E))
Explanation: Selection is commutative: it does not matter whether we apply \sigma_{\theta_{1}} first or \sigma_{\theta_{2}} first. In practice, it is better to apply first the selection that yields fewer tuples, which saves work in the outer selection.
3. In a cascade of projections, only the outermost projection is required; the inner ones can be omitted. This is called a pi-cascade.
\pi_{L_{1}}(\pi_{L_{2}}(...(\pi_{L_{n}}(E))...)) = \pi_{L_{1}}(E)
Explanation: A cascade or series of projections is redundant, because in the end only the columns specified in the outermost projection survive in the result (the rule assumes each inner projection keeps at least the columns the outer ones need, i.e. L_{1} \subseteq L_{2} \subseteq ... \subseteq L_{n}). Hence all the projections collapse into just one, the outermost projection.
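A small Python sketch of the pi-cascade, with made-up columns (set semantics, so duplicates are dropped after projecting):

```python
E = [
    {"id": 1, "name": "a", "dept": "x", "salary": 900},
    {"id": 2, "name": "b", "dept": "y", "salary": 1500},
]

def project(cols, rel):
    # pi: keep only the listed columns, dropping duplicate rows (set semantics)
    seen, out = set(), []
    for t in rel:
        row = {c: t[c] for c in cols}
        key = tuple(sorted(row.items()))
        if key not in seen:
            seen.add(key)
            out.append(row)
    return out

cascaded = project(["name"], project(["name", "dept"], E))  # pi_name(pi_{name,dept}(E))
direct = project(["name"], E)                               # pi_name(E)

print(cascaded == direct)  # True
```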
4. Selections on Cartesian Products can be re-written as Theta Joins.
- Equivalence 1
\sigma_{\theta}(E_{1} \times E_{2}) = E_{1} \bowtie_{\theta} E_{2}
Explanation: The cross product operation is known to be very expensive. This is because it matches each tuple of E1 (total m tuples) with each tuple of E2 (total n tuples). This yields m*n entries. If we apply a selection operation after that, we would have to scan through m*n entries to find the suitable tuples which satisfy the condition \theta. Instead of doing all of this, it is more optimal to use the Theta Join, a join specifically designed to select only those entries in the cross product which satisfy the Theta condition, without evaluating the entire cross product first.
- Equivalence 2
\sigma_{\theta_{1}}(E_{1} \bowtie_{\theta_{2}} E_{2}) = E_{1} \bowtie_{\theta_{1} \wedge \theta_{2}} E_{2}
Explanation: A Theta Join sharply reduces the number of resulting tuples, so folding the conjunction of both conditions, \theta_{1} \wedge \theta_{2}, into the Theta Join itself leaves fewer tuples to scan. Applying \sigma_{\theta_{1}} outside the join, by contrast, forces an unnecessary extra pass over the join result.
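Equivalence 1 above can be sketched in Python. The naive plan materializes the m*n cross product and then filters; the theta join tests the condition while pairing. The relations and join condition are illustrative:

```python
E1 = [{"a": 1}, {"a": 2}, {"a": 3}]
E2 = [{"b": 2}, {"b": 3}]

theta = lambda t1, t2: t1["a"] == t2["b"]

# Naive plan: build the full 3 * 2 cross product, then select from it.
cross = [{**t1, **t2} for t1 in E1 for t2 in E2]
selected = [t for t in cross if t["a"] == t["b"]]

# Theta join: apply the condition while pairing, never keeping the full cross product.
joined = [{**t1, **t2} for t1 in E1 for t2 in E2 if theta(t1, t2)]

print(selected == joined)  # True
```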
5. Theta Joins are commutative.
E_{1} \bowtie_{\theta} E_{2} = E_{2} \bowtie_{\theta} E_{1}
Explanation: Theta Joins are commutative: the result is the same either way. However, query processing time depends to some extent on which table is used as the outer relation and which as the inner relation during the join (based on the available indexes and block layout).
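A Python sketch of join commutativity using a nested-loop join, with made-up data: swapping the operands changes the loop order (and hence the cost profile), but not the set of result tuples.

```python
E1 = [{"x": 1}, {"x": 2}]
E2 = [{"y": 2}, {"y": 3}]

def theta_join(left, right, theta):
    # Nested-loop theta join: `left` drives the outer loop, `right` the inner.
    return [{**l, **r} for l in left for r in right if theta(l, r)]

r12 = theta_join(E1, E2, lambda l, r: l["x"] == r["y"])
r21 = theta_join(E2, E1, lambda l, r: r["x"] == l["y"])

# Compare as sets of column/value pairs, since tuple order may differ.
as_set = lambda rel: {tuple(sorted(t.items())) for t in rel}
print(as_set(r12) == as_set(r21))  # True
```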
6. Join operations are associative.
- Natural Join:
(E_{1} \bowtie E_{2}) \bowtie E_{3} = E_{1} \bowtie (E_{2} \bowtie E_{3})
Explanation: Natural joins are commutative as well as associative, so it pays to join first the two tables whose join yields the fewest tuples, and then apply the remaining join.
- Theta Join
(E_{1} \bowtie_{\theta_{1}} E_{2}) \bowtie_{\theta_{2} \wedge \theta_{3}} E_{3} = E_{1} \bowtie_{\theta_{1} \wedge \theta_{3}} (E_{2} \bowtie_{\theta_{2}} E_{3})
Explanation: Theta Joins are associative in the above manner, provided \theta_{2} involves attributes from only E_{2} and E_{3}.
7. Selection operation can be distributed.
- Equivalence 1
\sigma_{\theta_{1} \wedge \theta_{2}}(E_{1} \bowtie_{\theta} E_{2}) = (\sigma_{\theta_{1}}(E_{1})) \bowtie_{\theta} (\sigma_{\theta_{2}}(E_{2}))
Explanation: Applying a selection after the Theta Join forces every tuple produced by the join to be scanned again. When \theta_{1} involves only attributes of E_{1} and \theta_{2} involves only attributes of E_{2}, each condition can be pushed down to its own relation, so the join operates on far fewer tuples.
- Equivalence 2
\sigma_{\theta_{0}}(E_{1}\bowtie_{\theta}E_{2})=(\sigma_{\theta_{0}}(E_{1}))\bowtie_{\theta}E_{2}
Explanation: When \theta_{0} involves only attributes of E_{1}, the selection can be applied to E_{1} before the join, drastically reducing the number of tuples to be joined.
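Rule 7 can be sketched in Python: selecting after the join versus pushing each condition down to the relation it refers to. The relations and conditions below are illustrative, not from the article:

```python
E1 = [{"id": 1, "dept": "Music"}, {"id": 2, "dept": "CSE"}, {"id": 3, "dept": "Music"}]
E2 = [{"id": 1, "year": 2009}, {"id": 2, "year": 2010}, {"id": 3, "year": 2009}]

# Simple equi-join on "id".
join = lambda a, b: [{**t1, **t2} for t1 in a for t2 in b if t1["id"] == t2["id"]]

# Late selection: join everything first, then filter the join result.
late = [t for t in join(E1, E2) if t["dept"] == "Music" and t["year"] == 2009]

# Early selection: push each condition to its own relation, then join smaller inputs.
early = join(
    [t for t in E1 if t["dept"] == "Music"],
    [t for t in E2 if t["year"] == 2009],
)

print(late == early)  # True
```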
8. Projection distributes over the Theta Join.
- Equivalence 1
\pi_{L_{1}\cup L_{2}}(E_{1}\bowtie_{\theta}E_{2})=(\pi_{L_{1}}(E_{1}))\bowtie_{\theta}(\pi_{L_{2}}(E_{2}))
Explanation: The idea discussed for selection applies to projection as well. If L_{1} involves columns of only E_{1}, L_{2} involves columns of only E_{2}, and the join condition \theta uses only attributes in L_{1} \cup L_{2}, then it is better to apply the projections to both tables before joining. This leaves fewer columns on either side, making the join cheaper.
- Equivalence 2
\pi_{L_{1}\cup L_{2}}(E_{1}\bowtie_{\theta}E_{2})=\pi_{L_{1}\cup L_{2}}((\pi_{L_{1}\cup L_{3}}(E_{1}))\bowtie_{\theta}(\pi_{L_{2}\cup L_{3}}(E_{2})))
Explanation: If the join condition \theta also uses attributes outside L_{1} \cup L_{2}, collect those extra attributes into a set L_{3}. Project E_{1} onto L_{1} \cup L_{3} and E_{2} onto L_{2} \cup L_{3}, perform the join, and then apply the outer projection \pi_{L_{1} \cup L_{2}} to discard the L_{3} attributes. Carrying L_{3} through is what enables the join to be evaluated.
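Rule 8 in a Python sketch with illustrative tables: each side is projected early, but the join attribute "id" (playing the role of L_{3}) is kept on both sides until the final projection discards it:

```python
E1 = [{"id": 1, "name": "a", "extra1": 10}, {"id": 2, "name": "b", "extra1": 20}]
E2 = [{"id": 1, "title": "t1", "extra2": 30}, {"id": 2, "title": "t2", "extra2": 40}]

def project(cols, rel):
    # pi: keep only the listed columns
    return [{c: t[c] for c in cols} for t in rel]

join = lambda a, b: [{**t1, **t2} for t1 in a for t2 in b if t1["id"] == t2["id"]]

# Left side of the equivalence: join first, project afterwards.
left = project(["name", "title"], join(E1, E2))

# Right side: project early, carrying "id" (the L3 attribute) through the join,
# then project it away at the end.
right = project(["name", "title"],
                join(project(["name", "id"], E1), project(["title", "id"], E2)))

print(left == right)  # True
```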
9. Union and Intersection are commutative.
E_{1}\ \cup E_{2}\ =\ E_{2}\ \cup\ E_{1}
E_{1}\ \cap E_{2}\ =\ E_{2}\ \cap\ E_{1}
Explanation: Union and intersection are both commutative: the operands can be written in either order without changing the result. (Set difference, by contrast, is not commutative.)
10. Union and Intersection are associative.
(E_{1}\ \cup E_{2})\ \cup\ E_{3}=E_{1}\ \cup\ (E_{2}\ \cup\ E_{3})
(E_{1}\ \cap E_{2})\ \cap\ E_{3}=E_{1}\ \cap\ (E_{2}\ \cap\ E_{3})
Explanation: Union and intersection are both associative: the operands can be grouped by parentheses in whichever order is most convenient to evaluate, without changing the result.
11. Selection operation distributes over the union, intersection, and difference operations.
\sigma_{P}(E_{1}\ -\ E_{2})=\sigma_{P}(E_{1})\ -\ \sigma_{P}(E_{2})
Explanation: In set difference, we know that only those tuples are shown which belong to table E1 and do not belong to table E2. So, applying a selection condition on the entire set difference is equivalent to applying the selection condition on the individual tables and then applying set difference. This will reduce the number of comparisons in the set difference step.
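Rule 11 for set difference, sketched with Python sets of tuples (the relations and the condition are made-up examples):

```python
E1 = {("x", 1), ("y", 2), ("z", 3)}
E2 = {("y", 2)}

P = lambda t: t[1] > 1  # selection condition on the second component

after = {t for t in (E1 - E2) if P(t)}                      # sigma_P(E1 - E2)
before = {t for t in E1 if P(t)} - {t for t in E2 if P(t)}  # sigma_P(E1) - sigma_P(E2)

print(after == before)  # True
```

Filtering each operand first shrinks both sets before the (now cheaper) difference step.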
12. Projection operation distributes over the union operation.
\pi_{L}(E_{1}\ \cup\ E_{2})=(\pi_{L}(E_{1}))\ \cup\ (\pi_{L}(E_{2}))
Explanation: Applying individual projections before computing the union of E1 and E2 is more optimal than the left expression, i.e. applying projection after the union step.
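Rule 12, sketched with Python sets under set semantics (illustrative data; the projection keeps only the first column):

```python
E1 = {("a", 1), ("b", 2)}
E2 = {("b", 2), ("c", 3)}

project = lambda rel: {(t[0],) for t in rel}  # pi over the first column only

after = project(E1 | E2)            # pi_L(E1 union E2)
before = project(E1) | project(E2)  # pi_L(E1) union pi_L(E2)

print(after == before)  # True
```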
Minimality
A set of equivalence rules is said to be minimal if no rule in the set can be derived from any combination of the others. Minimality is a property of the rule set itself, not of an individual query.
Examples: Assume the following tables:
instructor(ID, name, dept_name, salary)
teaches(ID, course_id, sec_id, semester, year)
course(course_id, title, dept_name, credits)
Query 1: Find the names of all instructors in the Music department, along with the titles of the courses that they teach
\pi_{\text{name, title}} \left( \sigma_{\text{dept\_name} = \text{"Music"}} \left( \text{instructor} \bowtie \left( \text{teaches} \bowtie \pi_{\text{course\_id, title}} (\text{course}) \right) \right) \right)
Here, dept_name is an attribute of only the instructor table. Hence, we can select the Music instructors before joining the tables, reducing the size of the join input and the query time.
Optimized Query: Using rule 7 (Equivalence 2), performing the selection as early as possible reduces the size of the relation to be joined.
\pi_{\text{name, title}} \left( \left( \sigma_{\text{dept\_name} = \text{"Music"}} (\text{instructor}) \right) \bowtie \left( \text{teaches} \bowtie \pi_{\text{course\_id, title}} (\text{course}) \right) \right)
Query 2: Find the names of all instructors in the CSE department who have taught a course in 2009, along with the titles of the courses that they taught
\pi_{\text{name, title}} \left( \sigma_{\text{dept\_name} = \text{"CSE"} \wedge \text{year} = 2009} \left( \text{instructor} \bowtie \left( \text{teaches} \bowtie \pi_{\text{course\_id, title}} (\text{course}) \right) \right) \right)
Optimized Query: We can perform an "early selection", pushing each condition down to the relation whose attributes it uses (dept_name belongs to instructor, year to teaches). The optimized query becomes:
\pi_{\text{name, title}} \left( \left( \sigma_{\text{dept\_name} = \text{"CSE"}} (\text{instructor}) \right) \bowtie \left( \sigma_{\text{year} = 2009} (\text{teaches}) \bowtie \pi_{\text{course\_id, title}} (\text{course}) \right) \right)
Features:
- Cost Estimation: The optimizer picks the query plan with the lowest estimated cost, based on disk I/O or CPU usage.
- Plan Exploration: It explores different ways to execute the query (plan space), which can be complex for queries with many joins.
- Query Rewriting: The optimizer may transform the query into an equivalent but more efficient form (e.g., reordering joins or applying filters early).
- Use of Statistics: It uses table stats (like row count, value distribution, indexes) to estimate plan costs.
- Index Selection: Chooses the best indexes to speed up data access based on the query.
- Caching: Frequently run queries may be cached to avoid repeated execution and improve speed.