
DBMS

Question Bank Full Solution


Chapters 1 to 7

Unit-1 (prepared by: Adarsh Dubey)

### 1. Compare Traditional File Processing Systems and Database Management Systems (DBMS).

### 2. What is DBMS? List out Applications of DBMS.

**DBMS (Database Management System)**:


A DBMS is a software system used to store, manage, and manipulate
data in databases. It provides an interface for users and applications to
interact with the data, ensuring efficient, secure, and consistent data
management.

**Applications of DBMS**:
1. **Banking**: Managing transactions, customer accounts, and
financial records.
2. **Airline Reservation Systems**: Handling flight bookings, schedules,
and customer details.
3. **Telecommunications**: Managing customer information, call data
records, and billing.
4. **Library Management Systems**: Managing books, borrowing
records, and member details.
5. **Hospital Management**: Managing patient records, doctor
appointments, and medical history.
6. **Inventory Systems**: Tracking products, stock levels, and sales
transactions.
7. **E-commerce**: Handling product catalogs, customer information,
and order transactions.

---

### 3. Draw and Explain the Three-Level Architecture of DBMS.

**Three-Level Architecture of DBMS**:

The three-level architecture of DBMS separates the database system into three layers: internal, conceptual, and external.

1. **Internal Level** (Physical Level):


- Describes how the data is stored on storage devices.
- Deals with the physical storage of data in files, indexing, and
compression.
- Example: Data stored in blocks, pages, or indexes.

2. **Conceptual Level** (Logical Level):


- Describes what data is stored in the database and how it is logically
related.
- Represents the logical structure of the database without the details
of physical storage.
- Example: Tables, relationships, and constraints.

3. **External Level** (View Level):


- Describes how the data is viewed by different users.
- Provides user-specific views, showing only necessary data.
- Example: A student may see only their own grades, while a teacher
sees the grades of all students.

---
### 4. State How an Entity-Relationship (ER) Model Represents Real-World Entities.

The **Entity-Relationship (ER) Model** is a high-level conceptual framework used to model real-world entities and their relationships in a database.

- **Entities** represent objects or things in the real world (e.g., `Student`, `Employee`).
- **Attributes** define the properties of the entities (e.g., `Name`, `ID`).
- **Relationships** describe associations between entities (e.g.,
`Student` enrolls in a `Course`).
- **Entity Sets** group entities of the same type (e.g., a set of
`Students`).
- **Relationship Sets** group relationships of the same type (e.g., a set
of `Enroll` relationships between students and courses).

---

### 5. Define Data Abstraction and Explain Different Levels of Data Abstraction.

**Data Abstraction** is the process of hiding the complex details of data storage and structure from users, providing a simplified view.

**Levels of Data Abstraction**:


1. **Physical Level**:
- Deals with how data is physically stored.
- Example: Data stored in files, blocks, or disk sectors.

2. **Logical Level**:
- Describes what data is stored and how it is logically structured.
- Example: Tables, views, and relationships in a relational database.

3. **View Level**:
- Describes how data is viewed by individual users or user groups.
- Example: A user's personal view of their account information in an application.

---

### 6. Explain Data Independence.

**Data Independence** refers to the ability to change the schema at one level (e.g., internal or conceptual) without affecting the schema at the next higher level. It allows flexibility and easier maintenance of the database system.

- **Logical Data Independence**: Ability to change the logical schema (e.g., adding new fields or tables) without affecting the external views or applications.
- **Physical Data Independence**: Ability to change the physical schema
(e.g., data storage devices or indexing methods) without affecting the
logical schema or external views.

---

### 7. Explain the DBMS Languages with Examples: DDL, DML, and DCL.

1. **DDL (Data Definition Language)**:


- Used to define the database structure (e.g., creating tables, defining
constraints).
- **Examples**: `CREATE`, `ALTER`, `DROP`.
- Example: `CREATE TABLE Students (StudentID INT PRIMARY KEY,
Name VARCHAR(100));`

2. **DML (Data Manipulation Language)**:


- Used to manipulate the data (e.g., inserting, updating, deleting, and
querying data).
- **Examples**: `SELECT`, `INSERT`, `UPDATE`, `DELETE`.
- Example: `SELECT * FROM Students WHERE StudentID = 1;`

3. **DCL (Data Control Language)**:


- Used to control access to the database (e.g., granting or revoking
privileges).
- **Examples**: `GRANT`, `REVOKE`.
- Example: `GRANT SELECT ON Students TO User1;`
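The DDL and DML statements above can be exercised through Python's built-in `sqlite3` module. This is a minimal sketch with an in-memory database; note that SQLite has no DCL layer, so `GRANT`/`REVOKE` are omitted here.

```python
import sqlite3

# In-memory database for demonstration purposes.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define the structure.
cur.execute("CREATE TABLE Students (StudentID INTEGER PRIMARY KEY, Name VARCHAR(100))")

# DML: manipulate the data.
cur.execute("INSERT INTO Students (StudentID, Name) VALUES (?, ?)", (1, "Alice"))
cur.execute("UPDATE Students SET Name = ? WHERE StudentID = ?", ("Alicia", 1))
row = cur.execute("SELECT Name FROM Students WHERE StudentID = 1").fetchone()
print(row[0])  # Alicia
conn.close()
```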

---
### 8. Draw an E-R Diagram for the Library Management System.

Entities:
1. **Book**: Attributes: `BookID`, `Title`, `Author`.
2. **Member**: Attributes: `MemberID`, `Name`, `Address`.
3. **Librarian**: Attributes: `LibrarianID`, `Name`.

Relationships:
1. **Borrows**: Connects `Member` and `Book`.
2. **Manages**: Connects `Librarian` and `Book`.

---

### 9. Define E-R Diagram. Discuss Generalization in the E-R Diagram.

**E-R Diagram**:
An **Entity-Relationship Diagram** is a visual representation of entities,
their attributes, and the relationships between them.

**Generalization**:
Generalization is the process of abstracting common characteristics from
multiple entities into a generalized super-entity. It helps reduce
complexity in the design.

Example:
- A **Student** and **Teacher** entity can be generalized into an
**Employee** super-entity, with shared attributes like `ID`, `Name`, and
`Address`.

---

### 10. Differentiate Strong Entity Set and Weak Entity Set. Demonstrate the Concept of Both Using Real-Time Examples.

- **Strong Entity Set**:


- An entity that can exist independently and has a **primary key**.
- Example: A `Student` entity with `StudentID` as a primary key.

- **Weak Entity Set**:


- An entity that cannot exist independently and relies on a strong entity
for identification.
- Example: `Dependent` (associated with an `Employee`), where
`EmployeeID` is used to uniquely identify a `Dependent`.

---

### 11. What Are the Constraints in DBMS? Explain with a Proper Example.

**Constraints in DBMS** are rules that limit the data that can be
entered into the database to ensure data integrity.

- **Primary Key**: Ensures uniqueness of each record. Example: `StudentID` in a `Student` table.
- **Foreign Key**: Ensures referential integrity by linking to a primary
key in another table. Example: `DepartmentID` in an `Employee` table
referencing `DepartmentID` in the `Department` table.
- **Unique**: Ensures all values in a column are unique. Example:
`Email` in a `User` table.
- **Check**: Ensures that values in a column meet a condition. Example:
`Age > 18` in a `Person` table.

---

### 12. Define Primary Key, Foreign Key, NOT NULL Constraints, and Referential Integrity (Foreign Key) Constraint.

1. **Primary Key**: Uniquely identifies a record in a table. Cannot be NULL.
   - Example: `StudentID` in the `Student` table.
2. **Foreign Key**: Links a record in one table to a record in another
table, maintaining referential integrity.
- Example: `DepartmentID` in the `Employee` table referencing the
`Department` table.
3. **NOT NULL**: Ensures a column cannot have NULL values.
- Example: `Name` column in a `Customer` table must always have a
value.
4. **Referential Integrity**: Ensures that foreign keys correspond to
valid primary keys in other tables.
   - Example: `EmployeeID` in a `Project` table must correspond to an existing `EmployeeID` in the `Employee` table.

---

### 13. Explain the Network Model and Relational Model in Brief.

1. **Network Model**:
- A type of database model that uses a graph structure with nodes
(representing entities) and arcs (representing relationships).
- Supports many-to-many relationships but is more complex than the
relational model.
- Example: Representing a company’s employee and project data
where each employee can work on multiple projects.

2. **Relational Model**:
- Represents data in tables (relations) with rows (tuples) and columns
(attributes).
- Highly flexible and uses SQL for querying and manipulating data.
- Example: A `Customer` table with `CustomerID`, `Name`, and
`Address` as columns.

Unit-2
### 14. **Difference Between Tuple Relational Calculus and Domain Relational Calculus**

### 15. **Working of the Cartesian Product Operation**


The **Cartesian Product** operation in relational algebra combines
each row of one table with every row of another table. The result is a
new relation with columns from both tables.

#### Example:
Let’s say we have two tables:

**Table 1: Students**

| Student_ID | Name  |
|------------|-------|
| 1          | Alice |
| 2          | Bob   |

**Table 2: Courses**

| Course_ID | Course_Name |
|-----------|--------------|
| C1 | Math |
| C2 | Science |

The **Cartesian Product** of these two tables results in:

**Result (Students × Courses):**

| Student_ID | Name  | Course_ID | Course_Name |
|------------|-------|-----------|-------------|
| 1          | Alice | C1        | Math        |
| 1          | Alice | C2        | Science     |
| 2          | Bob   | C1        | Math        |
| 2          | Bob   | C2        | Science     |
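The same row-pairing behaviour can be sketched in Python with `itertools.product`, representing each relation as a list of row dictionaries (illustrative data, not a real DBMS implementation):

```python
from itertools import product

# Each relation is a list of row dictionaries.
students = [{"Student_ID": 1, "Name": "Alice"}, {"Student_ID": 2, "Name": "Bob"}]
courses = [{"Course_ID": "C1", "Course_Name": "Math"},
           {"Course_ID": "C2", "Course_Name": "Science"}]

# Cartesian product: every Students row paired with every Courses row,
# so the result has 2 x 2 = 4 rows with columns from both tables.
result = [{**s, **c} for s, c in product(students, courses)]
print(len(result))  # 4
```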

### 16. **Relational Algebra Operators**

Relational algebra has several operators used for querying data from
relational databases. The main operators are:

1. **Select (σ)** – Filters rows based on a condition.


2. **Project (π)** – Selects specific columns.
3. **Union (∪)** – Combines rows from two relations, eliminating
duplicates.
4. **Set Difference (-)** – Returns rows from one relation that are not in
another.
5. **Cartesian Product (×)** – Combines two relations, forming all
possible combinations of rows.
6. **Rename (ρ)** – Renames attributes in a relation.
7. **Join (⨝)** – Combines related tuples from two relations based on
a condition.

#### Example - **Select (σ)**:


Suppose we have a table **Students**:

| Student_ID | Name  | Age |
|------------|-------|-----|
| 1          | Alice | 20  |
| 2          | Bob   | 22  |
| 3          | Carol | 21  |

To select students older than 20, we use:

`σ Age > 20 (Students)`

This results in:

| Student_ID | Name  | Age |
|------------|-------|-----|
| 2          | Bob   | 22  |
| 3          | Carol | 21  |
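A minimal sketch of the select operator in Python, treating the relation as a list of row dictionaries (the `select` helper is an illustrative name, not a standard API):

```python
# The Students relation as a list of row dictionaries.
students = [
    {"Student_ID": 1, "Name": "Alice", "Age": 20},
    {"Student_ID": 2, "Name": "Bob", "Age": 22},
    {"Student_ID": 3, "Name": "Carol", "Age": 21},
]

def select(relation, predicate):
    """sigma: keep only the rows that satisfy the predicate."""
    return [row for row in relation if predicate(row)]

# Equivalent of: sigma Age > 20 (Students)
older = select(students, lambda row: row["Age"] > 20)
print([row["Name"] for row in older])  # ['Bob', 'Carol']
```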

### 17. **Relational Algebra Operators**

This question is similar to **16**. The relational algebra operators mentioned are:

- **Select (σ)**
- **Project (π)**
- **Union (∪)**
- **Set Difference (-)**
- **Cartesian Product (×)**
- **Rename (ρ)**
- **Join (⨝)**

Each operator has specific functions and applications to manipulate relational data.

### 18. **Types of Joins**

There are several types of **joins** in relational databases:

1. **Inner Join** – Returns only the rows with matching values in both
tables.
- Example: Select all employees and their departments from
`Employees` and `Departments` where the department ID matches.

2. **Left Join (Left Outer Join)** – Returns all rows from the left table
and matched rows from the right table. If there is no match, NULL values
are returned for columns from the right table.
- Example: Select all employees and their department, even if some
employees don’t have a department.

3. **Right Join (Right Outer Join)** – Similar to the Left Join but returns
all rows from the right table, along with matching rows from the left.

4. **Full Join (Full Outer Join)** – Returns all rows from both tables,
with matching rows where available. Non-matching rows will have
NULLs in the columns of the other table.

5. **Self Join** – A join where a table is joined with itself, typically used
for hierarchical relationships.

#### Example - **Inner Join**:


Assume two tables: **Employees** and **Departments**.

**Employees Table:**

| Emp_ID | Emp_Name | Dept_ID |
|--------|----------|---------|
| 1      | John     | D1      |
| 2      | Jane     | D2      |

**Departments Table:**

| Dept_ID | Dept_Name |
|---------|-----------|
| D1 | HR |
| D2 | IT |

The result of an **Inner Join** on `Dept_ID` would be:

| Emp_ID | Emp_Name | Dept_ID | Dept_Name |
|--------|----------|---------|-----------|
| 1      | John     | D1      | HR        |
| 2      | Jane     | D2      | IT        |
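The inner join above can be reproduced with Python's built-in `sqlite3` module (a sketch with the same illustrative data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Employees (Emp_ID INT, Emp_Name TEXT, Dept_ID TEXT)")
cur.execute("CREATE TABLE Departments (Dept_ID TEXT, Dept_Name TEXT)")
cur.executemany("INSERT INTO Employees VALUES (?, ?, ?)",
                [(1, "John", "D1"), (2, "Jane", "D2")])
cur.executemany("INSERT INTO Departments VALUES (?, ?)",
                [("D1", "HR"), ("D2", "IT")])

# INNER JOIN keeps only rows whose Dept_ID matches in both tables.
rows = cur.execute("""
    SELECT e.Emp_ID, e.Emp_Name, e.Dept_ID, d.Dept_Name
    FROM Employees e
    INNER JOIN Departments d ON e.Dept_ID = d.Dept_ID
    ORDER BY e.Emp_ID
""").fetchall()
print(rows)  # [(1, 'John', 'D1', 'HR'), (2, 'Jane', 'D2', 'IT')]
conn.close()
```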

### 19. **Importance of Achieving a Lossless Design in Relational Database Design**

A **lossless design** means that no information is lost when dividing a relational schema into multiple relations. It is essential to ensure that when decomposing a relation into smaller relations, we can still reconstruct the original relation without any data loss.

- **Importance:**
- **Data Integrity**: Ensures that the data remains consistent and
accurate after decomposition.
- **Normalization**: Helps in achieving efficient storage and data
retrieval by eliminating redundancy.

- **Is it always possible?**

  Yes, a lossless decomposition is always possible. For a binary decomposition of `R` into `R1` and `R2`, the decomposition is lossless if the shared attributes functionally determine one of the two relations, i.e., `R1 ∩ R2 → R1` or `R1 ∩ R2 → R2`.

### 20. **Relational Schema and Normalization up to 3NF**

#### Example Relational Schema:

Let's take a database for a **Student** system.


**Schema:**

- **Student (Student_ID, Name, Address)**


- **Course (Course_ID, Course_Name)**
- **Enrollment (Student_ID, Course_ID, Enrollment_Date)**

#### Normalization Process:

1. **1st Normal Form (1NF)**:


- All attributes have atomic values.
- Example: Each attribute contains indivisible values, e.g., `Name`
should not store a combination of `First Name` and `Last Name`.

2. **2nd Normal Form (2NF)**:


- 1NF + All non-key attributes are fully dependent on the primary key.
- Decompose any partial dependencies (where non-key attributes
depend on part of a composite key).
- For the `Enrollment` relation, ensure that `Enrollment_Date` is fully
dependent on the composite key (`Student_ID`, `Course_ID`).

3. **3rd Normal Form (3NF)**:


- 2NF + No transitive dependencies.
- For example, if `Student` had `Department` (dependent on
`Student_ID`), we would separate `Department` into a new table.

#### Final Schema after 3NF:

- **Student (Student_ID, Name, Address)**


- **Course (Course_ID, Course_Name)**
- **Enrollment (Student_ID, Course_ID, Enrollment_Date)**

### 21. **Trivial and Non-Trivial Dependencies**

- **Trivial Dependency**: A dependency where the right-hand side is a subset of the left-hand side. Example: `A → A`.
- **Non-Trivial Dependency**: A dependency where the right-hand side is not a subset of the left-hand side. Example: `Student_ID → Name`.

These dependencies affect schema design by ensuring that no unnecessary dependencies exist, helping in **normalization** and the removal of redundant data.

### 22. **Relational Algebra Operators (Repeated Question)**

This question is similar to **16** and **17**, and has been addressed
above.

Unit-3
### 23. **Armstrong's Axioms**

Armstrong's Axioms are a set of inference rules used to derive all functional dependencies (FDs) from a given set of FDs in a relational schema. These axioms are the foundation for reasoning about functional dependencies in database theory.

The axioms are:

1. **Reflexivity**: If `Y` is a subset of `X`, then `X → Y`.


- Example: `AB → A` because `A` is a subset of `AB`.

2. **Augmentation**: If `X → Y`, then `XZ → YZ` for any `Z`.


- Example: If `A → B`, then `AC → BC`.

3. **Transitivity**: If `X → Y` and `Y → Z`, then `X → Z`.


- Example: If `A → B` and `B → C`, then `A → C`.

These axioms help in computing the closure of a set of functional dependencies and are essential for tasks like normalization.

### 24. **Functional Dependency (FD)**

A **functional dependency (FD)** is a relationship between two sets of attributes in a relation. We say that attribute `Y` is functionally dependent on attribute `X` (denoted as `X → Y`) if, for every valid instance of the relation, each value of `X` is associated with exactly one value of `Y`.

#### Example:
For a relation `Student(Student_ID, Name, Address)`, the functional
dependency `Student_ID → Name` means that the student's name is
uniquely determined by the student ID. In other words, if two rows have
the same `Student_ID`, they must have the same `Name`.

### 25. **Computing Closure of a Set of Functional Dependencies**

Given functional dependencies:


- `A → BC`
- `CD → E`
- `B → D`
- `E → A`

We need to compute the **closure** of the set of attributes, starting with the attributes we are interested in. Let's assume we start with the set `{A}` and compute the closure `{A}+` for the relation schema `r(A, B, C, D, E)`.

#### Steps to Compute Closure of `{A}`:

1. **Start with `{A}`**.


2. Apply the functional dependency `A → BC`: Now the closure `{A}+`
contains `{A, B, C}`.
3. Apply the functional dependency `B → D`: Now the closure `{A}+`
contains `{A, B, C, D}`.
4. Apply the functional dependency `CD → E`: Now the closure `{A}+`
contains `{A, B, C, D, E}`.
5. Apply the functional dependency `E → A`: This doesn’t add any new
attributes since `A` is already in the closure.

Thus, the closure of `{A}` is `{A, B, C, D, E}`.

#### Candidate Keys:

The closure of `{A}` contains all attributes, so `{A}` is a **candidate key**.


To check for other candidate keys, we would need to repeat this process
for other subsets of attributes, but `{A}` is clearly a candidate key.
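The closure computation above can be sketched as a small fixed-point loop in Python (the `closure` helper is an illustrative name):

```python
# Each FD is a (left-hand side, right-hand side) pair of attribute sets.
fds = [({"A"}, {"B", "C"}),
       ({"C", "D"}, {"E"}),
       ({"B"}, {"D"}),
       ({"E"}, {"A"})]

def closure(attrs, fds):
    """Repeatedly apply FDs whose left side is covered until nothing new is added."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

print(sorted(closure({"A"}, fds)))  # ['A', 'B', 'C', 'D', 'E']
```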
### 26. **Normalization and Its Need**

**Normalization** is the process of organizing data in a relational database to reduce redundancy and improve data integrity. The primary goal is to separate data into multiple related tables to eliminate anomalies such as **update anomalies**, **insertion anomalies**, and **deletion anomalies**.

#### Need for Normalization:


1. **Eliminate Redundancy**: Avoid repeated data by splitting large
tables into smaller ones.
2. **Improve Data Integrity**: Ensure that the database maintains
accurate and consistent data.
3. **Simplify Queries**: Well-organized data reduces complexity in
query formulation.
4. **Efficient Storage**: Reduced redundancy means more efficient use
of storage.

#### Various Normalization Forms:


1. **1st Normal Form (1NF)**:
- Each attribute contains atomic values (no repeating groups).
- Example: A table with `Name, Phone Numbers` should have each
phone number in a separate row.

2. **2nd Normal Form (2NF)**:


- 1NF + All non-key attributes are fully functionally dependent on the
primary key.
- Example: A table with `Student_ID, Course_ID, Instructor_Name`
should not have partial dependencies like `Instructor_Name` depending
on `Course_ID` only.

3. **3rd Normal Form (3NF)**:


- 2NF + No transitive dependencies.
- Example: If `Instructor_Name` depends on `Course_ID`, and
`Course_ID` depends on `Student_ID`, then `Instructor_Name` should be
moved to a separate table to avoid transitive dependency.

4. **Boyce-Codd Normal Form (BCNF)**:
   - Every determinant is a candidate key. It's a stricter version of 3NF.

5. **4th Normal Form (4NF)**:
   - BCNF + No non-trivial multi-valued dependencies.

6. **5th Normal Form (5NF)**:
   - There are no join dependencies, and every non-trivial join dependency is implied by the candidate keys.

### 27. **Importance of Achieving a Lossless Design in Relational Database Design**

A **lossless decomposition** means that when a relation is decomposed into multiple smaller relations, we can always reconstruct the original relation without losing any information. This is important because:

- **Preserves Data Integrity**: Ensures that no data is lost during decomposition.
- **Simplifies Maintenance**: It makes the schema easier to manage while retaining all original data.
- **Prevents Anomalies**: Avoids anomalies like update, insertion, and deletion problems.

#### Is it always possible?

Yes, it is always possible to achieve a lossless decomposition **if the decomposition is based on functional dependencies and satisfies certain conditions** (for a binary split, the common attributes must functionally determine at least one of the resulting relations). Note, however, that a decomposition that is both lossless and dependency-preserving is not always achievable (for example, when decomposing into BCNF).

### 28. **True or False: Any Relation Schema that Satisfies BCNF Also Satisfies 3NF**

**True**. Every relation schema that satisfies **BCNF** also satisfies **3NF**. BCNF is a stricter version of 3NF where, in addition to satisfying 3NF, **every functional dependency's left-hand side must be a superkey**. Since BCNF addresses all issues of 3NF, a relation in BCNF automatically satisfies 3NF.

### 29. **Relational Schema and Normalization up to 3NF**

**Given Schema:**
Consider the following table for a **Bookstore**:

| Book_ID | Title  | Author   | Author_Address | Publisher |
|---------|--------|----------|----------------|-----------|
| 1       | Book A | Author 1 | Address 1      | Pub A     |
| 2       | Book B | Author 2 | Address 2      | Pub B     |

#### Normalization Process:

1. **1st Normal Form (1NF)**: Ensure that all attributes have atomic
values. This table is already in 1NF.

2. **2nd Normal Form (2NF)**: Remove partial dependencies. We notice that `Author_Address` depends only on `Author` and not on `Book_ID`. So, we split the table:

**Book (Book_ID, Title, Author, Publisher)**

**Author (Author, Author_Address)**

3. **3rd Normal Form (3NF)**: Remove transitive dependencies. For example, `Publisher` depends on `Book_ID`, but if we had `Publisher_Address` based on `Publisher`, this would be a transitive dependency. Hence, the final schema:

**Book (Book_ID, Title, Author, Publisher_ID)**

**Author (Author, Author_Address)**

**Publisher (Publisher_ID, Publisher_Name, Publisher_Address)**

### 30. **Trivial and Non-Trivial Dependencies Affecting Database Schema Design**

- **Trivial Dependency**: A functional dependency is trivial if the right-hand side is a subset of the left-hand side. These dependencies do not affect schema design significantly since they don't impose any constraints.
  - Example: `A → A` is trivial because the right-hand side (`A`) is part of the left-hand side.
- **Non-Trivial Dependency**: A functional dependency is non-trivial if
the right-hand side is not a subset of the left-hand side. Non-trivial
dependencies are crucial in designing schemas since they help in
**normalization** and eliminating redundancy.
- Example: `Student_ID → Name` is non-trivial because the right-hand
side (`Name`) is not part of the left-hand side (`Student_ID`).

Trivial dependencies can be ignored during normalization, whereas non-trivial dependencies must be considered carefully to ensure a lossless and efficient database design.

UNIT - 4

### 31. **Structure of a B-tree**

A **B-tree** is a balanced search tree that maintains sorted data and allows efficient insertion, deletion, and search operations. It is used extensively in databases and file systems to manage large amounts of data.

#### Structure:
- **Nodes**: Each node in a B-tree can contain multiple keys and child
pointers.
- **Root**: The root node can have fewer keys than other nodes.
- **Internal Nodes**: Internal nodes hold keys and child pointers. They
help in navigating the tree.
- **Leaf Nodes**: These contain only keys and no child pointers. They
store the actual data or references to the data.
- **Properties**:
- Every node can have a minimum and maximum number of children,
usually defined as a "degree" `t`.
- A node can have between `t-1` and `2t-1` keys, and between `t` and
`2t` children.
- All leaf nodes are at the same level.
- The keys in each node are kept sorted.
- The tree is balanced, ensuring that operations (insertion, deletion,
search) take logarithmic time.
### 32. **B-tree and Hashing**

Both **B-trees** and **hashing** are used for efficient data retrieval,
but they differ in structure and use cases.

- **B-tree**:
- A B-tree is a self-balancing tree structure that maintains sorted data,
allowing for efficient range queries, search, insertion, and deletion
operations.
- Operations like range queries are efficient in a B-tree, as the data is
sorted and can be traversed sequentially.

- **Hashing**:
- Hashing is a technique where a hash function maps keys to a fixed-size
table (hash table). Each key is hashed to a location in the table for quick
access.
- Hashing is ideal for equality searches but is inefficient for range
queries since the data is not sorted.

#### Comparison:
- **B-tree** is better for range queries, while **hashing** is faster for
direct lookups (equality searches).

### 33. **Static and Dynamic Hashing**

- **Static Hashing**:
- In static hashing, the hash table has a fixed size, and the hash function
maps data to a fixed number of slots.
- The size of the table cannot change dynamically, which can lead to
**collisions** (when two keys map to the same slot) or
**underutilization** (empty slots).

- **Dynamic Hashing**:
- In dynamic hashing, the hash table can grow or shrink as data is
inserted or deleted. This allows the table to dynamically adjust its size to
maintain performance.
- This approach uses techniques like **bucket splitting** or
**doubling** to handle collisions effectively.

### 34. **Indices in DBMS**


An **index** in a DBMS is a data structure that improves the speed of
data retrieval operations on a database table at the cost of additional
space. An index allows for quick access to rows based on the values of
one or more columns.

- **Types of Indices**:
- **Primary Index**: Created on the primary key of a table. It
guarantees uniqueness of each record.
- **Secondary Index**: Created on non-primary key columns to speed
up queries based on those columns.
- **Clustered Index**: The rows of the table are stored in the order of
the index.
- **Non-clustered Index**: The index stores pointers to the rows in the
table, and the rows are not necessarily in index order.

Indices are essential for improving query performance, particularly in operations like **search**, **join**, and **range queries**.

### 35. **Steps of Query Processing with a Neat Diagram**

Query processing in a DBMS involves multiple steps to convert a high-level query (such as SQL) into an efficient execution plan. The main steps of query processing are:

1. **Parsing**: The SQL query is parsed to check for syntax errors and to
generate an initial query tree.
2. **Translation**: The parsed query is translated into a relational
algebra expression.
3. **Optimization**: The query optimizer generates the most efficient
execution plan based on available indexes, cost of operations, and
database statistics.
4. **Execution**: The optimized plan is executed, and the result is
returned to the user.

A diagram for query processing might look like this:

```
SQL Query → Parser → Query Tree → Query Optimizer → Optimized Query Plan → Execution → Result
```

### 36. **Different Join Strategies for a Query and Their Performance**

There are several join strategies used in relational databases, each with
different performance implications based on the data size and indexes
available:

1. **Nested Loop Join**:


- For each row in the first table, check every row in the second table.
- **Performance**: O(n*m) for two tables of size `n` and `m`.
- **Used** when no indexes are available.

2. **Sort-Merge Join**:
- Sort both tables based on the join key, and then merge them by
matching sorted rows.
- **Performance**: O(n log n + m log m) for two sorted tables.
- **Used** for large tables with sorted data.

3. **Hash Join**:
- Create a hash table for the smaller table and probe it using the join
key for the larger table.
- **Performance**: O(n + m) for two tables of size `n` and `m`.
- **Used** when one table is much smaller than the other.

4. **Index Join**:
- Use indexes to find the matching rows.
- **Performance**: Depends on index structure (typically faster than a
full table scan).
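As an illustration of the hash join strategy, here is a minimal Python sketch of its build and probe phases (illustrative tuple data, not a real query executor):

```python
# Departments is the smaller relation: (Dept_ID, Dept_Name).
departments = [("D1", "HR"), ("D2", "IT")]
# Employees is the larger relation: (Emp_ID, Emp_Name, Dept_ID).
employees = [(1, "John", "D1"), (2, "Jane", "D2"), (3, "Jake", "D9")]

# Build phase: hash the smaller table on the join key, O(m).
hash_table = {dept_id: dept_name for dept_id, dept_name in departments}

# Probe phase: look up each row of the larger table, O(n); overall O(n + m).
joined = [(emp_id, emp_name, dept_id, hash_table[dept_id])
          for emp_id, emp_name, dept_id in employees
          if dept_id in hash_table]  # inner join: rows without a match are dropped
print(joined)  # [(1, 'John', 'D1', 'HR'), (2, 'Jane', 'D2', 'IT')]
```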

### 37. **Differences Between Primary and Secondary Indices**

| **Attribute**     | **Primary Index**                                            | **Secondary Index**                                                    |
|-------------------|--------------------------------------------------------------|------------------------------------------------------------------------|
| **Definition**    | An index built on the primary key of a table.                | An index built on non-primary key attributes.                          |
| **Uniqueness**    | Ensures unique values (one row for each key).                | Does not guarantee uniqueness (multiple rows can have the same value). |
| **Data Ordering** | Rows in the table are physically ordered by the primary key. | Rows are not physically ordered; index stores pointers.                |
| **Usage**         | Optimizes queries on the primary key.                        | Optimizes queries on non-primary key attributes.                       |

### 38. **B-tree: Insertion and Search**

- **Insertion**: Inserting data into a B-tree involves:


- Starting at the root and traversing down to the appropriate leaf node.
- If the leaf node has space, insert the key; otherwise, split the node
and promote the middle key to the parent node.

- **Search**: Searching in a B-tree involves:


- Starting from the root and comparing the search key with the keys in
the current node.
- Depending on the result, move to the left or right child, and repeat
until the key is found or a leaf node is reached.

### 39. **Linear Search and Binary Search Algorithms**

- **Linear Search**:
- It sequentially checks each element in the list.
- **Complexity**: O(n) where `n` is the number of elements.
- **Used** for unsorted data.

- **Binary Search**:
- It works by repeatedly dividing the sorted list in half to locate the
search key.
- **Complexity**: O(log n) where `n` is the number of elements.
- **Used** for sorted data.
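Both algorithms can be sketched in a few lines of Python (`bisect_left` from the standard library does the halving step of binary search):

```python
from bisect import bisect_left

# A sorted list of keys for demonstration.
keys = [3, 7, 12, 19, 25, 31, 44]

def linear_search(items, target):
    """O(n): check each element in turn; works on unsorted data too."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """O(log n): repeatedly halve the sorted search range."""
    i = bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

print(linear_search(keys, 19), binary_search(keys, 19))  # 3 3
```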

### 40. **Role of Query Processing in DBMS**

**Query processing** in a DBMS is responsible for translating high-level SQL queries into executable plans. The steps involved are:
- **Parsing**: To validate and convert the query into a format that can
be processed.
- **Optimization**: To identify the most efficient way to execute the
query.
- **Execution**: To run the optimized query plan and return the results.

Query processing improves the system's efficiency by selecting the best execution plan and minimizing resource usage.

### 41. **Evaluation Expression Process in Query Optimization**

Query optimization involves selecting the most efficient execution plan for a given query by evaluating various execution strategies. The steps are:

1. **Generate possible plans**: The optimizer generates multiple possible execution plans for the query.
2. **Cost estimation**: The cost of each plan is calculated based on
factors such as I/O, CPU usage, and memory consumption.
3. **Choose the best plan**: The optimizer selects the plan with the
least estimated cost and executes it.

### 42. **External Sort-Merge Algorithm**

The **External Sort-Merge Algorithm** is used for sorting large datasets that do not fit in memory. It works as follows:

1. **Divide**: Split the large file into smaller chunks that can fit in
memory.
2. **Sort**: Sort each chunk in memory and write the sorted chunks to
disk.
3. **Merge**: Merge the sorted chunks using a merge process (like in
merge sort), reading and writing data to disk.
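A minimal Python sketch of the three steps, simulating the on-disk sorted runs with in-memory lists (`CHUNK = 4` is an illustrative memory limit):

```python
import heapq

data = [9, 4, 7, 1, 8, 2, 6, 3, 5, 0]
CHUNK = 4  # pretend only 4 elements fit in memory at once

# 1. Divide: split the input into memory-sized chunks.
chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

# 2. Sort: sort each chunk independently (on disk, each sorted run
#    would be written back out as a file).
runs = [sorted(chunk) for chunk in chunks]

# 3. Merge: k-way merge of the sorted runs, consuming one element
#    at a time from each run, as in merge sort.
result = list(heapq.merge(*runs))
print(result)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```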

### 43. **How Hashing Contributes to Efficient Data Retrieval**

Hashing improves data retrieval by using a **hash function** to map keys directly to positions in a hash table. When looking for data, the system can access the corresponding slot in constant time, making it much faster than sequential search methods.
- **Efficiency**: Hashing allows for quick lookup, insertion, and deletion
operations, often in O(1) time, as long as the hash function and collision
resolution methods are effective.
- **Collisions**: When two keys hash to the same slot, hashing uses
techniques like **chaining** or **open addressing** to resolve these
collisions.
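A minimal hash table with chaining can be sketched in Python (the bucket count and keys are arbitrary; real DBMS hash indexes add overflow handling and dynamic resizing):

```python
class ChainedHashTable:
    def __init__(self, slots=8):
        # Each slot holds a chain (list) of [key, value] pairs.
        self.buckets = [[] for _ in range(slots)]

    def _slot(self, key):
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._slot(key)]
        for pair in bucket:
            if pair[0] == key:      # key already present: update in place
                pair[1] = value
                return
        bucket.append([key, value]) # collision handled by appending to chain

    def get(self, key):
        for k, v in self.buckets[self._slot(key)]:
            if k == key:
                return v
        return None

t = ChainedHashTable()
t.put("emp42", "Alice")
t.put("emp99", "Bob")
print(t.get("emp42"), t.get("missing"))  # → Alice None
```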

Unit-5
### 44. **ACID Properties of Transactions**

ACID is a set of properties that ensure that database transactions are
processed reliably. The ACID properties are:

1. **Atomicity**:
- A transaction is treated as a single unit, which either fully completes
or fully fails. If any part of the transaction fails, the entire transaction is
rolled back.
- **Example**: If a bank transfer fails midway, neither the debit nor
the credit takes place. The system reverts to the state before the
transaction started.

2. **Consistency**:
- A transaction ensures that the database transitions from one valid
state to another valid state. It preserves the integrity constraints of the
database, such as primary keys, foreign keys, etc.
- **Example**: A transaction that transfers money from one account
to another maintains the rule that the total balance in the system
remains the same before and after the transaction.

3. **Isolation**:
- Transactions are executed in isolation from each other. Intermediate
states of a transaction are invisible to other transactions until the
transaction is committed.
- **Example**: If two people are transferring money from the same
account, their transactions are executed in such a way that neither can
see the other’s intermediate state.
4. **Durability**:
- Once a transaction is committed, its changes are permanent and will
survive any system failure (such as a crash).
- **Example**: After committing a transaction to withdraw money,
the money will not be lost even if the database crashes immediately
afterward.
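Atomicity is easy to demonstrate with Python's built-in `sqlite3` module (the account names and amounts below are made up for the demo):

```python
import sqlite3

# In-memory database with two accounts (illustrative schema).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
con.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 100), ("B", 50)])
con.commit()

try:
    # Debit A, then simulate a crash before the matching credit to B.
    con.execute("UPDATE accounts SET balance = balance - 70 WHERE name = 'A'")
    raise RuntimeError("simulated failure mid-transfer")
except RuntimeError:
    con.rollback()  # atomicity: the partial debit is undone

balances = dict(con.execute("SELECT name, balance FROM accounts"))
print(balances)  # → {'A': 100, 'B': 50}
```

Because the failure arrives between the debit and the credit, the rollback restores the pre-transaction state, so the total balance invariant (consistency) is also preserved.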

### 45. **Transaction and Its States**

A **transaction** is a sequence of database operations (like insert,
update, delete) that is treated as a single logical unit. Transactions must
follow the ACID properties.

#### **States of a Transaction**:


1. **New**: The transaction is newly created but not yet started.
2. **Active**: The transaction is currently executing.
3. **Partially Committed**: The transaction has completed its
operations, but the changes have not yet been committed.
4. **Committed**: The transaction has successfully completed, and its
changes are now permanent in the database.
5. **Aborted**: The transaction has failed and needs to be rolled back
to restore the previous state.

#### **Diagram of Transaction States**:

```
New → Active → Partially Committed → Committed
         |               |
         +---→ Aborted ←-+
```

### 46. **Conflict Serializability vs View Serializability**

- **Conflict Serializability**:
- A schedule is **conflict serializable** if it can be transformed into a
serial schedule (one transaction after another) by swapping non-
conflicting operations (where two operations do not access the same
data or do not conflict in a way that violates serial execution).
- **Example**: If transactions `T1` and `T2` access different data items,
their operations are conflict-free and can be swapped without affecting
the result.

- **View Serializability**:
- A schedule is **view serializable** if it results in the same final state
as some serial schedule, i.e., the view of data read by transactions is the
same in both schedules.
- **Example**: Even if operations are interleaved, as long as the final
outcome is the same as some serial schedule, it is view serializable.

**Difference**:
- Conflict serializability is a stricter criterion than view serializability.
Every conflict-serializable schedule is view serializable, but not every
view serializable schedule is conflict serializable.
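Conflict serializability can be tested mechanically: build a precedence graph with an edge Ti → Tj for each pair of conflicting operations where Ti's operation comes first, then check the graph for cycles. A Python sketch (the schedule encoding is my own, not a standard API):

```python
def conflict_serializable(schedule):
    # schedule: list of (txn, op, item), op in {"R", "W"}, in execution order.
    edges = set()
    for i, (t1, op1, x1) in enumerate(schedule):
        for t2, op2, x2 in schedule[i + 1:]:
            # Two ops conflict if they are from different transactions,
            # touch the same item, and at least one is a write.
            if t1 != t2 and x1 == x2 and "W" in (op1, op2):
                edges.add((t1, t2))
    txns = {t for t, _, _ in schedule}

    def has_cycle(node, stack, done):
        stack.add(node)
        for a, b in edges:
            if a == node:
                if b in stack:
                    return True
                if b not in done and has_cycle(b, stack, done):
                    return True
        stack.discard(node)
        done.add(node)
        return False

    done = set()
    return not any(has_cycle(t, set(), done) for t in txns if t not in done)

# Interleaved reads/writes of X that cannot be reordered serially:
bad = [("T1", "R", "X"), ("T2", "W", "X"), ("T1", "W", "X")]
ok  = [("T1", "R", "X"), ("T1", "W", "X"), ("T2", "R", "X")]
print(conflict_serializable(bad), conflict_serializable(ok))  # → False True
```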

### 47. **Concurrency Problems and Strict Two-Phase Locking Protocol**

Concurrency problems occur when multiple transactions are executed
simultaneously and interact with shared data, potentially causing
inconsistencies. Common problems include:

1. **Lost Update**: One transaction overwrites the update of another.


2. **Temporary Inconsistency**: A transaction reads data that is in an
intermediate or inconsistent state.
3. **Uncommitted Data (Dirty Read)**: A transaction reads data that
has been modified by another transaction but not yet committed.

#### **Strict Two-Phase Locking Protocol**:


The **Strict Two-Phase Locking (2PL)** protocol ensures that
transactions hold all the locks they need until they commit, thus
preventing issues like lost updates and temporary inconsistency.

- **Protocol**:
- Transactions must acquire locks on data before accessing it and
release them only after the transaction has committed or aborted.
- It guarantees **serializability** (conflict serializability) and prevents
**cascading rollbacks**; note that it does not prevent **deadlocks**,
which must still be detected or avoided separately.
- **Example**:
- Transaction 1 locks data items A and B, performs updates, and then
releases the locks only after it commits. Meanwhile, Transaction 2 must
wait for Transaction 1 to release the locks before it can proceed.

### 48. **Shared Lock vs Exclusive Lock**

1. **Shared Lock**:
- Allows other transactions to read the data but prevents any
transaction from modifying it.
- **Example**: Transaction 1 acquires a shared lock on a data item,
and Transaction 2 can read the data but cannot modify it.

2. **Exclusive Lock**:
- Prevents other transactions from both reading and modifying the
data.
- **Example**: Transaction 1 acquires an exclusive lock on a data item,
blocking all other transactions from accessing it until the lock is released.

**Difference**:
- Shared locks allow read access by multiple transactions, while exclusive
locks block all other access to the data item.
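The compatibility rule reduces to a tiny table: a request is granted only when both the held and the requested lock are shared. Sketch:

```python
def compatible(held, requested):
    # S = shared (read) lock, X = exclusive (write) lock.
    # Only two shared locks can coexist on the same data item.
    return held == "S" and requested == "S"

print(compatible("S", "S"))  # → True  (many readers allowed)
print(compatible("S", "X"))  # → False (writer must wait for readers)
print(compatible("X", "S"))  # → False (readers must wait for the writer)
```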

### 49. **Timestamp-Based Protocols**

A **timestamp-based protocol** uses timestamps to manage the
concurrency of transactions. Each transaction is given a unique
timestamp when it starts, and the system uses this timestamp to
determine the order of operations.

- **Working**:
- When a transaction performs a read or write, the system checks
whether the transaction's timestamp is earlier or later than the
conflicting transaction's timestamp.
- **Read rule**: A transaction can only read an item if no younger
transaction (one with a later timestamp) has already written it.
- **Write rule**: A transaction can only write an item if no younger
transaction has already read or written it.

- **Advantages**:
- Provides an easy-to-implement mechanism for serializability.
- Prevents issues like **lost updates** and **dirty reads**.
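The basic timestamp-ordering checks can be sketched in Python, where each data item remembers the largest timestamps that have read and written it (a simplified model; real schedulers also handle transaction restart):

```python
class Item:
    def __init__(self):
        self.read_ts = 0   # largest timestamp that has read this item
        self.write_ts = 0  # largest timestamp that has written this item

def read(item, ts):
    # Reject the read if a younger transaction already wrote the item.
    if ts < item.write_ts:
        return "abort"
    item.read_ts = max(item.read_ts, ts)
    return "ok"

def write(item, ts):
    # Reject the write if a younger transaction already read or wrote it.
    if ts < item.read_ts or ts < item.write_ts:
        return "abort"
    item.write_ts = ts
    return "ok"

x = Item()
print(write(x, 10))  # → ok
print(read(x, 5))    # → abort (T5 is older than the writer T10)
print(read(x, 12))   # → ok
print(write(x, 11))  # → abort (T12 has already read x)
```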

### 50. **Locking and Two-Phase Locking**

**Locking** is the mechanism used to prevent conflicts between
concurrent transactions in a database. A transaction must acquire a lock
before accessing data and release the lock after finishing.

#### **Two-Phase Locking**:


- **Basic Principle**: A transaction must follow two phases:
1. **Growing Phase**: The transaction can acquire locks but cannot
release any.
2. **Shrinking Phase**: The transaction can release locks but cannot
acquire any new ones.

- **Types**:
1. **Basic 2PL**: A transaction must acquire all locks before it starts
releasing any.
2. **Strict 2PL**: A transaction holds all locks until it commits or aborts,
ensuring no other transaction can access data during the transaction’s
execution.

### 51. **Wait-Die & Wound-Wait**

Both schemes use transaction timestamps (an older transaction has a
smaller timestamp) to prevent deadlocks:

- **Wait-Die**:
- When an older transaction requests a lock held by a younger
transaction, it **waits**.
- When a younger transaction requests a lock held by an older
transaction, it **dies** (is aborted and restarted later with its original
timestamp).

- **Wound-Wait**:
- When an older transaction requests a lock held by a younger
transaction, it **wounds** the younger transaction (the younger holder
is aborted).
- When a younger transaction requests a lock held by an older
transaction, it **waits**.
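Both rules reduce to a single timestamp comparison; a small Python sketch (here a smaller timestamp means an older transaction):

```python
def wait_die(requester_ts, holder_ts):
    # Older requester waits; younger requester dies (is aborted).
    return "wait" if requester_ts < holder_ts else "die"

def wound_wait(requester_ts, holder_ts):
    # Older requester wounds (aborts) the younger holder;
    # younger requester waits.
    return "wound holder" if requester_ts < holder_ts else "wait"

print(wait_die(1, 5), wait_die(5, 1))      # → wait die
print(wound_wait(1, 5), wound_wait(5, 1))  # → wound holder wait
```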

### 52. **Deadlock with Example**


**Deadlock** occurs when two or more transactions are waiting for
each other to release resources, but none of the transactions can
proceed. This creates a cycle where all transactions are blocked.

#### Example:
- **Transaction 1** locks **Data A** and waits for **Data B**.
- **Transaction 2** locks **Data B** and waits for **Data A**.
- Both transactions are in a deadlock state because each is waiting for
the other to release a lock, and neither can proceed.

Deadlocks are typically resolved by using techniques like **timeouts**,
**rollbacks**, or **transaction prioritization**.
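Deadlock detection is a cycle search in the waits-for graph. A minimal Python sketch, where each blocked transaction waits on exactly one other (real lock managers allow waiting on several):

```python
def has_deadlock(wait_for):
    # wait_for maps each blocked transaction to the one it waits on.
    for start in wait_for:
        node, visited = start, set()
        # Follow the waits-for chain; a return to the start is a cycle.
        while node in wait_for and node not in visited:
            visited.add(node)
            node = wait_for[node]
            if node == start:
                return True
    return False

# The example from above: T1 waits for T2, T2 waits for T1.
print(has_deadlock({"T1": "T2", "T2": "T1"}))  # → True
print(has_deadlock({"T1": "T2", "T2": "T3"}))  # → False
```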

Unit-6

### 53. **Cryptography Techniques to Secure Data**

Cryptography is the practice of securing information by transforming it
into an unreadable format, only accessible to those with the proper
decryption key. The main cryptographic techniques used to secure data
are:

1. **Symmetric Key Cryptography (Private Key Encryption)**:


- In this method, the same key is used for both encryption and
decryption.
- **Example**: **AES (Advanced Encryption Standard)** is one of the
most widely used symmetric encryption algorithms.
- **Advantages**: Fast and efficient for large datasets.
- **Disadvantages**: Key distribution problem (securely sharing the
key between parties).

2. **Asymmetric Key Cryptography (Public Key Cryptography)**:


- This method uses a pair of keys: a **public key** (used for
encryption) and a **private key** (used for decryption).
- **Example**: **RSA** (Rivest-Shamir-Adleman) is a commonly used
asymmetric algorithm.
- **Advantages**: No need for secure key distribution, as the public
key can be freely shared.
- **Disadvantages**: Slower than symmetric key cryptography due to
computational complexity.

3. **Hash Functions**:
- A hash function transforms input data into a fixed-size string of
characters, which is typically a hash value or digest.
- **Example**: **SHA-256** (Secure Hash Algorithm 256-bit) is
commonly used in various applications like digital signatures and
certificate generation.
- **Advantages**: Used for integrity checks and ensuring data has not
been tampered with.
- **Disadvantages**: Not reversible (you cannot recover the original
data from the hash).

4. **Digital Signatures**:
- Digital signatures use asymmetric encryption to ensure the
authenticity and integrity of a message or document.
- **How it works**: The sender creates a hash of the message,
encrypts it with their private key, and sends it along with the message.
The recipient can decrypt the signature using the sender's public key and
verify the message integrity.
- **Example**: **RSA**, **DSA** (Digital Signature Algorithm).

5. **Elliptic Curve Cryptography (ECC)**:


- ECC is a public key cryptography method that uses the algebraic
structure of elliptic curves over finite fields. It offers higher security with
shorter keys than traditional methods like RSA.
- **Advantages**: More efficient with less computational power
needed, making it ideal for mobile and IoT devices.

6. **Hybrid Cryptography**:
- Hybrid systems combine both symmetric and asymmetric
cryptography. Typically, asymmetric encryption is used to securely
exchange a symmetric key, which is then used to encrypt the data.
- **Example**: **SSL/TLS** protocols use this approach for secure
communication over the internet.
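The hash-function properties from point 3 are easy to see with Python's standard `hashlib` (the message content is arbitrary):

```python
import hashlib

message = b"transfer 100 to account 42"
digest = hashlib.sha256(message).hexdigest()

print(len(digest))  # → 64 hex characters, i.e. 256 bits
# Any change to the input yields a completely different digest,
# which is what makes tampering detectable:
print(digest == hashlib.sha256(b"transfer 900 to account 42").hexdigest())  # → False
```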

### 54. **SQL Injection**


**SQL Injection (SQLi)** is a security vulnerability that occurs when an
attacker manipulates an SQL query by injecting malicious SQL code
through user input fields in a web application. This allows the attacker to
bypass authentication, retrieve sensitive data, or even modify the
database.

#### **How SQL Injection Works**:


1. An attacker inputs SQL code into a user input field (like a login form,
search box, etc.) instead of valid data.
2. If the application fails to sanitize or validate the input, the injected
SQL code is executed on the backend database, allowing the attacker to
manipulate queries.
3. Depending on the vulnerability, an attacker could:
- **Extract data**: Retrieve usernames, passwords, or other sensitive
information.
- **Modify or delete data**: Insert, update, or delete records from the
database.
- **Bypass authentication**: Log in as an admin by injecting SQL
commands that bypass the login logic.

#### **Example**:
- Consider a vulnerable login form where the username and password
are used directly in an SQL query like this:
```sql
SELECT * FROM users WHERE username = 'user_input' AND password =
'password_input';
```
- An attacker could inject the following input into the username field:
```sql
' OR '1'='1' --
```
This would change the SQL query to:
```sql
SELECT * FROM users WHERE username = '' OR '1'='1' --' AND password = 'password_input';
```
The `--` comments out the password check, and `'1'='1'` is always true,
so the condition matches every row and the attacker bypasses
authentication without knowing any password.

#### **Preventing SQL Injection**:


- **Parameterized Queries**: Use prepared statements with
placeholders for user input, ensuring the input is treated as data rather
than executable code.
- **Stored Procedures**: Use stored procedures with proper input
validation.
- **Input Validation**: Always validate and sanitize user inputs (e.g.,
using whitelist validation).
- **Least Privilege**: Use database accounts with limited privileges for
applications (e.g., read-only access if writing is not needed).
- **Web Application Firewalls (WAF)**: Use WAFs to filter malicious SQL
input patterns.
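The effect of parameterized queries can be demonstrated with Python's `sqlite3` (the table and credentials are invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (username TEXT, password TEXT)")
con.execute("INSERT INTO users VALUES ('alice', 'secret')")

malicious = "' OR '1'='1' --"

# Vulnerable: concatenating input lets it rewrite the query's logic.
vulnerable = ("SELECT * FROM users WHERE username = '" + malicious +
              "' AND password = 'wrong'")
vuln_rows = con.execute(vulnerable).fetchall()

# Safe: the placeholder treats the whole input as a plain string value.
safe = "SELECT * FROM users WHERE username = ? AND password = ?"
safe_rows = con.execute(safe, (malicious, "wrong")).fetchall()

print(len(vuln_rows), len(safe_rows))  # → 1 0  (bypass vs. no match)
```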

### 55. **DAC, MAC, and RBAC Models in Detail**

Access control models govern how resources are accessed in a computer
system. Three common models are:

#### 1. **Discretionary Access Control (DAC)**:


- **Definition**: In DAC, the owner of the resource (e.g., a file or
database table) has the discretion to decide who can access it. The
owner can grant or deny permissions to other users.
- **How it works**: Permissions are based on **Access Control Lists
(ACLs)**, where each object (like a file) has a list of users and their
corresponding permissions (e.g., read, write, execute).
- **Advantages**: Simple to implement and flexible.
- **Disadvantages**: Less secure because the owner can assign
permissions without constraints.
- **Example**: A file system where the owner can grant read/write
access to other users.

#### 2. **Mandatory Access Control (MAC)**:


- **Definition**: In MAC, access to resources is controlled by a central
authority based on predefined policies. The user does not have the
discretion to change access rights. Security labels (e.g., classification
levels like "Confidential", "Top Secret") are applied to objects, and users
have corresponding clearance levels.
- **How it works**: The system enforces access rules based on
classifications and security labels. Users cannot change the access
control settings.
- **Advantages**: Very secure and suitable for environments where
data sensitivity is high, such as military or government organizations.
- **Disadvantages**: Less flexible and harder to manage.
- **Example**: In a classified government environment, documents
are classified as "Top Secret", and only users with the appropriate
security clearance can access them.

#### 3. **Role-Based Access Control (RBAC)**:


- **Definition**: In RBAC, access rights are assigned based on roles
rather than individual users. Each role corresponds to a set of
permissions, and users are assigned one or more roles depending on
their job functions.
- **How it works**: Users are assigned roles, and roles define what
actions the user can perform. The roles are based on organizational
needs, and permissions can be grouped by roles.
- **Advantages**: Scalable, easier to manage in large organizations,
and suitable for enforcing the principle of least privilege.
- **Disadvantages**: Less granular than DAC and MAC, as permissions
are based on roles rather than individual users.
- **Example**: A system with roles such as "Admin", "Manager", and
"Employee". Admins have full access, managers have access to specific
resources, and employees have the least privileges.

These models provide different levels of control over access to resources,
and the choice of model depends on the security requirements of the
system.
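A toy RBAC check in Python (the roles, permissions, and user names are invented for illustration):

```python
# Each role maps to a set of permissions; users are assigned roles.
ROLE_PERMS = {
    "admin":    {"read", "write", "delete", "grant"},
    "manager":  {"read", "write"},
    "employee": {"read"},
}
USER_ROLES = {"dina": ["admin"], "eli": ["employee", "manager"]}

def can(user, permission):
    # Access is granted if any of the user's roles carries the permission.
    return any(permission in ROLE_PERMS[role]
               for role in USER_ROLES.get(user, []))

print(can("eli", "write"), can("eli", "delete"), can("dina", "grant"))
# → True False True
```

Note how permissions are attached to roles, not users, so changing a person's job function is a one-line reassignment rather than an audit of individual grants.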
Unit-7
### 56. **UNIQUE Constraint vs PRIMARY KEY Constraint**

The **UNIQUE** and **PRIMARY KEY** constraints both ensure that
values in a column (or combination of columns) are unique across the
table. However, there are some key differences:

1. **Uniqueness**:
- **PRIMARY KEY**: Ensures that the values in the column (or columns)
are unique, and it also automatically enforces **NOT NULL**. There can
only be one **PRIMARY KEY** constraint in a table.
- **UNIQUE**: Also ensures uniqueness, but it allows **NULL** values.
In most DBMSs a column with a **UNIQUE** constraint can hold multiple
**NULLs**, since NULLs are not considered equal to one another (SQL
Server is a notable exception and allows only one NULL).

2. **Nullability**:
- **PRIMARY KEY**: Cannot have **NULL** values.
- **UNIQUE**: Can have **NULL** values (multiple **NULLs** are
allowed).

3. **Usage**:
- **PRIMARY KEY**: Used to uniquely identify each record in the table.
- **UNIQUE**: Used to ensure that the values in the column(s) are
unique but does not necessarily act as a primary identifier for the
records.

**Example**:
```sql
CREATE TABLE students (
    id INT PRIMARY KEY,        -- ensures uniqueness and non-nullability
    email VARCHAR(255) UNIQUE  -- ensures uniqueness, allows NULL values
);
```
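The nullability difference is easy to observe with Python's `sqlite3`, which (like most engines) treats NULLs as distinct in a UNIQUE column:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, email TEXT UNIQUE)")

con.execute("INSERT INTO students VALUES (1, 'a@example.com')")
con.execute("INSERT INTO students VALUES (2, NULL)")
con.execute("INSERT INTO students VALUES (3, NULL)")  # second NULL is accepted

try:
    con.execute("INSERT INTO students VALUES (4, 'a@example.com')")
except sqlite3.IntegrityError:
    print("duplicate email rejected")  # the non-NULL duplicate fails

row_count = con.execute("SELECT COUNT(*) FROM students").fetchone()[0]
print(row_count)  # → 3
```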

---

### 57. **SQL Aggregate Functions with Examples**


SQL aggregate functions are used to perform calculations on multiple
rows of a table’s column and return a single value.

1. **COUNT()**: Returns the number of rows in a set.


- Example:
```sql
SELECT COUNT(*) FROM students;
```
This counts the number of records in the `students` table.

2. **SUM()**: Returns the sum of a numeric column.


- Example:
```sql
SELECT SUM(age) FROM students;
```
This calculates the total sum of all ages from the `students` table.

3. **AVG()**: Returns the average value of a numeric column.


- Example:
```sql
SELECT AVG(age) FROM students;
```
This calculates the average age from the `students` table.

4. **MAX()**: Returns the maximum value in a set.


- Example:
```sql
SELECT MAX(age) FROM students;
```
This retrieves the highest age from the `students` table.

5. **MIN()**: Returns the minimum value in a set.


- Example:
```sql
SELECT MIN(age) FROM students;
```
This retrieves the lowest age from the `students` table.

---
### 58. **SQL Queries for the Given Tables**

#### Tables:
- **T1 (rollno, stuname, age, city, branchcode)**
- **T2 (branchcode, branchname)**

1. **Retrieve student details whose branchcode is 5**:


```sql
SELECT * FROM T1 WHERE branchcode = 5;
```

2. **Find the average age of all students**:


```sql
SELECT AVG(age) FROM T1;
```

3. **Add a new branch in T2 table**:


```sql
INSERT INTO T2 (branchcode, branchname)
VALUES (6, 'Computer Science');
```

4. **Display rollno, stuname, and age of students whose city is Chennai**:
```sql
SELECT rollno, stuname, age FROM T1 WHERE city = 'Chennai';
```

5. **Change the age of the student whose rollno is 1 to 20**:


```sql
UPDATE T1 SET age = 20 WHERE rollno = 1;
```

6. **Delete student details whose age is 18**:


```sql
DELETE FROM T1 WHERE age = 18;
```

7. **Retrieve branch information in descending order**:


```sql
SELECT * FROM T2 ORDER BY branchname DESC;
```

---

### 59. **GRANT and REVOKE Commands**

- **GRANT**: The `GRANT` command is used to give specific privileges
(such as `SELECT`, `INSERT`, `UPDATE`, etc.) to users or roles.
- Example:
```sql
GRANT SELECT, INSERT ON students TO user1;
```
This grants `SELECT` and `INSERT` privileges on the `students` table to
`user1`.

- **REVOKE**: The `REVOKE` command is used to remove specific
privileges from a user or role.
- Example:
```sql
REVOKE SELECT ON students FROM user1;
```
This revokes the `SELECT` privilege from `user1` on the `students`
table.

---

### 60. **ROLLBACK and COMMIT Commands**

- **COMMIT**: The `COMMIT` command is used to save all changes
made during the current transaction to the database. After a `COMMIT`,
the changes are permanent.
- Example:
```sql
COMMIT;
```
- **ROLLBACK**: The `ROLLBACK` command is used to undo changes
made during the current transaction. It restores the database to the
state it was in before the transaction started.
- Example:
```sql
ROLLBACK;
```

---

### 61. **Aggregation Functions with Suitable Examples**

As discussed in **SQL Aggregate Functions**, the most commonly used
aggregation functions in SQL are:

1. **COUNT()**:
- Example: To count the number of students in the table:
```sql
SELECT COUNT(*) FROM students;
```

2. **SUM()**:
- Example: To calculate the total amount of sales:
```sql
SELECT SUM(sales_amount) FROM sales;
```

3. **AVG()**:
- Example: To find the average salary of employees:
```sql
SELECT AVG(salary) FROM employees;
```

4. **MAX()**:
- Example: To find the highest score from a students table:
```sql
SELECT MAX(score) FROM exam_results;
```

5. **MIN()**:
- Example: To find the lowest temperature recorded:
```sql
SELECT MIN(temperature) FROM weather;
```

---

### 62. **Short Note on Cursors**

A **cursor** is a database object used to retrieve and manipulate rows
from a result set one at a time. It is often used in PL/SQL when working
with a large number of rows and you need to fetch or update data row
by row.

#### **Types of Cursors**:


1. **Implicit Cursor**: Automatically created by SQL when a SELECT
statement is executed and doesn't require the user to explicitly define it.
2. **Explicit Cursor**: Defined by the user when they need to retrieve
multiple rows with complex logic.

#### **Example**:
```sql
DECLARE
    CURSOR student_cursor IS
        SELECT name FROM students;
    student_name VARCHAR2(50);
BEGIN
    OPEN student_cursor;
    FETCH student_cursor INTO student_name;
    WHILE student_cursor%FOUND LOOP
        DBMS_OUTPUT.PUT_LINE(student_name);
        FETCH student_cursor INTO student_name;
    END LOOP;
    CLOSE student_cursor;
END;
```

---

### 63. **UNIQUE vs PRIMARY KEY**


This question is a repeat of **56**, and the answer remains the same.

---

### 64. **Triggers with Example**

A **trigger** is a stored procedure in a database that is automatically
executed or fired when certain events (like `INSERT`, `UPDATE`, `DELETE`)
occur on a table or view.

#### Example:
```sql
CREATE TRIGGER before_insert_student
BEFORE INSERT ON students
FOR EACH ROW
BEGIN
    IF :NEW.age < 18 THEN
        RAISE_APPLICATION_ERROR(-20001, 'Student age must be 18 or older');
    END IF;
END;
```
This trigger checks if a student's age is less than 18 before inserting into
the `students` table and raises an error if true.

---

### 65. **Stored Procedures with Example**

A **stored procedure** is a set of SQL statements that can be stored
and executed on the database server. It allows you to encapsulate logic
in the database.

#### Example:
```sql
CREATE PROCEDURE get_student_details (IN student_id INT)
BEGIN
    SELECT * FROM students WHERE rollno = student_id;
END;
```
This procedure fetches the details of a student by their `rollno`.

---

### 66. **PL/SQL Code to Print the Sum of Even Numbers Between 1 and 100**

```plsql
DECLARE
    total_sum NUMBER := 0;
BEGIN
    FOR i IN 1..100 LOOP
        IF MOD(i, 2) = 0 THEN
            total_sum := total_sum + i;
        END IF;
    END LOOP;
    DBMS_OUTPUT.PUT_LINE('Sum of even numbers between 1 and 100: ' || total_sum);
END;
```
This PL/SQL block calculates the sum of all even numbers between 1 and
100 (2 + 4 + ... + 100 = 2550) and outputs the result.
