DBMS

The document outlines various constraints in relational databases, including domain, entity integrity, and referential integrity constraints, which ensure data accuracy and consistency. It also describes SQL commands such as DROP TABLE, INSERT, DELETE, and UPDATE, along with their syntax and usage. Additionally, it compares single-level and multi-level indexing, as well as B-Trees and B+ Trees, highlighting their structures, performance, and use cases.

Constraints are rules/conditions that must be satisfied by every valid state of a relation in a relational database.

They ensure the accuracy, consistency, and validity of data in the database.

2. Domain Constraint
A domain constraint specifies that the value of each attribute (column) must come from a specific, predefined set of valid values, known as the attribute's domain.
• Every attribute in a relation has a domain which defines the type of values (e.g., integer, string, date) it can take.
• If a value outside this domain is inserted, the domain constraint is violated.
Example:
Consider a relation:
STUDENT(RollNo INT, Name VARCHAR, Age INT)
• The domain of Age is integers from 0 to 120.
• Inserting Age = 'abc' or Age = -5 violates the domain constraint.
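In SQL, this kind of domain rule is typically expressed with a column data type plus a CHECK constraint. The following is a minimal sketch assuming the STUDENT relation above; the CHECK clause itself is an illustration, not part of the original example:
CREATE TABLE STUDENT (
    RollNo INT,
    Name VARCHAR(100),
    Age INT CHECK (Age BETWEEN 0 AND 120)  -- rejects values outside the Age domain, e.g. -5
);
With this definition, inserting Age = -5 fails the CHECK constraint, and Age = 'abc' fails the INT type check even earlier.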

3. Entity Integrity Constraint


The entity integrity constraint states that no attribute of a primary key in a relation can be NULL.
• The primary key is used to uniquely identify each row (tuple) in a table.
• If any part of the primary key is NULL, the tuple cannot be identified uniquely, violating entity integrity.
Example:
In the table:
EMPLOYEE(EmpID INT PRIMARY KEY, Name VARCHAR)
• EmpID is the primary key.
• Inserting a row with EmpID = NULL is not allowed because it violates entity integrity.
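As a small sketch of how this plays out in SQL (column size assumed), declaring EmpID as the primary key makes the DBMS reject NULL key values automatically:
CREATE TABLE EMPLOYEE (
    EmpID INT PRIMARY KEY,  -- PRIMARY KEY implies NOT NULL and UNIQUE
    Name VARCHAR(100)
);
INSERT INTO EMPLOYEE (EmpID, Name) VALUES (NULL, 'Anna');  -- rejected: entity integrity violated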

4. Referential Integrity Constraint


A referential integrity constraint is a rule that maintains consistency among tuples in two relations.
• If a tuple in one relation (called the referencing relation) has a foreign key (FK) value, that value must either:
1. Match the primary key (PK) value of some tuple in another relation (called the referenced relation), or
2. Be NULL (if the foreign key is not part of the primary key in the referencing table)
Example:
Consider two relations:
1. DEPARTMENT(DeptID INT PRIMARY KEY, DeptName VARCHAR)
2. EMPLOYEE(EmpID INT PRIMARY KEY, Name VARCHAR, DeptID INT)
Here, EMPLOYEE.DeptID is a foreign key referencing DEPARTMENT.DeptID.
• If we insert an employee with DeptID = 10, and 10 does not exist in the DEPARTMENT table, it violates referential integrity.
• But inserting DeptID = NULL is allowed, provided DeptID is not part of the primary key.
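A minimal sketch of how this relationship could be declared in SQL (the FOREIGN KEY ... REFERENCES clause is standard; column sizes are assumed):
CREATE TABLE DEPARTMENT (
    DeptID INT PRIMARY KEY,
    DeptName VARCHAR(100)
);
CREATE TABLE EMPLOYEE (
    EmpID INT PRIMARY KEY,
    Name VARCHAR(100),
    DeptID INT,
    FOREIGN KEY (DeptID) REFERENCES DEPARTMENT(DeptID)  -- DeptID must match an existing department or be NULL
);
With these definitions, inserting an EMPLOYEE row with DeptID = 10 succeeds only if DeptID 10 already exists in DEPARTMENT.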



DROP TABLE
Used to completely remove a table from the database, including its structure, data, constraints, indexes, and permissions.
Syntax:
DROP TABLE table_name;
Example:
DROP TABLE STUDENT;

INSERT
Definition:
The INSERT command is used to add new records (tuples) into a table. There are three main forms of the INSERT command:
Method 1: Insert values for all columns (in order)
INSERT INTO STUDENT VALUES (101, 'Rosu', 'CSE', 20);
Method 2: Insert values for selected columns
INSERT INTO STUDENT (RollNo, Name) VALUES (102, 'Anna');
• Columns not mentioned will be set to NULL or their DEFAULT value.
Method 3: Insert values from another query
INSERT INTO TOPPER (Name, Department)
SELECT Name, Department FROM STUDENT WHERE Marks > 90;

DELETE
Removes records (tuples) from a table based on a condition.
Usage:
DELETE FROM STUDENT WHERE RollNo = 101;
• Deletes only those records that satisfy the WHERE condition.
DELETE FROM STUDENT;
• Removes all data but keeps the table structure.

UPDATE
Definition:
The UPDATE command is used to modify existing values in one or more rows.
Syntax and Example:
UPDATE STUDENT SET Name = 'Rose Mary' WHERE RollNo = 101;
• You can update multiple columns at once:
UPDATE STUDENT SET Name = 'Rose', Age = 21 WHERE RollNo = 101;

ALTER TABLE
Used to add, delete, or modify columns and constraints in an existing table.
• Add a column:
ALTER TABLE STUDENT ADD Email VARCHAR(100);
• Drop a column:
ALTER TABLE STUDENT DROP COLUMN Email;
• Modify a column's data type:
ALTER TABLE STUDENT MODIFY Name VARCHAR(150);
• Add a NOT NULL constraint:
ALTER TABLE STUDENT MODIFY Name VARCHAR(100) NOT NULL;
• Add a PRIMARY KEY:
ALTER TABLE STUDENT ADD PRIMARY KEY (RollNo);
• Add a UNIQUE constraint:
ALTER TABLE STUDENT ADD CONSTRAINT unique_email UNIQUE (Email);
• Add a CHECK constraint:
ALTER TABLE STUDENT ADD CONSTRAINT check_age CHECK (Age >= 18);
• Drop a constraint:
ALTER TABLE STUDENT DROP CONSTRAINT check_age;

Command | Purpose | Can Affect Structure? | Can Affect Data? | Notes
DROP | Deletes the table and its data | Yes | Yes | Permanent removal
ALTER | Modifies table structure/constraints | Yes | No | Add/remove columns and constraints
INSERT | Adds new rows | No | Yes | Three forms supported
DELETE | Removes existing rows | No | Yes | Use WHERE carefully
UPDATE | Modifies existing values | No | Yes | Needs WHERE to target rows
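For context, the statements above reference several STUDENT columns across different examples. A consolidated definition consistent with those examples might look like the following; the exact column list and sizes are assumptions pieced together from the individual statements, not given in the document, and the individual INSERT examples would need matching column lists:
CREATE TABLE STUDENT (
    RollNo INT PRIMARY KEY,      -- targeted by the UPDATE and DELETE examples
    Name VARCHAR(100) NOT NULL,  -- matches the ALTER TABLE ... MODIFY example
    Department VARCHAR(50),      -- 'CSE' in the first INSERT example
    Age INT CHECK (Age >= 18),   -- matches the check_age constraint
    Marks INT,                   -- used by the INSERT ... SELECT example
    Email VARCHAR(100) UNIQUE    -- matches the unique_email constraint
);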



✅ When Multi-Level Indexing is More Significant:
1. Large Databases:
When the dataset is so large that the single-level index itself cannot fit into memory or a single disk block. Multi-level
indexing helps by reducing the number of disk accesses.
2. Faster Search Required:
In systems requiring faster lookup times (e.g., OLAP systems), multi-level indexing significantly reduces search time by
narrowing down index levels step-by-step.
3. High Volume of Transactions:
In enterprise systems with frequent queries, multi-level indexing optimizes performance by minimizing I/O cost and search
time.

✅ When Single-Level Indexing is More Significant:


1. Small to Medium Datasets:
When the dataset and index can fit comfortably in memory or within a few disk blocks, single-level indexing is sufficient
and more efficient due to its simplicity.
2. Low Query Frequency:
In applications where queries are infrequent or performance isn’t a critical concern, single-level indexing avoids the
overhead of managing multiple levels.
3. Simple Applications or Educational Use:
For learning environments or small tools (like personal apps or simple inventory software), single-level indexing is easier to
implement and manage.

Feature | Single-Level Indexing | Multi-Level Indexing
Structure | A single index file with entries pointing directly to data blocks | An index on the index; a hierarchy of indexes (first-level, second-level, etc.)
Use Case | Suitable when the index fits in memory or needs only a few blocks | Best when even the index is large and spans many blocks
Speed (Search Cost) | About log₂(bi) block accesses for a binary search on the index + 1 access for the data block | About 1 block access per index level + 1 access for the data block, giving far fewer total accesses
Space Requirement | Needs fewer blocks (sparse index) | More storage for the extra index levels, but access is highly optimized
Example Index | Primary index (sparse), dense index | Any large index (primary, secondary, clustering) with many entries
Performance (Insert/Delete) | Expensive; causes shifting of records/entries | Localized updates; better scalability
Index Entry Count | Equal to the number of data blocks (sparse) or records (dense) | Decreases at each level (fan-out effect)
Example (Search) | Binary search on the index → data block | Top-level index → mid-level(s) → base level → data block

Single-Level Example
• Data File: 30,000 records stored in 3,000 blocks.
• Primary Index: 3,000 index entries (one per data block).
• Index File: entry size Ri = 15 B, block size B = 1024 B → bfri = 68 entries per block → ceil(3000/68) = 45 blocks.
• Binary search on the index: ceil(log₂ 45) = 6 block accesses, + 1 to fetch the data block = 7 accesses total.
Multi-Level Example
• Same index file as the first level: 3,000 entries → 45 blocks.
• Second-Level Index: one entry per first-level block → 45 entries → 1 block.
• Access cost: 1 (second-level block) + 1 (first-level block) + 1 (data block) = 3 accesses, versus 7 for the single-level index.
• If the first level were larger (e.g., 100,000 entries), the advantage grows drastically.
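In SQL terms, the application only requests an index; the storage engine decides how many levels to build. A minimal, vendor-neutral sketch on the earlier STUDENT table (index name assumed):
CREATE INDEX idx_student_rollno ON STUDENT (RollNo);  -- most engines maintain this as a multi-level B+ tree automatically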



Aspect | B-Tree | B+ Tree
Node Structure | Internal nodes store both keys and pointers to child nodes and data records. | Internal nodes store only keys and pointers to child nodes; data records are stored only at leaf nodes.
Data Pointers | Data pointers can be stored at both internal and leaf nodes. | Data pointers are stored only at leaf nodes; internal nodes store only keys.
Leaf Node Structure | Leaf nodes may store keys and data pointers. | Leaf nodes store keys and data pointers, with an additional pointer to the next leaf node.
Traversal | Data can be retrieved from internal nodes. | Data is only available in leaf nodes, requiring traversal to the leaf level.
Tree Balance | Ensures balance by splitting nodes, but internal nodes can be unevenly filled. | Ensures balance and uniformity by keeping all leaves at the same level.
Insertion & Deletion | Insertion may cause splitting at the internal node level, which can propagate upward. | Insertion and deletion split or merge leaf nodes and may cause internal node splitting or merging.
Key Replication | No key replication is needed, since keys and data pointers appear in both internal nodes and leaves. | Key replication occurs: a key is copied up from a full leaf node to the parent internal node.
Search Efficiency | A search can end at an internal or a leaf node, so the tree can be slightly shallower for some lookups. | Every search ends at the leaf level, so lookups always traverse down to a leaf.
Range Queries | Less efficient for range queries, as the search stops at the first match. | Highly efficient for range queries, thanks to the leaf node linkage that allows sequential traversal.
Structure & Memory Usage | Nodes can vary in what they hold, and balancing the structure is more complex. | More uniform structure at the leaf level, leading to potentially more efficient storage.
Splitting Nodes | Splits can propagate up the tree, with keys from the split node moved to the parent. | Splits start at the leaf level, and keys are copied up to the parent node, maintaining balance.
Performance in Large Data Sets | Suitable for large data sets, with index and data stored in the same nodes. | Optimized for large data sets, with a clear separation between index and data, leading to faster access.
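The range-query row above can be illustrated with a simple query against the earlier STUDENT example; this sketch reuses column names from the SQL section and is not part of the original comparison:
SELECT Name FROM STUDENT
WHERE RollNo BETWEEN 101 AND 150;  -- with a B+ tree index on RollNo, the engine finds 101 in a leaf and then follows the next-leaf pointers sequentially
A B-Tree index, which may hold data pointers in internal nodes, has no such leaf-level chain to scan for the rest of the range.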

