
The document discusses various data structures and algorithms including Binary Heaps, Binomial Heaps, and Binary Search Trees, detailing their properties and applications. It also covers collision resolution techniques in hashing, the Boyer-Moore algorithm for string searching, and the concept of Abstract Data Types (ADTs). Additionally, it explains Insertion Sort and the process of inserting elements into heaps, highlighting best and worst-case scenarios.


1. BINARY HEAP: The main application of a Binary Heap is implementing a priority queue.

A Binomial Heap is an
extension of the Binary Heap that provides a faster union (merge) operation, along with the other operations provided by
a Binary Heap.

2. BINOMIAL HEAP:A Binomial Heap is a collection of Binomial Trees


A Binomial Tree of order k has the following properties:
 It has exactly 2^k nodes.
 Its depth is k.
 There are exactly C(k, i) nodes at depth i for i = 0, 1, . . . , k.
 The root has degree k, and the children of the root are themselves Binomial Trees of order k-1, k-2, .. 0 from
left to right.

3.BINOMIAL TREE: A binomial tree is a graphical representation of possible intrinsic values that an option may
take at different nodes or time periods. The value of the option depends on the underlying stock or bond, and
the value of the option at any node depends on the probability that the price of the underlying asset will
either decrease or increase at any given node.

 A binomial tree is a representation of the intrinsic values an option may take at different time periods.
 The value of the option at any node depends on the probability that the price of the underlying asset
will either decrease or increase at any given node.
 On the downside, the model assumes the underlying asset can only be worth exactly one of two possible values at each step, which is
not realistic.

4.DOUBLE HASHING: Double hashing is a collision resolution technique used in hash tables. It works by using
two hash functions to compute two different hash values for a given key. The first hash function is used to
compute the initial hash value, and the second hash function is used to compute the step size for the probing
sequence.
Double hashing has the ability to have a low collision rate, as it uses two hash functions to compute the hash
value and the step size. This means that the probability of a collision occurring is lower than in other collision
resolution techniques such as linear probing or quadratic probing.
However, double hashing has a few drawbacks. First, it requires the use of two hash functions, which can
increase the computational complexity of the insertion and search operations. Second, it requires a good
choice of hash functions to achieve good performance. If the hash functions are not well-designed, the
collision rate may still be high.
Advantages of Double hashing
 The advantage of Double hashing is that it is one of the best forms of probing, producing a uniform
distribution of records throughout a hash table.
 This technique does not yield any clusters.
 It is one of the effective methods for resolving collisions.
 Double hashing can be done using :
(hash1(key) + i * hash2(key)) % TABLE_SIZE
Here hash1() and hash2() are hash functions and TABLE_SIZE
is size of hash table.

Example:
1. Take the key you want to store in the hash table.
2. Apply the first hash function h1(key) to your key to get the location to store the key.
3. If the location is empty, place the key in that location.
4. If the location is already filled, apply the second hash function h2(key) in combination
with the first hash function h1(key) to get a new location for the key.
5.BOYER MOORE: The B-M algorithm takes a 'backward' approach: the pattern string (P) is aligned with the
start of the text string (T), and the characters of the pattern are then compared from right to left, beginning with the
rightmost character.

If the mismatching text character does not occur anywhere in the pattern, no match can be found at this
position, so the pattern can be shifted entirely past the mismatching character.

To decide the possible shifts, the B-M algorithm uses two preprocessing strategies simultaneously. Whenever a
mismatch occurs, the algorithm calculates a shift using both approaches and selects the larger
one, thus making use of the most effective strategy for each case.

The two strategies are called heuristics of B - M as they are used to reduce the search. They are:

1. Bad Character Heuristics


2. Good Suffix Heuristics

Example:Let’s apply the Boyer-Moore algorithm to the following example:


 Text: “AABAACAADAABAABA”
 Pattern: “AABA”
1. We align the pattern with the start of the text and compare characters from right to left.
2. On a full match, we record the index; on a mismatch, we shift the pattern using the heuristics.
3. We repeat this process until the pattern moves past the end of the text.
Result:
Pattern found at index 0
Pattern found at index 9
Pattern found at index 12

6.Smart Unions
 Union-by-Size:
o When merging two sets, make the smaller tree a subtree of the larger tree.
o Break ties using any method (e.g., selecting the bigger index).
o Helps maintain balanced trees and reduces tree depth.
o Result: Depth of any node is never more than lg N.

Example:
Suppose we have 10 individuals: a, b, c, d, e, f, g, h, i, and j. We have the following relationships:
 a <-> b
 b <-> d
 c <-> f
 c <-> i
 j <-> e
 g <-> j

We want to create groups based on friendships:

1. G1: {a, b, d}
2. G2: {c, f, i}
3. G3: {e, g, j}
4. G4: {h}
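The union-by-size idea and the grouping above can be sketched in Python. This is a minimal sketch; the DisjointSet class and its method names are illustrative, not from the notes.

```python
class DisjointSet:
    def __init__(self, elements):
        self.parent = {e: e for e in elements}
        self.size = {e: 1 for e in elements}

    def find(self, x):
        # Walk up to the root representative of x's set.
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        # Union-by-size: attach the smaller tree under the larger one.
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

ds = DisjointSet("abcdefghij")
for a, b in [("a", "b"), ("b", "d"), ("c", "f"), ("c", "i"), ("j", "e"), ("g", "j")]:
    ds.union(a, b)
```

Grouping the ten individuals by their root representative reproduces G1–G4 above.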
7. BINARY SEARCH TREE (or BST) is a special kind of binary tree in which the values of all the nodes of the left
subtree of any node of the tree are smaller than the value of the node. Also, the values of all the nodes of the
right subtree of any node are greater than the value of the node.

How to Insert a value in a Binary Search Tree:


A new key is always inserted at the leaf by maintaining the property of the binary search tree. We start
searching for a key from the root until we hit a leaf node. Once a leaf node is found, the new node is added as
a child of the leaf node. The below steps are followed while we try to insert a node into a binary search tree:
 Check the value to be inserted (say X) with the value of the current node (say val) we are in:
 If X is less than val move to the left subtree.
 Otherwise, move to the right subtree.
 Once the leaf node is reached, insert X to its right or left based on the relation between X and the leaf
node’s value.
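The insertion steps above can be sketched as follows (the Node class and function names are illustrative):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    # An empty subtree: the new key becomes a leaf here.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)   # X < val: go left
    else:
        root.right = insert(root.right, key) # otherwise: go right
    return root

def inorder(root):
    # In-order traversal of a BST yields the keys in sorted order.
    if root is None:
        return []
    return inorder(root.left) + [root.key] + inorder(root.right)

root = None
for k in [50, 30, 70, 20, 40, 60, 80]:
    root = insert(root, k)
```

An in-order traversal of the resulting tree confirms the BST property.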
8.INSERTION SORT ALGORITHM
 Overview:
o Insertion sort builds the final sorted array (or list) one element at a time.
o It’s like sorting a deck of playing cards in your hands—gradually arranging them in order.
o The array is virtually split into a sorted part and an unsorted part.
o Values from the unsorted part are picked and placed in the correct position within the sorted part.
 Working:

1. Initial State:
 Consider an example array: arr[] = {12, 11, 13, 5, 6}.
 Initially, the first two elements (12 and 11) are compared.
2. First Pass:
 Since 12 is greater than 11, they are not in ascending order.
 Swap 11 and 12, resulting in {11, 12, 13, 5, 6}.
 Now, 11 is stored in the sorted sub-array.
3. Second Pass:
 Move to the next two elements (12 and 13).
 Since 13 is greater than 12, no swapping occurs.
 Both 11 and 12 are now part of the sorted sub-array.
4. Third Pass:
 Compare 13 and 5. Swap them to get {11, 12, 5, 13, 6}.
 Swap 12 and 5 to get {11, 5, 12, 13, 6}.
 Again, swap 11 and 5 to get {5, 11, 12, 13, 6}.
 Finally, 5 is at its correct position.
5. Fourth Pass:
 Compare 13 and 6. Swap them to get {5, 11, 12, 6, 13}.
 Further swaps (12 with 6, then 11 with 6) result in {5, 6, 11, 12, 13}.
 The array is now completely sorted.
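The passes above can be sketched in Python. Note this sketch uses the common shifting variant, which has the same effect as the repeated swaps described above:

```python
def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift elements of the sorted prefix that are greater than key
        # one position to the right, then drop key into the gap.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr

print(insertion_sort([12, 11, 13, 5, 6]))  # prints [5, 6, 11, 12, 13]
```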

9. ABSTRACT DATA TYPE (ADT) is a type (or class) for objects whose behavior is defined by a set of values and a
set of operations. Here are the key points about ADTs:
1. Definition:
o An ADT defines a high-level interface for a data structure.
o It specifies what operations can be performed on the data type but does not dictate how these operations are
implemented.
o ADTs provide an implementation-independent view, hiding the internal details.
2. Abstraction:
o ADTs encapsulate data and operations, allowing users to interact with them without knowing the
implementation details.
o Think of ADTs as black boxes—users only need to know what a data type can do, not how it’s implemented.
3. Example:
o Consider the built-in data type integer.
o Operations on integers include addition, subtraction, multiplication, and modulo.
o Users don’t need to know how integers are implemented; they use them based on their behavior.
Example: List ADT
Let’s define a simple ADT for a list. A list is a collection of elements stored in a specific order. Here are the
essential operations for a list ADT:
1. get(): Return an element from the list at any given position.
2. insert(): Insert an element at any position in the list.
3. remove(): Remove the first occurrence of any element from a non-empty list.
4. removeAt(): Remove the element at a specified location from a non-empty list.
5. replace(): Replace an element at any position with another element.
6. size(): Return the number of elements in the list.
7. isEmpty(): Return true if the list is empty; otherwise, return false.
8. isFull(): Return true if the list is full; otherwise, return false.
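The list ADT above can be realized in many ways; here is one minimal array-backed sketch (capacity-bounded so that isFull() is meaningful; the class name and capacity default are illustrative):

```python
class ArrayList:
    def __init__(self, capacity=10):
        self.items = []
        self.capacity = capacity

    def get(self, pos):
        return self.items[pos]

    def insert(self, pos, value):
        if self.isFull():
            raise OverflowError("list is full")
        self.items.insert(pos, value)

    def remove(self, value):
        self.items.remove(value)      # removes the first occurrence

    def removeAt(self, pos):
        del self.items[pos]

    def replace(self, pos, value):
        self.items[pos] = value

    def size(self):
        return len(self.items)

    def isEmpty(self):
        return len(self.items) == 0

    def isFull(self):
        return len(self.items) >= self.capacity
```

Users of the ADT rely only on these operations, never on the backing array, which is exactly the implementation-independence described above.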

10.HASHING is a fundamental technique used to map data (keys) to specific indices in an array. It enables
efficient storage, retrieval, and manipulation of data based on its key. Here are the key concepts related to
hashing:
1. Hash Table:
o A hash table (also known as a hash map) is a data structure that stores key-value pairs.
o It operates on the concept of hashing, where each key is translated by a hash function into a distinct index in an
array.
o The index serves as a storage location for the corresponding value.
o In simple words, a hash table maps keys to values.
2. Load Factor:
o The load factor of a hash table measures how many elements are stored relative to the table’s size.
o A high load factor can lead to cluttered tables, longer search times, and collisions.
o Proper table resizing and a good hash function help maintain an ideal load factor.
3. Hash Function:
o A hash function translates keys into array indices.
o The goal is to evenly distribute keys across the array to reduce collisions and ensure quick lookups.
o Common hash functions include:
 Hashing by Division: Uses the remainder after dividing the key by the array size.
 Hashing by Multiplication: Multiplies the key by a constant between 0 and 1, then uses the fractional part to
determine the index.
4. Choosing a Hash Function:
o A good hash function should:
 Distribute keys uniformly across the table to minimize collisions.
 Be computationally efficient for speedy hashing and retrieval.
 Prevent deducing the key from its hash value (security).
 Adapt to changing data (flexibility).

11.COLLISION RESOLUTION TECHNIQUES

Collisions occur when multiple keys hash to the same array index. Here are some common techniques for
handling collisions:
1. Chaining:
o Each array index contains a linked list (or other data structure) to store multiple key-value pairs.
o Colliding keys are added to the same index, forming a chain.
o Efficient for handling collisions but requires additional memory.
2. Open Addressing:
o Colliding keys are stored directly in the array.
o When a collision occurs, probe sequentially through the array to find the next available slot.
o Common methods:
 Linear Probing: Check adjacent slots.
 Quadratic Probing: Probe using quadratic increments.
 Double Hashing: Use a second hash function to determine the step size.
3. Double Hashing:
o Combines the benefits of open addressing and hash functions.
o Uses a secondary hash function to calculate the step size for probing.
o Helps distribute keys more evenly and reduces clustering.
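Chaining, the first technique above, can be sketched as follows (class and method names are illustrative; each bucket is a plain Python list of key-value pairs):

```python
class ChainedHashTable:
    def __init__(self, size=5):
        self.buckets = [[] for _ in range(size)]

    def _index(self, key):
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # update an existing key in place
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # colliding keys share the bucket

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)
```

With a table of size 5, the integer keys 10 and 15 both hash to index 0 and end up chained in the same bucket.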
1.BINOMIAL TREES:

 A binomial tree is an ordered tree defined recursively.


 A Binomial Tree of order 0 consists of a single node.
 A Binomial Tree of order k can be constructed by taking two binomial trees of order k-1 and making one
the leftmost child of the other.
 Properties of a Binomial Tree of order k:
o It has exactly 2^k nodes.
o Its depth is k.
o There are exactly C(k, i) nodes at depth i for i = 0, 1, …, k.
The root has degree k, and its children are themselves Binomial Trees with orders k-1, k-2, …, 0 from left to
right.
k = 0 (single node): the order 0 Binomial Tree.
k = 1 (2 nodes): take two k = 0 order Binomial Trees and make one a child of the other.
k = 2 (4 nodes): take two k = 1 order Binomial Trees and make one a child of the other.
k = 3 (8 nodes): take two k = 2 order Binomial Trees and make one a child of the other.

  k = 0    k = 1     k = 2       k = 3
    o        o         o           o
             |        / \        / | \
             o       o   o      o  o  o
                     |         /|  |
                     o        o o  o
                              |
                              o
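The recursive construction above can be sketched in code, and the 2^k node-count property checked (class and function names are illustrative):

```python
class BinomialNode:
    def __init__(self):
        self.children = []   # child subtrees, highest order first

def binomial_tree(k):
    # Order 0 is a single node; order k merges two order k-1 trees.
    if k == 0:
        return BinomialNode()
    a = binomial_tree(k - 1)
    b = binomial_tree(k - 1)
    a.children.insert(0, b)  # make one the leftmost child of the other
    return a

def count(node):
    return 1 + sum(count(c) for c in node.children)
```

For k = 0..3 this produces trees of 1, 2, 4, and 8 nodes, and the root of the order k tree has degree k.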
2. ALGORITHM FOR INSERTING AN ELEMENT INTO A MAX/MIN HEAP:
Append Element: Add the new element to the end of the heap.
Initialize Current Index: Set a variable current_index to the index of the newly added element (which is the last
index in the heap).
Heapify Up:
 While the current_index is greater than 0 (not at the root):
 Calculate the parent_index using the formula: (current_index - 1) // 2.
 If the value at the current_index is [max-greater/min-less] than the value at the parent_index:
 Swap the values at current_index and parent_index.
 Update current_index to be the parent_index.
 Otherwise, if the max heap property is satisfied, break out of the loop.
Heap Property Restored: The new element has been successfully inserted, and the max heap property is
restored.
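The heapify-up steps above can be sketched for a max heap stored in a Python list (the function name is illustrative):

```python
def heap_insert(heap, value):
    heap.append(value)                 # append the element at the end
    i = len(heap) - 1                  # current index of the new element
    while i > 0:                       # heapify up toward the root
        parent = (i - 1) // 2
        if heap[i] > heap[parent]:     # max heap: child larger than parent
            heap[i], heap[parent] = heap[parent], heap[i]
            i = parent
        else:
            break                      # heap property already satisfied
    return heap

h = []
for v in [1, 2, 3, 4, 5, 6, 7, 8]:
    heap_insert(h, v)
```

After inserting 1 through 8, the maximum (8) sits at the root and every parent dominates its children.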

To build a max heap from the given data {1, 2, 3, 4, 5, 6, 7, 8}, let's analyze whether it falls under the best-case or worst-
case scenario:
1. Best Case:
o The best-case scenario for heap creation occurs when the input elements are already in
descending order (from largest to smallest).
o In this case, the heapify operation (which ensures the max heap property) requires minimal
work.
o Specifically, the number of comparisons and assignments during insertion remains relatively low.
o However, this best-case scenario is somewhat impractical because it rarely occurs naturally.
2. Worst Case:
o The worst-case scenario for heap creation happens when the input elements are in ascending
order (from smallest to largest).
o In this situation, each element inserted into the heap must be compared with its parent and
potentially swapped multiple times to maintain the max heap property.
o The real work during heap creation lies in these insertions and adjustments.
o Despite the worst-case complexity, heapsort still performs efficiently overall.

Therefore, for the given data {1, 2, 3, 4, 5, 6, 7, 8}, the worst-case scenario applies during max heap
creation. The number of comparisons and assignments will be higher due to the need for
reheapification after each insertion.

3.BINARY HEAP:
 A binary heap is a specialized binary tree-based data structure that satisfies the heap property.
 It is commonly used to implement efficient priority queues (PQs).
 Key properties of a binary heap:
1. Complete Binary Tree: A binary heap is a complete binary tree, meaning that all levels are filled
except possibly the last level, which is filled from left to right.
2. Heap Property:
 In a max heap, the value of each node is greater than or equal to the values of its children.
 In a min heap, the value of each node is less than or equal to the values of its children.
3. Array Representation: Binary heaps are commonly implemented using arrays, where each
element represents a node in the heap.
4. Optimized for Priority Queues: Binary heaps allow efficient insertion, deletion, and retrieval of
the maximum (or minimum) element.
Types of Binary Heaps:
1. Max Heap:
o The root node contains the maximum value.
o All other nodes satisfy the max heap property.
o Used for applications like priority queues where the highest-priority element needs to be quickly
accessible.
2. Min Heap:
o The root node contains the minimum value.
o All other nodes satisfy the min heap property.
o Useful for tasks like scheduling events with minimum time requirements.

Heap Data Structure:

 A heap is a complete binary tree (meaning all levels are filled except possibly the last level, which is
filled from left to right).
 It satisfies the heap property:
o In a max heap, the value of each node is greater than or equal to the values of its children.
o In a min heap, the value of each node is less than or equal to the values of its children.

4.DOUBLE HASHING:

 Double hashing is a collision resolution technique used in hash tables.


 It works by using two hash functions to compute two different hash values for a given key.
 The first hash function computes the initial hash value, and the second hash function determines the
step size for the probing sequence.
 Double hashing aims to achieve a low collision rate by using two hash functions.
 The probability of a collision occurring is lower compared to other collision resolution techniques like
linear probing or quadratic probing.
Advantages of Double Hashing:
1. Uniform Distribution:
o Double hashing produces a uniform distribution of records throughout the hash table.
o It avoids clustering of elements.
2. Effective Collision Resolution:
o It is an effective method for resolving collisions.
o The collision rate is generally lower than other techniques.
Double Hashing Formula:
 Double hashing can be done using the following formula:
 (hash1(key) + i * hash2(key)) % TABLE_SIZE
o Here, hash1() and hash2() are hash functions, and TABLE_SIZE is the size of the hash table.
o We repeat this process by increasing i when a collision occurs.
Consider a hash table with a size of 5 and two keys: 10 and 15. We’ll use two hash functions for double hashing:
1. First Hash Function:
o h₁(key) = key % 5
2. Second Hash Function:
o h₂(key) = key % 7
Now let’s insert the keys into the hash table:
1. Insert Key 10:
o Compute the initial hash value using h₁(10) = 10 % 5 = 0.
o Since the slot at index 0 is empty, we place 10 there.
2. Insert Key 15:
o Compute the initial hash value using h₁(15) = 15 % 5 = 0.
o Collision! The slot at index 0 is already occupied.
o Use the second hash function to determine the step size: h₂(15) = 15 % 7 = 1.
o Probe the next slot (index 1).
o Since the slot at index 1 is empty, we place 15 there.
The resulting hash table looks like this:
Index:  0    1    2    3    4
-----------------------------
Keys:  10   15    -    -    -
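The worked example above translates directly to code (function names are illustrative; note h2(key) should not be a multiple of the table size, or the probe sequence would not advance):

```python
TABLE_SIZE = 5

def h1(key):
    return key % TABLE_SIZE

def h2(key):
    return key % 7

def insert(table, key):
    # Probe (h1 + i * h2) % TABLE_SIZE, increasing i on each collision.
    for i in range(TABLE_SIZE):
        idx = (h1(key) + i * h2(key)) % TABLE_SIZE
        if table[idx] is None:
            table[idx] = key
            return idx
    raise RuntimeError("table full or probe sequence exhausted")

table = [None] * TABLE_SIZE
insert(table, 10)   # h1(10) = 0, slot 0 is empty
insert(table, 15)   # h1(15) = 0 collides; h2(15) = 1, so probe slot 1
```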

5.LOAD FACTOR IN HASH TABLES:

 The load factor of a hash table is a measure that tells us how full the hash table is.
 It is defined as the ratio of the number of elements (key-value pairs) to the number of buckets (slots) in
the hash table.
Mathematically, the load factor (denoted as α) is calculated as:
Load Factor (α) = Number of Elements / Number of Buckets
Significance of Load Factor:
 A good hash table aims to strike a balance between efficiency and memory usage.
 If the load factor is too high (close to 1), the hash table becomes crowded, leading to more collisions and
reduced performance.
 If the load factor is too low (close to 0), memory is underutilized, and the hash table may not be
efficient.
Maximum Number of Buckets Examined in Unsuccessful Search:

 When performing an unsuccessful search (i.e., searching for a key that is not present in the hash table),
we need to examine buckets to find an empty slot or reach the end of the probing sequence.
 The maximum number of buckets examined during an unsuccessful search depends on the collision
resolution method used:
1. Open Addressing (Linear Probing, Quadratic Probing, Double Hashing):
 In open addressing, we probe adjacent buckets until an empty slot is found.
 The maximum number of buckets examined is determined by the load factor and the specific
probing method.
 For linear probing, the worst-case scenario occurs when all buckets up to the next empty slot are
examined.
 The expected number of probes can be bounded by a formula based on the load factor (assuming
uniform hashing).
2. Separate Chaining:
 In separate chaining, each bucket contains a linked list of key-value pairs.
 The maximum number of buckets examined is the length of the linked list at the hash index.
 The load factor affects the average length of these lists.
In summary, maintaining a reasonable load factor helps control the number of buckets examined during
unsuccessful searches and ensures efficient hash table operations.

6.AVL TREE:
An AVL tree (named after its inventors Adelson-Velsky and Landis) is a self-balancing binary search tree. Let’s
explore its key characteristics, operations, advantages, and disadvantages:
1. Definition:
o An AVL tree is a binary search tree (BST) where the difference between the heights of the left and
right subtrees for any node cannot be more than one.
o The balance factor of a node represents the difference between the heights of its left and right
subtrees.
2. Properties:
o The AVL tree maintains the following properties:
 The height difference between left and right subtrees for every node is less than or equal
to 1.
 The tree remains balanced even after insertions and deletions.
3. Operations on an AVL Tree:
o Insertion: Add a new element while maintaining the balance factor. Perform rotations if
necessary.
o Deletion: Remove a node while ensuring the balance property.
o Searching: Similar to performing a search in a BST.
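The balance condition can be checked with a small sketch (names are illustrative; an empty subtree is given height -1, and heights are recomputed naively for clarity rather than cached as a real AVL tree would):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def height(node):
    if node is None:
        return -1            # height of an empty subtree
    return 1 + max(height(node.left), height(node.right))

def is_avl(node):
    # Balance factor = height(left) - height(right); must be in {-1, 0, 1}.
    if node is None:
        return True
    balance = height(node.left) - height(node.right)
    return abs(balance) <= 1 and is_avl(node.left) and is_avl(node.right)

balanced = Node(20, Node(10), Node(30, None, Node(40)))
skewed = Node(10, None, Node(20, None, Node(30)))
```

The skewed chain fails the check, which is precisely the situation rotations are used to repair.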
7.BOYER-MOORE ALGORITHM:

 The Boyer-Moore algorithm is used for pattern searching in a given text.


 Unlike some other string matching algorithms, Boyer-Moore starts matching from the last character of
the pattern.
 It combines two heuristics: Bad Character Heuristic and Good Suffix Heuristic.
Bad Character Heuristic:
 The idea of the bad character heuristic is simple:
o When a mismatch occurs, we identify the bad character (the character in the text that doesn’t
match the current character in the pattern).
o We then look up the position of the last occurrence of the bad character in the pattern.
o If the bad character exists in the pattern, we shift the pattern so that it aligns with the bad
character in the text.
o This heuristic helps us skip unnecessary comparisons.
Example of Bad Character Heuristic:
Consider the following example:
 Text: “THIS IS A TEST TEXT”
 Pattern: “TEST”
1. We get a mismatch at position 3 (character "S" in the text versus "T" in the pattern).
2. The last occurrence of "S" in the pattern "TEST" is at position 2.
3. We shift the pattern right by 3 - 2 = 1 so that the "S" in the pattern aligns with the "S" in the text.
Result:
Continuing this way, the pattern "TEST" is found at index 10 of the text.
Good Suffix Heuristic:
 The good suffix heuristic focuses on cases where a part of the pattern matches a suffix of the text.
 It helps us skip unnecessary comparisons by considering the longest suffix of the pattern that matches a
substring in the text.
Example of Boyer-Moore Algorithm:
Let’s apply the Boyer-Moore algorithm to the following example:
 Text: “AABAACAADAABAABA”
 Pattern: “AABA”
1. We align the pattern with the start of the text and compare characters from right to left.
2. On a full match, we record the index; on a mismatch, we shift the pattern using the heuristics.
3. We repeat this process until the pattern moves past the end of the text.
Result:
Pattern found at index 0
Pattern found at index 9
Pattern found at index 12
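The search above can be sketched using only the bad character heuristic (the good suffix heuristic is omitted for brevity, so after a full match the pattern simply shifts by one):

```python
def boyer_moore_bad_char(text, pattern):
    n, m = len(text), len(pattern)
    # Preprocessing: last occurrence of each character in the pattern.
    last = {c: i for i, c in enumerate(pattern)}
    matches = []
    s = 0                                   # current shift of the pattern
    while s <= n - m:
        j = m - 1
        # Compare pattern and text from right to left.
        while j >= 0 and pattern[j] == text[s + j]:
            j -= 1
        if j < 0:
            matches.append(s)               # full match at shift s
            s += 1                          # (good suffix would shift further)
        else:
            # Align the bad character with its last occurrence in the pattern.
            s += max(1, j - last.get(text[s + j], -1))
    return matches
```

On the example above it reports matches at indices 0, 9, and 12.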

8.RED-BLACK TREE:

 A Red-Black Tree is a type of self-balancing binary search tree.


 It ensures that the height of the tree remains balanced, resulting in efficient search, insertion, and
deletion operations.
 Each node in a Red-Black Tree is colored either red or black.
Properties of Red-Black Trees:
1. Root Property:
o The root is always black.
2. External Property:
o Every leaf (which is a NULL child of a node) is black.
3. Internal Property:
o The children of a red node are black.
o Therefore, a red node cannot have a red parent or red child.
4. Depth Property:
o All leaves have the same black depth (the number of black nodes along any path from the root to
the leaf).
5. Path Property:
o Every simple path from the root to a descendant leaf node contains the same number of black
nodes.
Insertion Algorithm:
1. Insertion:
o Add a new node while maintaining the Red-Black Tree properties.
o Initially, color the new node as red.
o Perform standard BST insertion.
o Adjust the tree to satisfy the Red-Black Tree properties.
o If the new node is the root, change its color to black (increasing the black height of the entire
tree).
Example of Red-Black Tree:
Consider the following Red-Black Tree:
        30 (B)
       /      \
   20 (R)    40 (B)
   /    \        \
10 (B) 25 (B)   50 (R)
In this example:
 (B) represents a black node.
 (R) represents a red node.
The colors are chosen so that every path from the root to a NULL leaf contains the same number of black nodes.
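The properties can be verified with a small checker sketch (names are illustrative; the tree here uses a coloring chosen so that every property holds: 30 and 10, 25, 40 black, 20 and 50 red):

```python
class RBNode:
    def __init__(self, key, color, left=None, right=None):
        self.key, self.color = key, color
        self.left, self.right = left, right

def check(node):
    # Returns the black height of the subtree; raises on a violation.
    # NULL leaves count as black (the external property).
    if node is None:
        return 1
    if node.color == "R":
        for child in (node.left, node.right):
            if child is not None and child.color == "R":
                raise ValueError("red node with red child")
    lh = check(node.left)
    rh = check(node.right)
    if lh != rh:
        raise ValueError("unequal black heights")   # path property
    return lh + (1 if node.color == "B" else 0)

tree = RBNode(30, "B",
              RBNode(20, "R", RBNode(10, "B"), RBNode(25, "B")),
              RBNode(40, "B", None, RBNode(50, "R")))
```

The checker returns the tree's black height when all internal and path properties hold; the root property (root is black) is checked separately.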

9.SPLAY TREE:

 A Splay Tree is a self-adjusting binary search tree.


 Unlike other balanced trees (such as AVL trees or Red-Black trees), splay trees do not maintain a strict
balance.
 Instead, they reorganize themselves dynamically based on the accessed or inserted elements.
 The main goal of a splay tree is to bring the most recently accessed or inserted element closer to the
root by performing a sequence of tree rotations (called splaying).
Properties of Splay Trees:
1. Self-Adjusting:
o Splay trees automatically reorganize themselves based on access patterns.
o Frequently accessed or inserted elements move closer to the root.
2. No Strict Balance:
o Unlike AVL trees or Red-Black trees, splay trees do not guarantee a balanced structure.
o They prioritize frequently accessed elements over strict balance.
Splaying Process:
 When an element is accessed or inserted, it is splayed:
1. Perform a sequence of rotations to bring the accessed/inserted element to the root.
2. Gradually move other nodes closer to the root during the splaying process.
Example of Splay Tree:
Consider the following splay tree:
30
/ \
20 40
/ \ \
10 25 50
In this example:
 When accessing element 25, it becomes the new root after splaying.
 The tree reorganizes itself to prioritize frequently accessed elements.

10.2-3 TREE:

 A 2-3 Tree is a type of self-balancing search tree.
 It is a generalization of a binary search tree in which each internal node can have either two or three
children.
 2-3 Trees maintain balance by ensuring that all leaf nodes are at the same level.
Properties of 2-3 Trees:
1. Node Types:
o Nodes in a 2-3 Tree can be of two types:
 2-node: Contains one data value and has two children.
 3-node: Contains two data values and has three children.
2. Data Storage:
o Data is stored in sorted order within each node.
o In a 2-node, the data value separates the two children.
o In a 3-node, the two data values separate the three children.
3. Balanced Structure:
o All leaf nodes are at the same level.
o The height of the tree remains balanced.
Insertion Algorithm:
1. Insertion:
o When inserting a new value, follow these steps:
1. If the tree is empty, create a new root node with the value.
2. Otherwise, traverse the tree to find the appropriate leaf node for insertion.
3. If the leaf node is a 2-node, insert the value into it, creating a 3-node.
4. If the leaf node is a 3-node, split it into two 2-nodes and promote the middle value to the
parent.
5. Repeat the process up the tree if necessary to maintain the 2-3 Tree properties.
Example of 2-3 Tree:
Consider the following 2-3 Tree:
20
/ \
10 30
In this example:
 The root node contains one data value (20) and has two children (10 and 30).
 All leaf nodes (10 and 30) are at the same level.
Why 2-3 Trees?
 2-3 Trees guarantee an upper bound of O(log n) for search, insertion, and deletion operations.
 They are simpler than other self-balancing trees like AVL trees or Red-Black trees.
 2-3 Trees are used as building blocks for more complex data structures like B-trees.
