
Module 3:

Transform and conquer


Space-time trade-offs

1. Transform and Conquer:

This technique works in two stages: first, in the transformation stage, the problem's instance is modified to make it more amenable to solution; second, in the conquering stage, the transformed instance is solved.
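As a small illustration of the two stages, consider checking whether a list contains duplicate elements. A minimal sketch, assuming presorting as the transformation (a classic transform-and-conquer instance; the function name is illustrative):

```python
def has_duplicates(lst):
    """Transform-and-conquer sketch: presort the list (transformation
    stage), then scan adjacent pairs (conquering stage)."""
    # Transformation stage: sort the instance, O(n log n).
    s = sorted(lst)
    # Conquering stage: duplicates in a sorted list must be adjacent, O(n).
    for i in range(len(s) - 1):
        if s[i] == s[i + 1]:
            return True
    return False

print(has_duplicates([5, 6, 8, 3, 2, 4, 7]))  # False
print(has_duplicates([5, 6, 8, 3, 5]))        # True
```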

Balanced Search Trees

Balanced search trees are binary search trees that automatically restore their balance after insertions or deletions, keeping their height logarithmic in the number of nodes. They provide O(log n) time complexity for search, insert, and delete operations.
AVL tree
An AVL tree (named after its inventors Adelson-Velsky and Landis) is a self-balancing binary search tree. It is a binary search tree in which the balance factor of every node, defined as the difference between the heights of the node's left and right subtrees, is either 0, +1, or −1. (The height of the empty tree is defined as −1. Of course, the balance factor can also be computed as the difference between the numbers of levels rather than the height difference of the node's left and right subtrees.)
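A minimal sketch of computing the balance factor on a plain binary-tree node (the Node class and helper names here are illustrative choices, not a prescribed implementation):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def height(node):
    """Height of a subtree; the empty tree has height -1 by definition."""
    if node is None:
        return -1
    return 1 + max(height(node.left), height(node.right))

def balance_factor(node):
    """Difference between the heights of the left and right subtrees."""
    return height(node.left) - height(node.right)

# A node whose left subtree is two levels taller has balance factor +2
# and would trigger a rotation in an AVL tree.
root = Node(5, left=Node(3, left=Node(2)))
print(balance_factor(root))  # 2
```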
A rotation in an AVL tree is a local transformation of the subtree rooted at a node whose balance factor has become either +2 or −2. There are only four types of rotations:
• The first rotation type is called the single right rotation, or R-rotation.
• The symmetric single left rotation, or L-rotation, is the mirror image of the single R-rotation.
• The second rotation type is called the double left-right rotation (LR-rotation). It is, in fact, a combination of two rotations: we perform the L-rotation of the left subtree of root r followed by the R-rotation of the new tree rooted at r.
• The double right-left rotation (RL-rotation) is the mirror image of the double LR-rotation.
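A hedged sketch of the single rotations, with the LR-rotation shown as their composition (node representation as in the sketch above; returning the new subtree root rather than updating parent links in place is an illustrative choice):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def rotate_right(r):
    """Single R-rotation: the left child becomes the new subtree root."""
    pivot = r.left
    r.left = pivot.right   # pivot's right subtree moves under r
    pivot.right = r
    return pivot           # new root of this subtree

def rotate_left(r):
    """Single L-rotation: mirror image of the R-rotation."""
    pivot = r.right
    r.right = pivot.left
    pivot.left = r
    return pivot

def rotate_left_right(r):
    """Double LR-rotation: L-rotate the left subtree, then R-rotate at r."""
    r.left = rotate_left(r.left)
    return rotate_right(r)

# Inserting 5, 6, 8 in that order makes the root's balance factor -2;
# an L-rotation at the root restores balance, with 6 as the new root.
root = Node(5, right=Node(6, right=Node(8)))
root = rotate_left(root)
print(root.key, root.left.key, root.right.key)  # 6 5 8
```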
Example: construction of an AVL tree for the list 5, 6, 8, 3, 2, 4, 7 by successive insertions. The parenthesized number accompanying a rotation's abbreviation indicates the root of the subtree being reorganized.
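Tying these pieces together, a compact sketch of AVL insertion that rebalances on the way back up. Recomputing heights on every call keeps the code short but is slower than the usual practice of caching heights in the nodes (an illustrative simplification; keys are assumed distinct):

```python
class AVLNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def height(n):
    return -1 if n is None else 1 + max(height(n.left), height(n.right))

def balance(n):
    return height(n.left) - height(n.right)

def rotate_right(r):     # R-rotation
    p = r.left; r.left = p.right; p.right = r; return p

def rotate_left(r):      # L-rotation
    p = r.right; r.right = p.left; p.left = r; return p

def insert(root, key):
    """Ordinary BST insertion followed by rebalancing on the way back up."""
    if root is None:
        return AVLNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    b = balance(root)
    if b == 2:                                   # left-heavy
        if balance(root.left) >= 0:
            return rotate_right(root)            # R-rotation
        root.left = rotate_left(root.left)       # LR-rotation
        return rotate_right(root)
    if b == -2:                                  # right-heavy (mirror cases)
        if balance(root.right) <= 0:
            return rotate_left(root)             # L-rotation
        root.right = rotate_right(root.right)    # RL-rotation
        return rotate_left(root)
    return root

root = None
for k in [5, 6, 8, 3, 2, 4, 7]:
    root = insert(root, k)
print(root.key, root.left.key, root.right.key)  # 5 3 7
```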

How efficient are AVL trees? As with any search tree, the critical characteristic is the tree's height. It turns out that the height h of any AVL tree with n nodes is bounded both above and below:

⌊log2 n⌋ ≤ h < 1.4405 log2(n + 2) − 1.3277

These inequalities immediately imply that the operations of searching and insertion are Θ(log n) in the worst case.
The drawbacks of AVL trees are frequent rotations and the need to maintain balance factors for their nodes. These drawbacks have prevented AVL trees from becoming the standard structure for implementing dictionaries. At the same time, their underlying idea, that of rebalancing a binary search tree via rotations, has proved to be very fruitful and has led to discoveries of other interesting variations of the classical binary search tree.

2-3 Trees:
A 2-3 tree is a tree that can have nodes of two kinds: 2-nodes and 3-nodes. A 2-node contains a single key K and has two children: the left child serves as the root of a subtree whose keys are less than K, and the right child serves as the root of a subtree whose keys are greater than K. (In other words, a 2-node is the same kind of node we have in the classical binary search tree.) A 3-node contains two ordered keys K1 and K2 (K1 < K2) and has three children. The leftmost child serves as the root of a subtree with keys less than K1, the middle child serves as the root of a subtree with keys between K1 and K2, and the rightmost child serves as the root of a subtree with keys greater than K2.
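A minimal sketch of searching a 2-3 tree; the node representation (a list of one or two ordered keys plus a list of children) is an assumption made for illustration:

```python
class Node23:
    """A 2-node holds one key and two children; a 3-node holds two
    ordered keys and three children. Leaves have no children."""
    def __init__(self, keys, children=None):
        self.keys = keys                  # [K] or [K1, K2] with K1 < K2
        self.children = children or []    # [] for a leaf

def search(node, key):
    """Return True if key occurs in the 2-3 tree rooted at node."""
    if node is None:
        return False
    if key in node.keys:
        return True
    if not node.children:                 # reached a leaf without finding key
        return False
    # Descend into the child whose key range contains the search key.
    if key < node.keys[0]:
        return search(node.children[0], key)
    if len(node.keys) == 1 or key < node.keys[1]:
        return search(node.children[1], key)
    return search(node.children[2], key)

# A small 2-3 tree: root is a 2-node (5), its left child a 2-node (3),
# its right child a 3-node (8, 9); all leaves on the same level.
root = Node23([5], [Node23([3]), Node23([8, 9])])
print(search(root, 9), search(root, 4))   # True False
```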
As for any search tree, the efficiency of the dictionary operations depends on the tree's height. A 2-3 tree of height h with the smallest number of keys is a full tree of 2-nodes, while one with the largest number of keys is a full tree of 3-nodes. Therefore, for any 2-3 tree of height h with n nodes, we get the inequality

2^(h+1) − 1 ≤ n ≤ 3^(h+1) − 1, and hence log3(n + 1) − 1 ≤ h ≤ log2(n + 1) − 1,

which implies that the height of a 2-3 tree is logarithmic in n, and the dictionary operations are in Θ(log n) in the worst case.

Heaps and Heapsort:


The data structure called the “heap” is definitely not a disordered pile of items as the word's definition in a standard dictionary might suggest. Rather, it is a clever, partially ordered data structure that is especially suitable for implementing priority queues. Recall that a priority queue is a multiset of items with an orderable characteristic called an item's priority, with the following operations:
• finding an item with the highest (i.e., largest) priority
• deleting an item with the highest priority
• adding a new item to the multiset
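These three operations can be sketched with Python's standard heapq module; since heapq maintains a min-heap, priorities are negated here to simulate the max-priority convention used in this section:

```python
import heapq

# heapq implements a min-heap, so store negated priorities to get
# max-priority behaviour.
pq = []
for priority, item in [(3, "low"), (9, "urgent"), (5, "normal")]:
    heapq.heappush(pq, (-priority, item))   # add a new item

print(pq[0][1])                             # find highest priority: 'urgent'
print(heapq.heappop(pq)[1])                 # delete highest priority: 'urgent'
```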
Notion of the Heap
DEFINITION A heap can be defined as a binary tree with keys assigned to its nodes, one key per node, provided the following two conditions are met:
1. The shape property: the binary tree is essentially complete (or simply complete), i.e., all its levels are full except possibly the last level, where only some rightmost leaves may be missing.
2. The parental dominance or heap property: the key in each node is greater than or equal to the keys in its children.
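Anticipating the array representation described in the list of heap properties below (children of h[i] sit at indices 2i and 2i + 1 in a 1-based array), the heap property can be checked in a few lines. A small illustrative checker, not a standard library routine:

```python
def is_heap(h):
    """Check parental dominance on a 1-based array heap.
    h[0] is unused; the children of h[i] are h[2*i] and h[2*i + 1].
    The shape property is automatic in the array representation."""
    n = len(h) - 1
    for i in range(1, n // 2 + 1):            # only parental nodes
        if h[2 * i] > h[i]:
            return False
        if 2 * i + 1 <= n and h[2 * i + 1] > h[i]:
            return False
    return True

print(is_heap([None, 10, 5, 7, 4, 2, 1]))     # True
print(is_heap([None, 10, 5, 7, 4, 2, 8]))     # False: 8 > its parent 7
```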

Key values in a heap are ordered top down; i.e., a sequence of values on any path from the root to a leaf is decreasing (nonincreasing, if equal keys are allowed). However, there is no left-to-right order in key values; i.e., there is no relationship among key values for nodes either on the same level of the tree or, more generally, in the left and right subtrees of the same node.
Important properties of heaps:
• There exists exactly one essentially complete binary tree with n nodes; its height is equal to ⌊log2 n⌋.
• The root of a heap always contains its largest element.
• A node of a heap considered with all its descendants is also a heap.
• A heap can be implemented as an array by recording its elements in the top-down, left-to-right fashion. In such a representation, the parental node keys occupy the first ⌊n/2⌋ positions of the array, while the leaf keys occupy the last ⌈n/2⌉ positions; the children of a key in position i (1 ≤ i ≤ ⌊n/2⌋) are in positions 2i and 2i + 1, and, correspondingly, the parent of a key in position i (2 ≤ i ≤ n) is in position ⌊i/2⌋.

How can we construct a heap for a given list of keys? There are two principal alternatives for doing this.
• The first is the bottom-up heap construction algorithm. It initializes the essentially complete binary tree with n nodes by placing keys in the order given and then “heapifies” the tree as follows. Starting with the last parental node, the algorithm checks whether the parental dominance holds for the key in this node. If it does not, the algorithm exchanges the node's key K with the larger key of its children and checks whether the parental dominance holds for K in its new position. This process continues until the parental dominance for K is satisfied. (Eventually, it has to because it holds automatically for any key in a leaf.) After completing the “heapification” of the subtree rooted at the current parental node, the algorithm proceeds to do the same for the node's immediate predecessor. The algorithm stops after this is done for the root of the tree. A sketch of this procedure is given below.
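A minimal Python sketch of the bottom-up construction on a 1-based array, following the outline above (the function name and the convention of an unused h[0] are illustrative choices):

```python
def heap_bottom_up(h):
    """Build a max-heap in place from h[1..n] (h[0] is unused).
    Starting at the last parental node and moving toward the root,
    sift each key down until parental dominance holds for it."""
    n = len(h) - 1
    for i in range(n // 2, 0, -1):
        k, v = i, h[i]
        heap = False
        while not heap and 2 * k <= n:
            j = 2 * k
            if j < n and h[j + 1] > h[j]:     # pick the larger child
                j += 1
            if v >= h[j]:                     # parental dominance holds
                heap = True
            else:
                h[k] = h[j]                   # move the larger child up
                k = j
        h[k] = v                              # place the key in its final spot

a = [None, 2, 9, 7, 6, 5, 8]                  # the list 2, 9, 7, 6, 5, 8
heap_bottom_up(a)
print(a[1:])                                  # [9, 6, 8, 2, 5, 7]
```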

Let h be the height of the tree. According to the first property of heaps in the list at the beginning of the section, h = ⌊log2 n⌋, or just ⌈log2(n + 1)⌉ − 1 = k − 1 for the specific values of n we are considering (heaps whose tree is full, i.e., n = 2^k − 1). In the worst case of the heap construction algorithm, each key on level i of the tree will travel to the leaf level h. Since moving to the next level down requires two comparisons, one to find the larger child and the other to determine whether the exchange is required, the total number of key comparisons involving a key on level i will be 2(h − i). Therefore, the total number of key comparisons in the worst case will be

C_worst(n) = Σ_{i=0}^{h−1} (number of keys on level i) · 2(h − i) = Σ_{i=0}^{h−1} 2(h − i)2^i = 2(n − log2(n + 1)),

i.e., it is in O(n): with this algorithm, a heap of size n can be constructed with fewer than 2n comparisons.

• The alternative (and less efficient) algorithm constructs a heap by successive insertions of a
new key into a previously constructed heap; some people call it the top-down heap
construction algorithm.
So how can we insert a new key K into a heap? First, attach a new node with key K in it after the last leaf of the existing heap. Then sift K up to its appropriate place in the new heap as follows. Compare K with its parent's key: if the latter is greater than or equal to K, stop (the structure is a heap); otherwise, swap these two keys and compare K with its new parent. This swapping continues until K is not greater than its last parent or it reaches the root. For example, inserting a key with value 10 into a previously constructed heap sifts the new key up via swaps with its parents until it is not larger than its parent (or reaches the root).
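A sketch of this sift-up insertion on the same 1-based array representation (an illustrative helper, not the textbook's pseudocode verbatim):

```python
def heap_insert(h, key):
    """Insert key into max-heap h (1-based array), sifting it up."""
    h.append(key)                       # attach after the last leaf
    i = len(h) - 1
    while i > 1 and h[i // 2] < h[i]:   # parent smaller: swap and continue
        h[i], h[i // 2] = h[i // 2], h[i]
        i //= 2

heap = [None, 9, 6, 8, 2, 5, 7]
heap_insert(heap, 10)
print(heap[1:])                         # [10, 6, 9, 2, 5, 7, 8]
```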

The time efficiency of insertion is in O(log n).

Delete an item from a heap

Deleting the root's key (an item with the highest priority) from a heap can be done as follows:
Step 1: Exchange the root's key with the last key K of the heap.
Step 2: Decrease the heap's size by 1.
Step 3: “Heapify” the smaller tree by sifting K down, exactly as in the bottom-up heap construction algorithm: verify the parental dominance for K and, if it does not hold, swap K with the larger of its two children until the heap property is restored.

The efficiency of deletion is determined by the number of key comparisons needed to “heapify” the tree after the swap has been made and the size of the tree is decreased by 1. Since this cannot require more key comparisons than twice the heap's height, the time efficiency of deletion is in O(log n) as well. A sketch is given below.
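Following the three steps above, a minimal sketch of deleting the maximum key (illustrative function name; same 1-based array convention):

```python
def heap_delete_max(h):
    """Remove and return the root (maximum) key of max-heap h (1-based)."""
    top = h[1]
    h[1] = h[-1]                          # Steps 1-2: move last key to the root
    h.pop()                               # and shrink the heap by one
    n = len(h) - 1
    k = 1
    while 2 * k <= n:                     # Step 3: sift the moved key down
        j = 2 * k
        if j < n and h[j + 1] > h[j]:     # pick the larger child
            j += 1
        if h[k] >= h[j]:                  # parental dominance restored
            break
        h[k], h[j] = h[j], h[k]
        k = j
    return top

heap = [None, 9, 8, 6, 2, 5, 1]
print(heap_delete_max(heap))              # 9
print(heap[1:])                           # [8, 5, 6, 2, 1]
```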
Heapsort is a two-stage algorithm: Stage 1 (heap construction) constructs a heap for a given array, and Stage 2 (maximum deletions) applies the root-deletion operation n − 1 times to the remaining heap. Since we already know that the heap construction stage of the algorithm is in O(n), we have to investigate just the time efficiency of the second stage. The number of key comparisons, C(n), needed for eliminating the root keys from the heaps of diminishing sizes from n to 2 is bounded as follows:

C(n) ≤ 2⌊log2(n − 1)⌋ + 2⌊log2(n − 2)⌋ + . . . + 2⌊log2 1⌋ ≤ 2(n − 1) log2(n − 1),

i.e., C(n) ∈ O(n log n). As a result, the time efficiency of heapsort is in O(n log n) in both the worst and average cases.
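Putting the two stages together, a hedged sketch of heapsort that reuses the sift-down logic from the construction and deletion sketches above:

```python
def heapsort(a):
    """Sort list a in ascending order: build a max-heap (Stage 1), then
    repeatedly swap the root with the last heap element and sift the
    new root down over a shrinking heap (Stage 2)."""
    h = [None] + list(a)                  # 1-based heap array
    n = len(a)

    def sift_down(k, size):
        while 2 * k <= size:
            j = 2 * k
            if j < size and h[j + 1] > h[j]:
                j += 1
            if h[k] >= h[j]:
                break
            h[k], h[j] = h[j], h[k]
            k = j

    for i in range(n // 2, 0, -1):        # Stage 1: bottom-up construction
        sift_down(i, n)
    for size in range(n, 1, -1):          # Stage 2: n - 1 root deletions
        h[1], h[size] = h[size], h[1]
        sift_down(1, size - 1)
    return h[1:]

print(heapsort([2, 9, 7, 6, 5, 8]))       # [2, 5, 6, 7, 8, 9]
```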
