Transform and Conquer
Objectives
To learn and appreciate the following concepts:
Presorting
Balanced Search Trees
Heaps and Heapsort
Problem Reduction
• What is presorting?
• Problem reduction
• Two stages:
1. The transformation stage: the problem’s instance is modified to be, for one reason or another, more amenable to solution.
2. The conquering stage: it is solved.
1. INSTANCE SIMPLIFICATION:
Transformation of an instance of a problem to an instance of the same
problem with some special property that makes the problem easier to
solve.
Example: list pre-sorting, AVL trees
2. REPRESENTATION CHANGE:
Changing one representation of a problem’s instance into another
representation of the same instance.
Example: 2-3 trees, heaps and heapsort
3. PROBLEM REDUCTION:
Transforming a problem given to another problem that can be solved by a
known algorithm.
Example: reduction to graph problems, reduction to linear programming
• The running time of this algorithm is the sum of the time spent on sorting
and the time spent on checking consecutive elements.
• Since the former requires at least n log n comparisons and the latter
needs no more than n − 1 comparisons, it is the sorting part that will
determine the overall efficiency of the algorithm.
• So, if we use a quadratic sorting algorithm here, the entire algorithm will not be more efficient than the brute-force one. But if we use a good sorting algorithm, such as mergesort, with worst-case efficiency in Θ(n log n), the worst-case efficiency of the entire presorting-based algorithm will also be in Θ(n log n) (see the sketch after this list):
T(n) = Tsort(n) + Tscan(n) ∈ Θ(n log n) + Θ(n) = Θ(n log n).
• For the searching problem, presorting the list and then applying binary search takes Θ(n log n) + O(log n) = Θ(n log n), which is inferior to sequential search. The same will also be true for the average-case efficiency. Of course, if we are to search in the same list more than once, the time spent on sorting might well be justified.
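A minimal sketch of the presorting-based uniqueness check described above, assuming any Θ(n log n) comparison sort (Python's built-in sorted is used here for brevity; the function name is illustrative):

```python
def all_elements_unique(a):
    """Presorting-based uniqueness check: sort, then compare adjacent elements."""
    b = sorted(a)                   # Theta(n log n) comparison sort (e.g., mergesort)
    for i in range(len(b) - 1):     # at most n - 1 comparisons
        if b[i] == b[i + 1]:        # duplicates become adjacent after sorting
            return False
    return True

# Example: all_elements_unique([7, 3, 5, 3]) returns False; all_elements_unique([7, 3, 5]) returns True.
```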
• a. Does the problem always have a solution? Does it always have a unique
solution?
• b. Design a reasonably efficient algorithm for solving this problem and indicate
its efficiency class.
Balanced Search Trees
• Computer scientists have expended a lot of effort in trying to find a structure that preserves the
good properties of the classical binary search tree—principally, the logarithmic efficiency of the
dictionary operations and having the set’s elements sorted—but avoids its worst-case degeneracy.
They have come up with two approaches:
(a) The first approach is of the instance-simplification variety:
• An unbalanced binary search tree is transformed into a balanced one. Because of this, such trees are
called self-balancing.
• Specific implementations of this idea differ by their definition of balance.
• An AVL tree requires that the difference between the heights of the left and right subtrees of every node never exceed 1.
• A red-black tree tolerates the height of one subtree being twice as large as that of the other subtree of the same node.
• If an insertion or deletion creates a tree that violates the balance requirement, the tree is restructured by one of a family of special transformations called rotations that restore the balance required.
Balanced Search Trees
(b) The second approach is of the representation-change variety:
• Allow more than one element in a node of a search tree.
• Specific cases of such trees are 2-3 trees, 2-3-4 trees, and more general
and important B-trees.
• They differ in the number of elements admissible in a single node of a
search tree, but all are perfectly balanced.
• DEFINITION: An AVL tree is a binary search tree in which the balance factor of every node,
which is defined as the difference between the heights of the node’s left and right
subtrees, is either 0 or +1 or −1. (The height of the empty tree is defined as −1. Of course,
the balance factor can also be computed as the difference between the numbers of levels
rather than the height difference of the node’s left and right subtrees.)
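A small sketch, under the definition above, of computing heights and balance factors; the Node class and function names are illustrative, and the check covers only the balance requirement, not the search-tree ordering:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def height(node):
    """Height of a subtree; the empty tree has height -1, as in the definition."""
    if node is None:
        return -1
    return 1 + max(height(node.left), height(node.right))

def is_balanced(node):
    """True if every node's balance factor (left height minus right height) is -1, 0, or +1.

    Simple O(n^2) check, written for clarity rather than efficiency.
    """
    if node is None:
        return True
    balance = height(node.left) - height(node.right)
    return abs(balance) <= 1 and is_balanced(node.left) and is_balanced(node.right)
```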
Balanced Search Trees – AVL Trees
• If an insertion of a new node makes an AVL tree unbalanced, we
transform the tree by a rotation.
• A rotation in an AVL tree is a local transformation of its subtree rooted at
a node whose balance has become either +2 or −2.
• If there are several such nodes, we rotate the tree rooted at the
unbalanced node that is the closest to the newly inserted leaf.
• There are only four types of rotations; in fact, two of them are mirror
images of the other two.
The first rotation type is called the single right rotation, or R-rotation. Note
that this rotation is performed after a new key is inserted into the left
subtree of the left child of a tree whose root had the balance of +1 before the
insertion.
The symmetric single left rotation, or L-rotation, is the mirror image of the
single R-rotation. It is performed after a new key is inserted into the right
subtree of the right child of a tree whose root had the balance of −1 before
the insertion
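A sketch of the two single rotations on any node object with left and right attributes (such as the illustrative Node class above). The remaining two rotation types, the double LR- and RL-rotations, are compositions of these; a full AVL insertion would also update heights and balance factors, which is omitted here:

```python
def rotate_right(r):
    """Single R-rotation: the left child becomes the new root of this subtree."""
    new_root = r.left
    r.left = new_root.right      # the child's right subtree is reattached
    new_root.right = r
    return new_root              # caller relinks this as the subtree root

def rotate_left(r):
    """Single L-rotation: mirror image of the R-rotation."""
    new_root = r.right
    r.right = new_root.left
    new_root.left = r
    return new_root
```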
The time efficiencies of searching, insertion, and deletion are all in Θ(log n) in both the worst and average cases.
2. a. For n = 1, 2, 3, 4, and 5, draw all the binary trees with n nodes that satisfy the balance requirement of AVL trees.
b. Draw a binary tree of height 4 that can be an AVL tree and has the smallest number of nodes among all
such trees.
3. a. Construct a 2-3 tree for the list C, O, M, P, U, T, I, N, G. Use the alphabetical order of the letters and
insert them successively starting with the empty tree.
b. Assuming that the probabilities of searching for each of the keys (i.e., the letters) are the same, find the
largest number and the average number of key comparisons for successful searches in this tree.
What is a heap?
• A heap is a complete binary tree whose keys satisfy the parental dominance requirement: the key in every node is greater than or equal to the keys in its children (for a max-heap). A binary tree is a tree in which each node has at most two children.
• A complete binary tree is a binary tree in which every level except possibly the last is completely filled, and all nodes in the last level are as far to the left as possible (see the array-layout sketch below).
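Because a heap is complete, it is conveniently stored in an array. A common 1-based layout (an assumption in the sketches here, with positions 1..n holding the keys top-down, left-to-right) puts the children of the element at index i at indices 2i and 2i + 1:

```python
# Heap stored in heap[1..n]; heap[0] is unused in this 1-based sketch.
def parent(i):
    return i // 2

def left_child(i):
    return 2 * i

def right_child(i):
    return 2 * i + 1

# Example: in [None, 10, 8, 7, 5, 2, 1, 6], the children of 8 (index 2) are 5 and 2 (indices 4 and 5).
```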
Deleting the root’s key from a heap. The key to be deleted is swapped with the last key after which the smaller tree is
“heapified” by exchanging the new key in its root with the larger key in its children until the parental dominance
requirement is satisfied
Insertion and Deletion in Heaps
• Process of Deletion:
• Since deleting an element at an arbitrary position in the heap can be costly, we simply replace the element to be deleted with the last element and then delete the last element of the heap:
• Replace the root (or, in general, the element to be deleted) with the last element.
• Delete the last element from the heap.
• The last element is now placed at the position of the root node, so it may not satisfy the heap property. Therefore, heapify (sift down) the node now at the root position, as in the sketch below.
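A minimal sketch of the deletion procedure just described, for a max-heap stored in the 1-based array layout above; the function names are illustrative:

```python
def sift_down(heap, i, n):
    """Restore parental dominance for the key at index i in heap[1..n] (max-heap)."""
    while 2 * i <= n:
        child = 2 * i
        if child < n and heap[child + 1] > heap[child]:
            child += 1                       # pick the larger of the two children
        if heap[i] >= heap[child]:
            break                            # heap property already holds
        heap[i], heap[child] = heap[child], heap[i]
        i = child

def delete_root(heap, n):
    """Delete the root's key: swap it with the last key, shrink the heap, sift down."""
    heap[1], heap[n] = heap[n], heap[1]
    n -= 1
    sift_down(heap, 1, n)
    return n                                 # new heap size

# Example: heap = [None, 9, 8, 6, 2, 5, 1, 4]; delete_root(heap, 7) leaves 8, 5, 6, 2, 4, 1 in heap[1..6].
```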
Process (figure): the last element, 4, replaces the deleted key, and the tree is then re-heapified.
This means that C(n) ∈ O(n log n) for the second stage of heapsort. For both stages, we get O(n) + O(n log n) = O(n log n). A more detailed analysis shows that the time efficiency of heapsort is, in fact, in Θ(n log n) in both the worst and average cases.
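A compact, self-contained heapsort sketch matching this analysis: Stage 1 builds the heap bottom-up, Stage 2 performs n - 1 root deletions. The names and the internal 1-based layout are illustrative:

```python
def heapsort(a):
    """Heapsort on a Python list (0-based externally, 1-based heap internally)."""
    n = len(a)
    heap = [None] + list(a)                          # heap occupies heap[1..n]

    def sift_down(i, size):
        # Restore parental dominance for the key at index i in heap[1..size].
        while 2 * i <= size:
            child = 2 * i
            if child < size and heap[child + 1] > heap[child]:
                child += 1
            if heap[i] >= heap[child]:
                break
            heap[i], heap[child] = heap[child], heap[i]
            i = child

    # Stage 1: bottom-up heap construction -- O(n).
    for i in range(n // 2, 0, -1):
        sift_down(i, n)

    # Stage 2: n - 1 root deletions -- O(n log n).
    for size in range(n, 1, -1):
        heap[1], heap[size] = heap[size], heap[1]    # move current maximum to its final place
        sift_down(1, size - 1)

    return heap[1:]                                  # sorted in nondecreasing order

# Example: heapsort([2, 9, 7, 6, 5, 8]) returns [2, 5, 6, 7, 8, 9].
```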
Heaps and Heapsort - TRY
1. a. Construct a heap for the list 1, 8, 6, 5, 3, 7, 4 by the bottom-up algorithm.
b. Construct a heap for the list 1, 8, 6, 5, 3, 7, 4 by successive key insertions (top-down algorithm).
c. Is it always true that the bottom-up and top-down algorithms yield the same heap for the same input?
• The adjacency matrix A of a graph and its square A^2 indicate the numbers of paths of length 1 and 2, respectively, between the corresponding vertices of the graph.
• In particular, there are three paths of length 2 that start and end at vertex a: a-b-a, a-c-a, and a-d-a, but only one path of length 2 from a to c: a-d-c.
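A small sketch of this idea: squaring the adjacency matrix counts the paths of length 2. The 4-vertex graph below is an illustrative reconstruction consistent with the counts quoted above, not necessarily the original figure's graph:

```python
def matrix_square(A):
    """Square a 0/1 adjacency matrix; entry [i][j] of the result counts length-2 paths from i to j."""
    n = len(A)
    return [[sum(A[i][k] * A[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

# Illustrative graph on vertices a, b, c, d with edges a-b, a-c, a-d, c-d.
A = [
    [0, 1, 1, 1],   # a
    [1, 0, 0, 0],   # b
    [1, 0, 0, 1],   # c
    [1, 0, 1, 0],   # d
]
A2 = matrix_square(A)
# A2[0][0] == 3: the three length-2 paths a-b-a, a-c-a, a-d-a.
# A2[0][2] == 1: the single length-2 path a-d-c.
```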
Reduction of Optimization Problems: Maximization and Minimization
• Suppose now that you have to find a minimum of some function f(x) and you know an algorithm for maximizing the function.
• The answer lies in the simple formula min f(x) = −max[−f(x)].
• This formula suggests that to find the minimum of a
function f(x), you can maximize its negative −f(x) instead.
Then, to obtain the correct minimum value of the original
function, you simply change the sign of the maximum
value found.
• Similarly, max f(x) = −min[−f(x)] is valid as well; it shows how a maximization problem can be reduced to an equivalent minimization problem.
• This property simplifies optimization problem-solving by allowing
algorithms designed for one type of optimization (maximization or
minimization) to be adapted easily to solve the other type.
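A one-line illustration of this reduction; the maximize argument stands for whatever maximization routine is available, and all names below are illustrative:

```python
def minimize_via_maximize(f, maximize):
    """Reduce minimization to maximization: min f(x) = -max[-f(x)]."""
    x_star, max_value = maximize(lambda x: -f(x))   # maximize the negated function
    return x_star, -max_value                       # flip the sign back to get min f(x)

# Toy exhaustive "maximizer" over a finite domain, used only to exercise the reduction.
def maximize_over(domain):
    def maximize(g):
        best = max(domain, key=g)
        return best, g(best)
    return maximize

x, fx = minimize_via_maximize(lambda x: (x - 3) ** 2, maximize_over(range(10)))
# x == 3 and fx == 0: the minimizer and minimum of (x - 3)^2 on 0..9.
```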