
Module 3

Textbook: ‘Introduction to the Design and Analysis of Algorithms’ by Anany Levitin, 3rd edition

Prepared by
Prof. Deepthi N
Assistant Professor

Dept. of ISE
Syllabus
• TRANSFORM-AND-CONQUER: Balanced Search Trees, Heaps and
Heapsort.
• SPACE-TIME TRADEOFFS: Sorting by Counting: Comparison counting
sort, Input Enhancement in String Matching: Horspool’s Algorithm.

TRANSFORM-AND-CONQUER
(chapter-6)

Introduction
• Many algorithmic problems are easier to solve if their input is sorted.
• The Transform-and-Conquer strategy works as follows:

problem's instance  →  simpler instance, or another representation, or another problem's instance  →  solution

Transform and Conquer
A problem's instance or its representation can be transformed in one of three
ways:
• The instance of the problem can be transformed into a simpler instance of the
same problem; this is called instance simplification.
• The data structure can be transformed into a different, more efficient
representation of the same instance; this is called representation change.
• The problem can be transformed into a different problem that is easier to solve
or already has a known algorithm; this is called problem reduction.

Balanced Search Tree
• A balanced binary search tree automatically keeps its height small
(guaranteed to be logarithmic) under any sequence of insertions and
deletions. Such structures provide efficient implementations of abstract
data types such as associative arrays.
• An AVL tree requires that the difference between the heights of the left
and right subtrees of every node never exceed 1.
• The second approach allows more than one element in a node of a search tree.
Specific cases of such trees are 2-3 trees, 2-3-4 trees, and the more general
and important B-trees.

Balanced Search Tree Contd.
• The two approaches to balanced search trees are:
  • AVL trees
  • B-trees

AVL Tree
• An AVL tree is a self-balancing binary search tree (BST) in which the
difference between the heights of the left and right subtrees cannot be more
than one for any node.
• The height h of any AVL tree with n nodes satisfies the inequalities
⌊log2 n⌋ ≤ h < 1.4405 log2(n + 2) − 1.3277.
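
The AVL balance condition above can be checked directly. Below is a minimal Python sketch (an illustration, not part of the textbook) using a hypothetical plain binary-tree Node class; it only verifies the balance condition and does not perform the rotations an AVL tree uses to restore it.

class Node:
    """A plain binary-tree node (hypothetical helper for this sketch)."""
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def height(node):
    """Height of a binary tree; the empty tree has height -1 by convention."""
    if node is None:
        return -1
    return 1 + max(height(node.left), height(node.right))

def is_avl_balanced(node):
    """Check the AVL condition: |height(left) - height(right)| <= 1 at every node."""
    if node is None:
        return True
    if abs(height(node.left) - height(node.right)) > 1:
        return False
    return is_avl_balanced(node.left) and is_avl_balanced(node.right)

# A three-node BST rooted at 2 is balanced; a right-leaning chain 1 -> 2 -> 3 is not.
print(is_avl_balanced(Node(2, Node(1), Node(3))))               # True
print(is_avl_balanced(Node(1, None, Node(2, None, Node(3)))))   # False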

Two-Three Tree
• A 2-3 tree is a tree that can have nodes of two kinds:
• 2-nodes
• 3-nodes
• A 2-node contains a single key K and has two children.
• A 3-node contains two ordered keys K1 and K2 (K1<K2) and has three
children.
• The height h of any 2-3 tree with n nodes satisfies the following lower and
upper bounds: log3(n + 1) − 1 ≤ h ≤ log2(n + 1) − 1.
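
To make the 2-node/3-node structure concrete, here is a small Python sketch of searching a 2-3 tree. It is an illustration only; Node23 and search_23 are hypothetical names, and insertion (with node splitting) is not shown.

class Node23:
    """A 2-3 tree node: one key with two children, or two keys with three children."""
    def __init__(self, keys, children=None):
        self.keys = keys                  # [K] for a 2-node, [K1, K2] with K1 < K2 for a 3-node
        self.children = children or []    # [] for a leaf

def search_23(node, key):
    """Return True if key occurs in the 2-3 tree rooted at node."""
    if node is None:
        return False
    if key in node.keys:
        return True
    if not node.children:                 # reached a leaf without finding the key
        return False
    if key < node.keys[0]:
        return search_23(node.children[0], key)    # left subtree: keys < K1
    if len(node.keys) == 1 or key < node.keys[1]:
        return search_23(node.children[1], key)    # middle child of a 3-node, or right child of a 2-node
    return search_23(node.children[2], key)        # right subtree of a 3-node: keys > K2

# A tiny 2-3 tree: the root is a 2-node with key 5; its children are a 3-node (2, 3) and a 2-node (8).
root = Node23([5], [Node23([2, 3]), Node23([8])])
print(search_23(root, 3), search_23(root, 7))      # True False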

Heap
• A heap is a binary tree with keys at its nodes (one key per node) such
that:
• It is essentially complete, i.e., all its levels are full except possibly the last level,
where only some rightmost leaves may be missing.
• The key in each node is ≥ the keys in its children (this is called a max-heap).
• The heap is also the data structure that serves as a cornerstone of a
theoretically important sorting algorithm called heapsort.
• The key values in a heap are ordered top down; i.e., the sequence of values on
any path from the root to a leaf is nonincreasing (decreasing if all keys are
distinct), as the check below illustrates.
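
The "parental dominance" requirement can be checked directly on the array representation of a heap, where positions 1..n hold the keys and the children of position i sit at 2i and 2i + 1. A minimal Python sketch, not from the slides:

def is_max_heap(h):
    """Check parental dominance for a heap stored in h[1..n]; h[0] is unused."""
    n = len(h) - 1
    for i in range(1, n // 2 + 1):                   # only parental (non-leaf) positions
        if h[i] < h[2 * i]:                          # left child
            return False
        if 2 * i + 1 <= n and h[i] < h[2 * i + 1]:   # right child, if it exists
            return False
    return True

print(is_max_heap([None, 10, 5, 7, 4, 2, 1]))   # True
print(is_max_heap([None, 10, 5, 7, 2, 1, 9]))   # False: 7 < 9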

Heap Sort
• Heapsort is a two-stage algorithm:
  • Stage 1 (heap construction): construct a heap for the given array, e.g., by the
    bottom-up algorithm given below.
  • Stage 2 (maximum deletions): apply the root-deletion operation n − 1 times to
    the remaining heap.

Heap Bottom Up Algorithm
HeapBottomUp(H[1..n])
//Constructs a heap from the elements of a given array by the bottom-up algorithm
//Input: An array H[1..n] of orderable items
//Output: A heap H[1..n]
for i ← ⌊n/2⌋ downto 1 do
    k ← i; v ← H[k]
    heap ← false
    while not heap and 2 ∗ k ≤ n do
        j ← 2 ∗ k
        if j < n                  //there are two children
            if H[j] < H[j + 1] j ← j + 1
        if v ≥ H[j]
            heap ← true
        else H[k] ← H[j]; k ← j
    H[k] ← v
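
The pseudocode above translates almost line for line into Python. The sketch below (an illustration, not the textbook's code) builds a max-heap in place for an array stored in positions 1..n, with h[0] left as an unused placeholder:

def heap_bottom_up(h):
    """Bottom-up construction of a max-heap in h[1..n]; h[0] is an unused placeholder."""
    n = len(h) - 1
    for i in range(n // 2, 0, -1):        # sift down every parental node, last to first
        k, v = i, h[i]
        heap = False
        while not heap and 2 * k <= n:
            j = 2 * k
            if j < n and h[j] < h[j + 1]: # pick the larger of the two children
                j += 1
            if v >= h[j]:
                heap = True
            else:
                h[k] = h[j]
                k = j
        h[k] = v
    return h

print(heap_bottom_up([None, 2, 9, 7, 6, 5, 8]))   # [None, 9, 6, 8, 2, 5, 7]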

Maximum Key Deletion from a heap
• Maximum Key Deletion from a heap is as follows:
• Step 1 Exchange the root’s key with the last key K of the heap.
• Step 2 Decrease the heap’s size by 1.
• Step 3 “Heapify” the smaller tree by sifting K down the tree exactly in the
same way we did it in the bottom-up heap construction algorithm.
• As a result of applying this operation n − 1 times, the keys are eliminated in
decreasing order; under the array implementation, the deleted keys accumulate at
the end of the array, leaving it sorted in increasing order. This is heapsort's
second stage, illustrated by the sketch below.
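
Putting the two stages together gives heapsort. The following Python sketch is an illustration under the same 1-based array convention as the previous sketch (not the textbook's code); it repeats the sift-down step and then performs the n − 1 maximum-key deletions described above:

def heapsort(a):
    """Heapsort sketch: build a max-heap bottom-up, then apply n - 1 maximum-key deletions."""
    h = [None] + list(a)                  # 1-based heap; h[0] is an unused placeholder
    n = len(a)

    def sift_down(k, size):
        # Same sift-down step as in the bottom-up construction sketch above.
        v = h[k]
        while 2 * k <= size:
            j = 2 * k
            if j < size and h[j] < h[j + 1]:
                j += 1
            if v >= h[j]:
                break
            h[k] = h[j]
            k = j
        h[k] = v

    for i in range(n // 2, 0, -1):        # stage 1: bottom-up heap construction
        sift_down(i, n)
    for last in range(n, 1, -1):          # stage 2: n - 1 maximum-key deletions
        h[1], h[last] = h[last], h[1]     # step 1: exchange the root's key with the last key
        sift_down(1, last - 1)            # steps 2-3: decrease the heap's size by 1 and heapify
    return h[1:]

print(heapsort([2, 9, 7, 6, 5, 8]))       # [2, 5, 6, 7, 8, 9]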

Space and Time Trade-Offs
(chapter-7)

Introduction
• A space-time (or time-memory) trade-off is a way of solving a problem
or calculation in less time by using more storage space (memory),
or in very little space by spending more time.
Most computers have a large amount of space, but not infinite space.
• Space and time trade-offs in algorithm design are a well-known issue
for both theoreticians and practitioners of computing.
• In somewhat more general terms, the idea of using space and time
trade-offs is to preprocess the problem’s input, in whole or in part,
and store the additional information obtained to accelerate solving
the problem afterward.

Input Enhancement in Space and Time Trade-Offs
• The first approach, input enhancement, preprocesses the problem's input and
stores the additional information obtained; the more standard terms for this
idea are preprocessing and preconditioning.
• These terms can also be applied to methods that use the idea of
preprocessing but do not use extra space.
• The following algorithms are based on it:
  • counting methods for sorting
  • the Boyer-Moore algorithm for string matching and its simplified version, Horspool's algorithm

Techniques for Space and Time Trade-off
• The second type of technique that exploits space-for-time trade-offs simply uses
extra space to facilitate faster and/or more flexible access to the data.
We call this approach prestructuring.
• Unlike the input-enhancement variety, it deals with access structuring.
We illustrate this approach with:
  • hashing
  • indexing with B-trees
• Dynamic programming is a third example of a space-time trade-off.
This strategy is based on recording solutions to overlapping subproblems of a
given problem in a table, from which a solution to the problem in question is
then obtained.

Sorting by Counting (Comparison-Counting Sort)
• The idea of sorting by counting is to count, for each element of a list
to be sorted, the total number of elements smaller than this
element and record the results in a table.
• These numbers will indicate the positions of the elements in the
sorted list: e.g., if the count is 10 for some element, it should be in
the 11th position (with index 10, if we start counting with 0) in the
sorted array.

Comparison-Counting sort Algorithm
ComparisonCountingSort(A[0..n − 1])
//Sorts an array by comparison counting
//Input: An array A[0..n − 1] of orderable elements
//Output: Array S[0..n − 1] of A's elements sorted in non-decreasing order
for i ← 0 to n − 1 do Count[i] ← 0
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        if A[i] < A[j]
            Count[j] ← Count[j] + 1
        else Count[i] ← Count[i] + 1
for i ← 0 to n − 1 do S[Count[i]] ← A[i]
return S
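
A direct Python transcription of the pseudocode, offered as a sketch rather than a reference implementation:

def comparison_counting_sort(a):
    """Sorts list a by comparison counting, mirroring the pseudocode above."""
    n = len(a)
    count = [0] * n
    s = [None] * n
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[i] < a[j]:
                count[j] += 1     # a[j] is larger, so it goes after a[i]
            else:
                count[i] += 1     # a[i] is larger (or equal), so it goes after a[j]
    for i in range(n):
        s[count[i]] = a[i]        # count[i] is the final position of a[i]
    return s

print(comparison_counting_sort([62, 31, 84, 96, 19, 47]))   # [19, 31, 47, 62, 84, 96]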

Time Efficiency of Comparison-Counting sort
Algorithm
• The time efficiency should be quadratic because the algorithm
considers all the different pairs of an n-element array.
• Thus, the efficiency of the algorithm is –

C(n) = Σ_{i=0}^{n−2} Σ_{j=i+1}^{n−1} 1 = Σ_{i=0}^{n−2} [(n − 1) − (i + 1) + 1] = n(n − 1)/2

• Thus, the algorithm makes the same number of key comparisons as
selection sort and, in addition, uses a linear amount of extra space.

Input Enhancement in String Matching
• String matching is the problem of finding an occurrence of a given string of
m characters, called the pattern, in a longer string of n characters,
called the text.
• The brute-force algorithm simply matches corresponding pairs of characters in
the pattern and the text left to right and, if a mismatch occurs, shifts the
pattern one position to the right for the next trial.
• Since the maximum number of such trials is n − m + 1 and, in the
worst case, m comparisons need to be made on each of them, so:
• worst-case efficiency of the brute-force algorithm is in the O(nm) class
• average-case efficiency of the brute-force algorithm is in the O(n + m) class
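
For comparison with the input-enhanced algorithms that follow, here is a short Python sketch (not from the slides) of the brute-force matcher described above:

def brute_force_match(pattern, text):
    """Brute-force string matching: slide the pattern one position at a time."""
    m, n = len(pattern), len(text)
    for i in range(n - m + 1):                       # at most n - m + 1 trial positions
        j = 0
        while j < m and pattern[j] == text[i + j]:   # compare left to right
            j += 1
        if j == m:                                   # all m characters matched
            return i
    return -1

print(brute_force_match("BER", "BARBER"))   # 3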

Input Enhancement in String Matching Contd.
• Several string-searching algorithms are based on the input-enhancement idea
of preprocessing the pattern:
  • The Knuth-Morris-Pratt (KMP) algorithm preprocesses the pattern left to right to get
    useful information for later searching.
  • The Boyer-Moore algorithm preprocesses the pattern right to left and stores the
    information in two tables.
  • Horspool's algorithm simplifies the Boyer-Moore algorithm by using just one
    table.

Horspool’s Algorithm
• In addition to being simpler, Horspool's algorithm is not necessarily less
efficient than the Boyer-Moore algorithm on random strings.
• The algorithm works as follows:
  • Starting with the last character of the pattern and moving right to left, we compare
    the corresponding pairs of characters in the pattern and the text.
  • If all the pattern's characters match successfully, a matching substring is found.
    Then the search can be either stopped altogether or continued if another
    occurrence of the same pattern is desired.
  • If a mismatch occurs, we shift the pattern to the right; the size of the shift is
    looked up in a precomputed shift table.

Horspool’s Algorithm Case Study
• Consider, as an example, searching for the pattern BARBER. Let c be the character
of the text aligned against the last character of the pattern. Four cases can occur:
1. Case 1 – If there are no c's in the pattern, we can safely shift the
pattern by its entire length m.
2. Case 2 – If there are occurrences of c in the pattern but c is not the last
character of the pattern, the shift should align the rightmost occurrence of c
in the pattern with the c in the text.
3. Case 3 – If c happens to be the last character in the pattern but there are
no c's among its other m − 1 characters, the pattern should be shifted by its
entire length m.
4. Case 4 – If c happens to be the last character in the pattern and there are
other c's among its first m − 1 characters, the shift should align the rightmost
occurrence of c among the first m − 1 characters with the c in the text.

Horspool’s Algorithm: Shift Table Construction
ShiftTable(P[0..m − 1])
//Fills the shift table used by Horspool's and Boyer-Moore algorithms
//Input: Pattern P[0..m − 1] and an alphabet of possible characters
//Output: Table[0..size − 1] indexed by the alphabet's characters and
//        filled with shift sizes computed according to the four cases above
for i ← 0 to size − 1 do Table[i] ← m
for j ← 0 to m − 2 do Table[P[j]] ← m − 1 − j
return Table
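
A dictionary-based Python sketch of the shift table (the function name shift_table is ours, not the textbook's); using a dict means any character absent from the pattern implicitly gets the default shift m:

def shift_table(pattern):
    """Horspool shift table as a dictionary; characters not in it shift by m."""
    m = len(pattern)
    table = {}
    for j in range(m - 1):                 # all pattern characters except the last one
        table[pattern[j]] = m - 1 - j      # distance from the rightmost occurrence to the last cell
    return table

print(shift_table("BARBER"))   # {'B': 2, 'A': 4, 'R': 3, 'E': 1}; every other character shifts by 6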

Horspool’s algorithm pseudocode
HorspoolMatching(P[0..m − 1], T[0..n − 1])
//Implements Horspool's algorithm for string matching
//Input: Pattern P[0..m − 1] and text T[0..n − 1]
//Output: The index of the left end of the first matching substring or −1 if there are no matches
ShiftTable(P[0..m − 1])           //generate Table of shifts
i ← m − 1                         //position of the pattern's right end in the text
while i ≤ n − 1 do
    k ← 0                         //number of matched characters
    while k ≤ m − 1 and P[m − 1 − k] = T[i − k] do
        k ← k + 1
    if k = m
        return i − m + 1
    else i ← i + Table[T[i]]
return −1
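
The complete algorithm in Python, shown as a self-contained sketch (it rebuilds the shift table inline rather than calling the previous sketch's function):

def horspool_matching(pattern, text):
    """Horspool's algorithm; returns the index of the left end of the first match, or -1."""
    m, n = len(pattern), len(text)
    shift = {pattern[j]: m - 1 - j for j in range(m - 1)}   # same table as the sketch above
    i = m - 1                     # index of the text character aligned with the pattern's last character
    while i <= n - 1:
        k = 0                     # number of characters matched so far, right to left
        while k <= m - 1 and pattern[m - 1 - k] == text[i - k]:
            k += 1
        if k == m:
            return i - m + 1      # left end of the matching substring
        i += shift.get(text[i], m)                          # characters not in the pattern shift by m
    return -1

print(horspool_matching("BARBER", "JIM SAW ME IN A BARBERSHOP"))   # 16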
