Daa Notes Nep
What is an Algorithm?
"An algorithm is a step-by-step procedure for performing some task in a finite amount of time."
Characteristics of an Algorithm:
1. Finiteness: An algorithm must always terminate after a finite number of steps. It means that an
algorithm cannot go on forever.
Example: An algorithm to calculate the sum of all the numbers in a list would iterate through each
number in the list once and then stop. It doesn't matter if the list has ten numbers or ten million, the
algorithm will always finish in a finite number of steps.
2. Definiteness: Each step of the algorithm must be clearly defined and unambiguous.
Example: Consider an algorithm for preparing coffee: the steps can't just say "add some coffee
powder" or "add sugar". They must specify exact measurements and times, like "add 2 spoons of
coffee powder" and "add 1 spoon of sugar", so there is no ambiguity about what to do.
3. Input: An algorithm must take zero or more inputs, which are the values that the algorithm
operates on to produce the output.
Example: A GPS navigation algorithm takes as input the current location and the destination
location. Without these specific inputs, the algorithm would not be able to provide the correct
output.
4. Output: An algorithm must produce at least one output. Output is the result of applying the
algorithm to the input(s).
Example: In GPS navigation, the output would be the step-by-step directions to reach the
destination from the current location. The output is directly dependent on the provided inputs.
5. Effectiveness: Each operation should be effective, i.e., each operation must be basic enough to be
carried out in a finite amount of time.
Example: Consider an algorithm to sort a list of numbers in ascending order. In this algorithm,
operations might include comparing two numbers and swapping them if they're in the wrong order.
These are basic operations that must be completed in a finite amount of time for an algorithm to be
effective.
6. Versatility or Flexibility: This feature reflects that a single problem can be solved by different
algorithmic techniques.
Example: Sorting a list of numbers can be accomplished by various algorithms such as Bubble Sort,
Quick Sort, or Merge Sort. Each of these algorithms has its own strengths and weaknesses and
performs better under certain conditions. Another classic example is searching for an item in a list.
A simple linear search could be used, or a more efficient binary search could be used if the list is sorted.
2. Decision Making: Before designing an algorithm, the following decisions must be made.
(a) Understanding Computational Capabilities: Understanding the nature of the device
on which an algorithm will run is critical.
Example: Consider two different types of computers: one is based on the Random
Access Machine (RAM) model, common in many traditional computing environments.
This model assumes that instructions are executed one after another, sequentially.
Another type of computer could be a modern parallel processing system, which allows
multiple operations to execute concurrently. An algorithm designed for the RAM model
may not take full advantage of the parallel processing system's capabilities.
(b) Considering Speed and Memory: The specific speed and memory of the machine can
significantly impact the design of an algorithm.
Example: Suppose a machine is slow and has limited memory. In that case, an algorithm designed
to process a large dataset (such as analysing the Aadhar details of an entire nation) might be
inefficient or impractical. However, if the machine is powerful with
ample memory, this could allow for more complex and resource-intensive algorithms.
(c) Choosing between Exact and Approximate Problem Solving: The nature of the
problem also influences the choice of algorithm design. Based on the nature of the
problem, we can either choose Exact Algorithm or Approximation Algorithm.
Example: A problem like calculating the square root of a number might be solved using an exact
algorithm, while a problem like a weather prediction model could provide a sufficiently accurate
forecast without needing an exact calculation.
(d) Choosing Algorithm Design Techniques: Choosing the right algorithm design techniques is
crucial in problem-solving.
Examples: Divide and Conquer, Greedy Algorithms, Dynamic Programming, Backtracking, Branch
and Bound, etc. Each technique has its unique characteristics, advantages, and drawbacks, so
knowing how and when to use them is a big part of solving problems with algorithms.
3. Designing an Algorithm and Data Structures:
Designing an algorithm involves determining the actual steps required to solve a problem. This
process includes selecting appropriate data structures, such as arrays, lists, or trees, based on the
nature of the operations that need to be performed.
Example: If an algorithm needs to store a list of numbers and perform sorting operations, an array
data structure might be ideal because it allows for efficient sorting.
4. Proving an Algorithm's Correctness:
The correctness of an algorithm signifies that it provides the desired output for every valid input
within a finite amount of time. After the specification of an algorithm, an important phase is the
proof of its correctness. This process assures that the algorithm produces an expected outcome for
all suitable inputs within a finite amount of time.
5. Analyzing the Algorithm:
Analyzing an algorithm means checking how well it performs. This involves looking at two things:
(a) Time Efficiency: This refers to how quickly an algorithm can execute and deliver the desired
outcome. A good algorithm should be quick and not take too much time to finish.
Example: The Bubble Sort algorithm is known for its simplicity, but it's not time-efficient when it
comes to large data sets. Quick Sort is a sorting algorithm that performs significantly faster with
large data sets, making it more time-efficient in such cases.
(b) Space Efficiency: This is about how much memory or space the algorithm needs to do its work.
A good algorithm should not require too much extra memory.
Example: The Merge Sort algorithm is time-efficient but not space-efficient because it requires
extra space proportional to the size of the input data. Heap Sort performs sorting without needing
additional memory, and hence it is more space-efficient than Merge Sort.
6. Coding an Algorithm:
The final step involves translating the algorithm into a programming language. Converting an
algorithm into computer code is what we call "coding an algorithm".
Example: An example of coding an algorithm is implementing the bubble sort algorithm in a
programming language like Python, Java, C, C++, etc.
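For instance, a minimal bubble sort sketch in Python (the function name and the sample list are ours, for illustration only):

def bubble_sort(arr):
    # Repeatedly step through the list, swapping adjacent
    # elements that are out of order.
    n = len(arr)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]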
Problem Types:
The important problem types in computer science are given below.
Sorting
String Processing
Graph Problems
Searching
Combinatorial Problems
Geometric Problems
Numerical Problems
1. Sorting Problems:
Sorting refers to the operation of arranging the given data items either in ascending (numerically or
lexicographically) or descending order.
There are different methods that are used to sort the data. They can be divided into two categories:
internal sorting and external sorting. If the data to be sorted is in main memory, it is called
Internal Sorting, and if it is stored on secondary storage devices, it is called External Sorting.
Examples of Sorting Algorithms:
Bubble Sort
Insertion Sort
Selection Sort
Merge Sort
Radix Sort
Quick Sort
Heap Sort
2. Searching Problems:
Searching refers to finding an item in a list of entries. It is one of the most common operations in
data processing. Searching for an employee's details in a database or a telephone number in a
telephone directory are a few daily-life instances.
Examples of Searching Algorithms
• Linear Search
• Binary Search
• Hash Search
• Breadth First Search
• Depth-First Search
3. String Processing Problems:
A string is a sequence of characters, and string processing problems involve manipulating strings to
perform various operations such as searching, matching, and editing.
Examples of String Processing Problems:
• Searching for a particular word or pattern in a text file.
• Replacing all instances of one word with another in a text file.
• Counting the number of occurrences of a particular character or substring in a string.
• Matching DNA sequences to identify genetic mutations.
• Parsing and analyzing programming languages.
4. Combinatorial Problems:
Combinatorial problems are a type of problem in computer science and mathematics that involve
counting or generating combinations or permutations of objects.
Examples of Combinatorial problems:
• Traveling Salesman Problem
• Shortest Path Problem
• Identifying all possible subsets of a set.
• Generating all possible permutations of a set.
• Selecting k objects from n objects.
5. Graph Problems:
Graph problems involve analyzing and manipulating graphs, which are mathematical structures
consisting of nodes (also called vertices) connected by edges.
Examples of Graph Problems:
1. Shortest Path Problem: Finding the shortest path between two nodes in a graph. This problem
is commonly used in navigation systems to find the fastest route between two locations.
2. Minimum Spanning Tree Problem: Finding the minimum set of edges that connects all nodes in
a graph. This problem is commonly used in network design to minimize the cost of connecting
multiple locations.
3. Graph Coloring Problem: Assigning colors to nodes in a graph such that no two adjacent nodes
have the same color. This problem is commonly used in scheduling problems where tasks need to be
assigned to resources without conflicts.
4. Topological Sort: Topological sorting is a technique used to order the nodes in a directed acyclic
graph (DAG) such that for every directed edge from node A to node B, node A comes before node B
in the ordering. This technique is commonly used in scheduling problems where tasks need to be
performed in a specific order without conflicts.
6. Geometric Problems: Geometric problems involve analyzing and manipulating geometric
objects such as points, lines, and polygons. These problems can arise in a wide range of applications
such as computer graphics, robotics, and tomography.
7. Numerical Problems:
Numerical problems involve computations with numbers. This could include tasks like finding roots
of equations, performing matrix operations, or solving differential equations.
Examples of Numerical Problems:
• Finding Roots of Equations
• Performing Matrix Operations
• Solving Differential Equations
2.1.1 Space Complexity (continued)
(b) The Variable Dynamic Part: This is the space that changes depending on the specific
problem the algorithm is solving at runtime. This could include space for variables whose size
depends on the problem being solved, space for referenced variables, and the recursion stack space.
The space required for the dynamic part of a program is denoted as Sp.
The overall space requirement for an algorithm is the sum of the fixed static part storage and the
variable dynamic part storage. If P is a program, then the space required for program P is denoted
by S(P):
S(P) = Cp + Sp
2.1.2 Time Complexity
The amount of time needed to run the program is termed time efficiency or time complexity. The
total time taken by a program is the sum of the compile time and the runtime. The compile time
does not depend on the instance characteristics and can be assumed to be a constant factor, so we
concentrate on the runtime of a program. Let this runtime be denoted by tp (instance
characteristics); then
tp(n) = ta·ADD(n) + ts·SUB(n) + tm·MUL(n) + ...
where n denotes the instance characteristics; ta, ts, tm, ... denote the time needed for an addition,
subtraction, multiplication, and so on; and ADD(n), SUB(n), MUL(n), ... denote the number of
additions, subtractions, multiplications, and so on, performed when the code for the program is
run on an instance of characteristic n.
Variations of Decrease-and-Conquer
There are three major variations of decrease-and-conquer:
1. Decrease by a Constant: In this variation, the problem size is reduced by a constant
amount (most often by one) at each iteration.
Example: If we are trying to search for an item in an array of 100 items, and we are
checking one item at a time, we are reducing the problem size by a constant of 1 at each
step.
2. Decrease by a Constant Factor: In this variation, the problem size is reduced by a
constant factor (often by half) on each iteration. This means the problem size is not
reduced by a constant amount, but rather by a proportion of its current size.
Example: Let us consider the binary search algorithm. If we are trying to search for an item
in a sorted list of 1024 items, we can cut the list in half at each step depending on whether
the item we are searching for is larger or smaller than the middle item. After the first step,
we only need to search through 512 items. After the second step, we only need to search
through 256 items, and so forth. Here, the problem size decreases by a constant factor
(1/2) at each step.
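As a hedged sketch, binary search in Python (identifiers and sample data are ours):

def binary_search(a, target):
    # a must be sorted; each step halves the search range,
    # so the problem size decreases by a constant factor of 1/2.
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

print(binary_search([2, 5, 8, 12, 16, 23, 38], 23))  # 5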
3. Variable Size Decrease: In this variation, the amount by which the problem size
decreases varies from one iteration to another.
Examples: Consider the process of identifying the factors of a number (let's say 100). The
first step might involve dividing by 2 to get 50, then by 2 again to get 25, followed by
dividing by 5 to get 5, and finally dividing by 5 again to get 1. The size of the problem is
decreasing at each step, but not by a constant amount or a constant factor. The amount by
which the problem decreases can vary based on the particular number being factored.
Let us discuss the examples below, which use the decrease-by-a-constant method:
(a) Insertion Sort
(b) Topological Sorting
(c) Algorithm for Generating Combinatorial Objects.
(a) Insertion Sort:
Insertion Sort is a simple sorting algorithm that works by iteratively building a sorted
list from an unsorted list of elements. The algorithm works by iterating over each element
in the unsorted list and inserting it into its correct position among the already sorted
elements. At each iteration, the size of the unsorted portion of the list is reduced by one
element, which is removed and inserted into its correct position among the sorted
elements. The insertion sort is efficient only when the number of elements to be sorted is small.
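A short Python sketch of insertion sort as described above (identifiers ours):

def insertion_sort(arr):
    # Grow a sorted prefix one element at a time.
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift larger sorted elements right to make room for key.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr

print(insertion_sort([12, 11, 13, 5, 6]))  # [5, 6, 11, 12, 13]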
Topological Sorting
A DAG (Directed Acyclic Graph) is a type of graph where the edges have a direction and
there are no cycles. In other words, you can't start at a vertex and follow the edges to get
back to the same vertex.
Meaning and Definition: Topological Sorting
Topological sorting is a way to order the vertices in a DAG such that if there is an edge
from vertex A to B then A comes before B in the ordering.
Algorithm 1: Topological Sort (G)
1. Find the in-degree INDEG(N) of each node N of G.
2. Put all the nodes with zero in-degree in a queue Q.
3. Repeat Steps 4 and 5 until the queue Q becomes empty.
4. Remove the front node N of the queue Q and add it to T.
(Set Front = Front + 1)
5. Repeat the following for each neighbor M of the node N:
a. Set INDEG(M) = INDEG(M) - 1
[delete the edge from N to M]
b. If INDEG(M) = 0, then add M to the rear end of Q.
6. Exit
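The following Python sketch implements the same procedure (often called Kahn's algorithm); the dictionary-based adjacency-list representation and the sample graph are assumptions of this sketch:

from collections import deque

def topological_sort(graph):
    # graph: dict mapping each node to a list of its neighbours.
    indeg = {n: 0 for n in graph}
    for n in graph:
        for m in graph[n]:
            indeg[m] += 1
    q = deque(n for n in graph if indeg[n] == 0)
    order = []
    while q:
        n = q.popleft()            # remove a zero in-degree node
        order.append(n)
        for m in graph[n]:         # delete its outgoing edges
            indeg[m] -= 1
            if indeg[m] == 0:
                q.append(m)
    return order  # a valid ordering only if it contains every node

g = {'A': ['C'], 'B': ['C', 'D'], 'C': ['E'], 'D': ['E'], 'E': []}
print(topological_sort(g))  # e.g. ['A', 'B', 'C', 'D', 'E']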
We'll find a node with an indegree of zero and add it to the topological ordering.
Once a node is added to the topological ordering, we can take the node, and its outgoing
edges, out of the graph.
Now we'll grab a node with an in-degree of 0, add it to our topological ordering, remove it
from the graph, and repeat until every node has been added to the ordering.
Problem-02:
Find the number of different topological orderings possible for the given graph-
Solution-
The topological orderings of the above graph are found in the following steps-
Step-01:
Write in-degree of each vertex-
Step-02:
Vertex-B has the least in-degree.
So, remove vertex-B and its associated edges.
Now, update the in-degree of other vertices.
Step-03:
There are two vertices with the least in-degree. So, following 2 cases are possible-
In case-01,
Remove vertex-C and its associated edges.
Then, update the in-degree of other vertices.
In case-02,
Remove vertex-D and its associated edges.
Then, update the in-degree of other vertices.
Now, the above two cases are continued separately in a similar manner.
In case-01,
Remove vertex-D since it has the least in-degree.
Then, remove the remaining vertex-E.
In case-02,
Remove vertex-C since it has the least in-degree.
Then, remove the remaining vertex-E.
Merge Sort
In merge sort, the given elements are divided into two halves, A[0..(n/2)-1] and
A[n/2..n-1]. These two halves are individually sorted in ascending order and finally merged
to produce a single sorted sequence of n elements.
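A short Python sketch following this description (this version returns a new list rather than sorting in place):

def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # sort A[0..(n/2)-1]
    right = merge_sort(a[mid:])   # sort A[n/2..n-1]
    # Merge the two sorted halves into one sorted sequence.
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]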
Quick Sort
Quick sort is a popular sorting algorithm that uses the divide and conquer technique to
sort an array of elements. As the name implies, quick sort is among the fastest sorting
algorithms known in practice.
How Quick Sort Works?
The high-level description of the Quick Sort algorithm:
1. If the array has one or zero elements, then return the array as it is already sorted.
2. Select a pivot element from the array. The choice of the pivot can vary - it can be the
first element, the last element, the middle element, or even a random element.
3. Partition the elements into two groups: elements less than the pivot and elements
greater than the pivot.
4. Recursively apply the above steps to the sub-array of elements with smaller values and
separately to the sub-array of elements with greater values.
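A compact Python sketch following these four steps, assuming a random pivot (this list-building version is simple but not in-place):

import random

def quick_sort(a):
    # Step 1: arrays of size 0 or 1 are already sorted.
    if len(a) <= 1:
        return a
    pivot = random.choice(a)  # Step 2: pivot choice can vary
    # Step 3: partition around the pivot.
    smaller = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    # Step 4: recurse on both sub-arrays.
    return quick_sort(smaller) + equal + quick_sort(greater)

print(quick_sort([10, 7, 8, 9, 1, 5]))  # [1, 5, 7, 8, 9, 10]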
Binary Tree Traversal and Related Properties
What is Binary Tree?
A binary tree is a special form of a tree. Compared to a general tree, the binary tree is more
important and frequently used in various applications of computer science. Like a general
tree, a binary tree can also be defined as a finite set of nodes that is either empty or consists
of a root node together with two disjoint binary trees, called the left and right subtrees of
the root.
Binary Tree Traversals
Binary Tree Traversal refers to the process of visiting each node in the binary tree exactly
once.
There are three common types of binary tree traversals:
1. Inorder Traversal: In an inorder traversal, the traversal process follows the sequence
Left-Node-Right, starting from the root. This means that the left subtree is first traversed
(recursively applying the same sequence), then the current node is "visited", and finally,
the right subtree is traversed.
2. Preorder Traversal: In a preorder traversal, the traversal process follows the
sequence Node-Left-Right, starting from the root. This means that the current node is
visited first, then the left subtree is traversed, and finally, the right subtree is traversed.
3. Postorder Traversal: In a postorder traversal, the traversal process follows the
sequence Left-Right-Node, starting from the root. This means that the left subtree is first
traversed, then the right subtree is traversed, and finally, the current node is visited.
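A small Python sketch of the three traversals (the Node class and the sample tree are ours):

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def inorder(node):
    # Left-Node-Right
    if node:
        inorder(node.left)
        print(node.value, end=' ')
        inorder(node.right)

def preorder(node):
    # Node-Left-Right
    if node:
        print(node.value, end=' ')
        preorder(node.left)
        preorder(node.right)

def postorder(node):
    # Left-Right-Node
    if node:
        postorder(node.left)
        postorder(node.right)
        print(node.value, end=' ')

root = Node(1, Node(2, Node(4), Node(5)), Node(3))
inorder(root)  # prints: 4 2 5 1 3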
Strassen's Matrix Multiplication
The conventional method of matrix multiplication involves three nested loops and runs in
O(n³) time, where n is the dimension of the square matrices. Let A and B be two matrices
of size n x n. The product matrix C can be obtained using the formula:
C[i][j] = sum over k = 1 to n of A[i][k] * B[k][j]
In this conventional procedure, there are three for loops. Each for loop executes n times,
and therefore the running time of the conventional method is O(n³).
Strassen's Matrix Multiplication
To improve the efficiency of the matrix multiplication problem, Strassen showed that for
multiplying two 2 x 2 (block) matrices, it is sufficient to have 7 multiplications and 18
additions or subtractions.
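For reference, one common textbook formulation of Strassen's seven products, with A, B, and C = AB each partitioned into quadrants (A11..A22, B11..B22, C11..C22), is the following (several equivalent labellings exist):
P1 = A11 (B12 - B22)
P2 = (A11 + A12) B22
P3 = (A21 + A22) B11
P4 = A22 (B21 - B11)
P5 = (A11 + A22) (B11 + B22)
P6 = (A12 - A22) (B21 + B22)
P7 = (A11 - A21) (B11 + B12)
C11 = P5 + P4 - P2 + P6
C12 = P1 + P2
C21 = P3 + P4
C22 = P5 + P1 - P3 - P7
Counting the operations above gives exactly 7 multiplications and 18 additions or subtractions, as stated.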
Advantages and Disadvantages of Divide and Conquer Technique
Advantages of Divide and Conquer Technique
Efficiency: The divide and conquer technique can be very efficient for solving problems
that can be broken down into smaller subproblems. This is because the technique can
solve each subproblem independently, which can speed up the overall solution time.
Flexibility: The divide and conquer technique can be used to solve a wide variety of
problems. This is because the technique can be applied to any problem that can be broken
down into smaller subproblems.
Simplicity: The divide and conquer technique is relatively simple to understand and
implement. This makes it a good choice for problems that need to be solved quickly or by
people with limited programming experience.
Parallel Computing: Divide and Conquer involves breaking down a problem into
independent sub-problems, and hence it's possible to execute the sub-problems in
parallel by taking advantage of multi-core processors and distributed computing systems.
Disadvantages of Divide and Conquer Technique
Complexity: The divide and conquer technique can be more complex than other
techniques for solving problems. This is because the technique requires the problem to be
broken down into smaller sub-problems, which can be time-consuming and error-prone.
Memory Usage: The divide and conquer technique can require more memory than other
techniques for solving problems. This is because the technique requires the sub-problems
to be stored in memory, which can be an issue for problems with large data sets.
Inefficient for Small Problems: The divide and conquer technique can be inefficient for
small problems. This is because the time it takes to divide the problem into sub-problems
and then recombine the solutions can be more than the time it would take to solve the
problem directly.
Overhead of Recursion: Divide and Conquer algorithms are usually implemented with
recursion, which leads to a certain overhead due to the recursive function calls and
merging of sub-solutions.
Chapter 6 -Space and Time Trade-offs
In the field of computer science, one common trade-off is between time and space
(memory), referred to as the space-time trade-off. This is the decision programmers face
when they have to choose between reducing runtime by using more memory or
conserving more memory by allowing a program to run more slowly. In many cases,
algorithms can be optimized for either speed or space, but not both.
Example: Consider an example where a program processes a large volume of data. The program's
speed could be increased by storing the processed data in memory (thus consuming more
space), allowing for quicker access later (saving time). This is a time-space trade-off since
more memory (space) is used to reduce the time it takes for the program to run.
There are many different algorithms for sorting data, each with its own trade-offs
between time and space complexity. For example, quicksort is a fast algorithm that
requires relatively little extra memory, while mergesort requires additional memory
proportional to the size of the input.
Hash tables are a common data structure used for fast lookups. However, the size of the
hash table can have a big impact on performance. A larger hash table reduces collisions
and speeds up lookups but consumes more memory; a smaller table saves space at the cost
of slower lookups.
"Precomputing" or "Input Enhancement" is a strategy in algorithm design where certain
Calculations are performed in advance, before the main execution of the algorithm. The
results of these calculations are stored for later use. The main advantage of this approach
is to save computation time during the main execution by reusing precomputed results
instead of recalculating the same values multiple times. However, it requires additional
storage space to hold the precomputed values. This approach is a classic example of a
time-space trade-off.
Sorting by counting
Sorting by Counting is an algorithm that uses precomputing or input enhancement to sort
a list of numbers. The basic idea is to count for each element of the list, the total number
of elements smaller than that element and record the results in a table. These numbers
will indicate the positions of the elements in the sorted list. For example, if the count is 10
for some element, it should be placed at index 10 of the (zero-indexed) sorted array. Thus, we will be able to
sort the list by simply copying its elements to their appropriate positions in a new sorted
list.
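A Python sketch of this comparison-counting idea (the sample list is ours):

def comparison_counting_sort(a):
    n = len(a)
    count = [0] * n
    # For each pair, increment the count of the larger element.
    for i in range(n):
        for j in range(i + 1, n):
            if a[i] < a[j]:
                count[j] += 1
            else:
                count[i] += 1
    # count[i] is the final position of a[i] in the sorted list.
    s = [0] * n
    for i in range(n):
        s[count[i]] = a[i]
    return s

print(comparison_counting_sort([62, 31, 84, 96, 19, 47]))
# [19, 31, 47, 62, 84, 96]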
Input Enhancement in String Matching
String matching is finding an occurrence of a given string of m characters, called the
pattern, in a string of n characters, called the text.
Search Step: Start comparing the pattern with the text from the right end of the pattern. If
all the characters match, we have found an occurrence of the pattern. If a mismatch
occurs, look at the mismatched character in the text, shift the pattern according to the
shift table, and continue with the comparison.
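The search step described here is that of Horspool's algorithm; a compact Python sketch including the shift-table construction (identifiers and sample strings are ours):

def horspool(pattern, text):
    # Shift table: for each character, how far to shift the pattern
    # when a mismatch occurs against that text character.
    m, n = len(pattern), len(text)
    shift = {c: m for c in set(text)}
    for i in range(m - 1):
        shift[pattern[i]] = m - 1 - i
    i = m - 1  # text index aligned with the pattern's last character
    while i < n:
        k = 0
        # Compare right to left.
        while k < m and pattern[m - 1 - k] == text[i - k]:
            k += 1
        if k == m:
            return i - m + 1  # match found at this index
        i += shift.get(text[i], m)
    return -1

print(horspool("BARBER", "JIM_SAW_ME_IN_A_BARBERSHOP"))  # 16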
Hash Table
Hashing in data structure uses hash tables to store the key-value pairs. The hash table then uses
the hash function to generate an index. Hashing uses this unique index to perform insert, update,
and search operations.
Hash Collision
A collision is said to occur when two keys generate the same value, that is, when more than
one key maps to the same slot in the hash table. Thus, it becomes very important to choose
a good hash function, one that does not generate the same index for many different keys.
2. Quadratic Probing
Linear probing has the disadvantage of clustering. In order to minimize the clustering
problem, quadratic probing can be used. Suppose the hash function generates the address
'a'. In the case of a collision, linear probing searches the locations a, a + 1, a + 2, ..., so
insertion and search take place at location (a + i) (for i = 0, 1, 2, ...). In quadratic probing,
insertion and searching take place at location (a + i²) (for i = 0, 1, 2, ...), i.e., at the locations
a, a + 1, a + 4, a + 9, and so on. This decreases the clustering problem, but the drawback of
this technique is that it cannot probe all the locations. If the table size is a prime number,
it will probe at least half of the locations of the hash table.
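A minimal Python sketch of quadratic-probing insertion (the table size, sample keys, and identifiers are ours):

def qp_insert(table, key):
    # table: fixed-size list; None marks an empty slot.
    size = len(table)
    a = key % size  # home address from the hash function
    for i in range(size):
        slot = (a + i * i) % size  # probe a, a+1, a+4, a+9, ...
        if table[slot] is None:
            table[slot] = key
            return slot
    raise RuntimeError("no free slot found")

table = [None] * 11  # a prime table size probes at least half the slots
for k in (22, 33, 44):  # all hash to address 0, forcing collisions
    print(k, "->", qp_insert(table, k))
# 22 -> 0, 33 -> 1, 44 -> 4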
Chapter 7 : Dynamic Programming
Dynamic programming is a computer programming technique where an algorithmic
problem is first broken down into sub-problems, the results of the sub-problems are saved,
and these saved results are then combined to obtain the overall solution, which usually
involves finding a maximum or minimum value.
Binomial Coefficient
Compute 6C3 using dynamic programming.
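A Python sketch that fills the table bottom-up using Pascal's rule C(i, j) = C(i-1, j-1) + C(i-1, j), with C(i, 0) = C(i, i) = 1; for 6C3 it returns 20:

def binomial(n, k):
    # C[i][j] holds the binomial coefficient C(i, j).
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1  # boundary values
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

print(binomial(6, 3))  # 20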
Warshal’s Algorithm (Transitive Closure)
Definition: Let G = (V, E) be a simple graph where V is the set of vertices and E is the set of
edges. Let N be the number of vertices in graph G. The matrix P (of size N x N) whose
elements are given by
P[i][j] = 1 if there exists a path from the ith vertex to the jth vertex, and P[i][j] = 0 otherwise,
is called the path matrix (transitive closure). It is clear from the above definition that the
element in the ith row and jth column is 1 provided there exists a path from the ith vertex
to the jth vertex, and 0 if there is no path. The matrix P is a square matrix containing only
0's and 1's, and so it is also called a bit matrix or Boolean matrix.
Algorithm: Warshall(n, A, P) to compute the transitive closure
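The algorithm body is not reproduced in these notes; the following Python sketch implements Warshall's algorithm in its usual triple-loop form (the 0/1 adjacency matrix A below is a sample of ours):

def warshall(adj):
    # adj: n x n adjacency matrix of 0/1 values; returns the
    # path (transitive closure) matrix P.
    n = len(adj)
    P = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # A path i -> j exists directly, or via vertex k.
                P[i][j] = P[i][j] or (P[i][k] and P[k][j])
    return P

A = [[0, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [1, 0, 1, 0]]
for row in warshall(A):
    print(row)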
Floyd’s Algorithm (for all pairs shortest paths problem)
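Likewise, a Python sketch of Floyd's algorithm, using a weight matrix with float('inf') marking absent edges (the sample matrix is ours):

INF = float('inf')

def floyd(W):
    # W: n x n weight matrix; 0 on the diagonal, INF for no edge.
    n = len(W)
    D = [row[:] for row in W]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Improve i -> j by routing through vertex k.
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

W = [[0,   3,   INF, 7],
     [8,   0,   2,   INF],
     [5,   INF, 0,   1],
     [2,   INF, INF, 0]]
for row in floyd(W):
    print(row)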
0/1 Knapsack Problem
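A Python sketch of the standard bottom-up dynamic-programming solution to the 0/1 knapsack problem (the sample values, weights, and capacity are ours):

def knapsack(values, weights, capacity):
    # V[i][w] = best value using the first i items with capacity w.
    n = len(values)
    V = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            V[i][w] = V[i - 1][w]  # skip item i
            if weights[i - 1] <= w:  # or take item i, if it fits
                V[i][w] = max(V[i][w],
                              V[i - 1][w - weights[i - 1]] + values[i - 1])
    return V[n][capacity]

print(knapsack([12, 10, 20, 15], [2, 1, 3, 2], 5))  # 37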
Chapter 8: Greedy Technique
What is the Greedy Technique?
A greedy technique also known as the greedy algorithm is a problem-solving approach
that involves making locally optimal choices at each step with the hope of finding a
globally optimal solution. In this technique, the algorithm makes decisions based on the
current best choice without considering the potential consequences or future steps.
Characteristics of Greedy Technique
The characteristics of the greedy technique are as follows:
1. Feasible: At each step, the choice made by the greedy algorithm must satisfy the
problem's constraints. This means that the chosen option should be a valid solution that
adheres to any given limitations or requirements.
2. Locally Optimal: The greedy technique selects the best local choice available at each
step. It means that among the feasible options, the algorithm chooses the one that
appears to be the most advantageous or beneficial in the immediate context. This decision
is made without considering the potential consequences or future steps.
3. Irrevocable: Once a choice is made by the greedy algorithm, it is considered final and
cannot be changed in subsequent steps. The algorithm does not revisit or revise previous
decisions based on new information or changes in the problem's state.
4. Global Choice Property: The global choice property is a key characteristic of the
greedy technique. It states that a globally optimal solution can be achieved by
consistently making locally optimal choices.
Control Abstraction for the Greedy Method (General Procedure)
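The general procedure itself is not reproduced in these notes; as a hedged illustration, a generic greedy skeleton in Python might look like the following (all function and parameter names are ours):

def greedy(candidates, feasible, select, is_solution):
    # Repeatedly pick the locally best feasible candidate and
    # add it to the solution; choices are never undone.
    solution = []
    while candidates and not is_solution(solution):
        x = select(candidates)      # locally optimal choice
        candidates.remove(x)
        if feasible(solution, x):   # keep it only if constraints hold
            solution.append(x)
    return solution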
Applications of the Greedy Technique:
The Greedy technique is used in a wide variety of applications due to its efficiency and
simplicity.
Some common applications are:
1. Graph Algorithms: Many algorithms in graph theory use a greedy approach, such
as Prim's and Kruskal's for finding a minimum spanning tree, and Dijkstra's for finding
the shortest path from a single source.
2. Resource Scheduling Problems: In problems like job scheduling, where the goal is
to complete tasks using the least resources or in the least amount of time, greedy
algorithms can often provide good solutions.
3. Data Compression: Greedy algorithms are also used in data compression codes,
like Huffman coding.
4. Networking: The Internet's IP routing protocols use greedy routing strategies to
route packets from one network to another.
5. Cashier's Algorithm: This is used to dispense a certain amount of change with the
least number of coins and notes possible.
6. Knapsack Problem: The fractional knapsack problem, where items can be broken
into smaller pieces, can be solved using a greedy approach.
Note: While greedy algorithms can solve a lot of problems, they do not always provide
the most optimal solution, particularly for complex problems where the best choice at
any given step may not lead to the overall best result.
Huffman Algorithm to Generate Huffman Tree and Codes
Step 1: Create a Priority Queue Q consisting of each unique character:
This step involves creating a priority queue, which is a data structure that stores
elements based on their priority. In this case, the priority is determined by the
frequencies of the characters.
Step 2: Sort frequencies in ascending order and store in priority queue.
The frequencies of the characters are sorted in ascending order and stored in the
priority queue Q. This ensures that the characters with the lowest frequencies will
have higher priority in the queue.
Step 3: Loop through all the unique characters in the queue:
(a) Create a newNode. This new node will eventually become a parent node.
(b) Extract minimum value from Q and assign it to leftChild of newNode.
We take the node with the smallest frequency from the front of the queue and
make it the left child of the new node.
(c) Extract minimum value from Q and assign it to rightChild of newNode We take
the next smallest frequency node from the queue and make it the right child of
the new node.
(d) Calculate the sum of these two minimum values and assign it to the value of
newNode. The frequency value of the new node is set to be the sum of its children's
frequencies.
(e) Insert this newNode into the queue
Repeat these steps until only one node remains in the queue; this is the root of the
Huffman tree.
Step 4: Create Huffman Codes:
Starting from the root, create the codes by traversing the tree. Moving to the left
child adds a '0' to the code, and moving to the right child adds a '1'. When we reach
a leaf node (a symbol), assign the code accumulated during the traversal to this
symbol. In the end, the most frequent symbols will be represented by the shortest
codes, while less frequent symbols will have longer codes.
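Putting the steps together, a Python sketch using a heap-based priority queue (the sample frequency table is ours; the exact codes produced can vary with tie-breaking):

import heapq

def huffman_codes(freq):
    # freq: dict mapping each symbol to its frequency.
    # Heap entries: (frequency, tie-breaker, tree),
    # where a tree is either a symbol or a (left, right) pair.
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two smallest frequencies
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    codes = {}
    def walk(tree, code):
        if isinstance(tree, tuple):
            walk(tree[0], code + '0')  # left edge adds '0'
            walk(tree[1], code + '1')  # right edge adds '1'
        else:
            codes[tree] = code or '0'
    walk(heap[0][2], '')
    return codes

print(huffman_codes({'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5}))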
Advantages and Disadvantages of Greedy Technique
Advantages of Greedy Technique
1. Efficiency: Greedy algorithms are usually very efficient. They make a locally optimal
choice at each step and generally require only polynomial time for solving problems.
2. Simplicity: Greedy algorithms are often simpler to understand and easier to
implement than other techniques such as dynamic programming. They follow a
straightforward approach by making the best choice at each step.
3. Real-world Applications: Greedy algorithms can provide satisfactory solutions for
many real-world problems such as graph theory (Dijkstra's Algorithm), network
routing, and data compression (Huffman Coding).
4. Useful for Optimization Problems: Greedy algorithms work well for optimization
problems (finding minimum/maximum), where the best result at the current step
leads to an overall optimal solution.
Backtracking
A state space tree is a tree representing all the possible states (solution or non-solution)
of the problem, from the root as the initial state to the leaves as terminal states.
There are many problems that can be solved by a backtracking algorithm, and it can
be used over a complex set of variables or constraints. Constraints are basically of
two types:
1. Implicit constraints: rules that describe how the elements of a solution tuple
must relate to each other.
Ex: In the n-queens problem:
i) No two queens should be on the same column.
ii) No two queens should be on the same diagonal.
2. Explicit constraints: rules that restrict each element to be chosen from a
particular given set.
Ex: In the 4-queens problem, the elements of the tuple must be chosen from the set
S = {1, 2, 3, 4}.
Backtrack(n)
    if n is not a solution
        return false
    if n is a new solution
        add n to the list of solutions
    Backtrack(expand n)
4 Queens Problem
Approach:
Here we have to place 4 queens say Q1, Q2, Q3, Q4 on the 4 x 4
chessboard such that no 2 queens attack each other.
Let's suppose we're putting our first queen Q1 at position (1, 1). Now for Q2 we
can't put it in row 1 (because they will conflict).
So for Q2 we will have to consider row 2. In row 2 we can place it in column 3,
i.e., at (2, 3), but then there will be no option left for placing Q3 in row 3.
So we backtrack one step and place Q2 at (2, 4). Then we find the position for
placing Q3 is (3, 2), but by this, no option will be left for placing Q4.
Then we have to backtrack till Q1 and put it at (1, 2) instead of (1, 1), and then
all other queens can be placed safely by moving Q2 to the position (2, 4),
Q3 to (3, 1), and Q4 to (4, 3).
Hence we got our solution as (2, 4, 1, 3); this is one possible solution for the
4-Queens Problem. For other solutions, we would have to backtrack through all
possible partial solutions.
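A Python sketch of this backtracking search (identifiers ours); for n = 4 it finds both solutions, including (2, 4, 1, 3):

def solve_queens(n=4):
    solutions = []
    cols = []  # cols[i] = column of the queen placed in row i

    def safe(row, col):
        for r, c in enumerate(cols):
            # same column, or same diagonal
            if c == col or abs(c - col) == abs(r - row):
                return False
        return True

    def place(row):
        if row == n:
            solutions.append(tuple(c + 1 for c in cols))  # 1-based
            return
        for col in range(n):
            if safe(row, col):
                cols.append(col)
                place(row + 1)
                cols.pop()  # backtrack

    place(0)
    return solutions

print(solve_queens())  # [(2, 4, 1, 3), (3, 1, 4, 2)]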
Consider a graph G = (V, E) shown in the figure. We have to find a Hamiltonian circuit using
the backtracking method.
Solution: Firstly, we start our search with vertex 'a'. This vertex 'a' becomes the root of
our implicit tree.
Next, we select vertex 'f' adjacent to 'e'. The vertices adjacent to 'f' are 'd' and 'e', but
they have already been visited. Thus, we reach a dead end, and we backtrack one step
and remove the vertex 'f' from the partial solution.
From backtracking, the vertices adjacent to 'e' are b, c, d, and f, of which vertex 'f' has
already been checked, and b, c, and d have already been visited. So, again we backtrack one step.
Now, the vertices adjacent to 'd' are 'e' and 'f', of which 'e' has already been checked, and
the vertices adjacent to 'f' are 'd' and 'e'. If vertex 'e' is revisited, we get a dead state.
So again we backtrack one step.
Now, adjacent to 'c' is 'e', adjacent to 'e' is 'f', adjacent to 'f' is 'd', and adjacent to 'd'
is 'a'. Here, we get the Hamiltonian cycle, as every vertex other than the start vertex 'a'
is visited only once: (a - b - c - e - f - d - a).
Here we have generated one Hamiltonian circuit, but other Hamiltonian circuits can
also be obtained by considering other vertices.
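A Python sketch of the backtracking search for a Hamiltonian circuit; the adjacency sets below reconstruct the graph described in the walkthrough as far as the text allows, so treat them as an assumption:

def hamiltonian_circuit(graph, start):
    # graph: dict mapping each vertex to the set of adjacent vertices.
    path = [start]

    def extend():
        if len(path) == len(graph):
            return start in graph[path[-1]]  # can we close the cycle?
        for v in graph[path[-1]]:
            if v not in path:
                path.append(v)
                if extend():
                    return True
                path.pop()  # dead end: backtrack
        return False

    return path + [start] if extend() else None

g = {'a': {'b', 'd'}, 'b': {'a', 'c', 'e'}, 'c': {'b', 'e'},
     'd': {'a', 'e', 'f'}, 'e': {'b', 'c', 'd', 'f'}, 'f': {'d', 'e'}}
print(hamiltonian_circuit(g, 'a'))  # e.g. ['a', 'b', 'c', 'e', 'f', 'd', 'a']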