Ada Unit 4
A backtracking algorithm is a problem-solving algorithm that uses a brute-force approach to find
the desired output.
The brute-force approach tries out all the possible solutions and chooses the desired/best ones.
The term backtracking suggests that if the current solution is not suitable, we backtrack and try
other solutions. Thus, recursion is used in this approach.
This approach is used to solve problems that have multiple solutions. If you want an optimal solution,
you should go for dynamic programming.
A state space tree is a tree representing all the possible states (solution or non-solution) of the
problem, from the root as an initial state to the leaves as terminal states.
Backtracking Algorithm
Backtrack(x)
    if x is not a solution
        return false
    if x is a new solution
        add x to the list of solutions
    Backtrack(expand x)
Problem: You want to find all the possible ways of arranging 2 boys and 1 girl on 3 benches.
Constraint: Girl should not be on the middle bench.
Solution: There are a total of 3! = 6 possibilities. We recursively try all of them and keep only the
arrangements that satisfy the constraint.
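The recursive search over seatings can be sketched in Python (a minimal sketch; the labels B1, B2 and G are illustrative):

```python
def arrange(people, seating, solutions):
    """Place each remaining person on the next bench; backtrack
    whenever the constraint (girl not on the middle bench) is hit."""
    if not people:                          # all 3 benches filled
        solutions.append(tuple(seating))
        return
    for i, person in enumerate(people):
        seating.append(person)              # tentative choice
        # prune: the girl must not sit on the middle bench (index 1)
        if not (person == "G" and len(seating) == 2):
            arrange(people[:i] + people[i + 1:], seating, solutions)
        seating.pop()                       # backtrack

solutions = []
arrange(["B1", "B2", "G"], [], solutions)   # 6 seatings tried, 4 valid
```

Of the 3! = 6 arrangements, the two with the girl on the middle bench are pruned, leaving 4 solutions.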
The backtracking algorithm is a problem-solving approach that tries out all the possible solutions
and chooses the best or desired ones. Generally, it is used to solve problems that have multiple
solutions. The term backtracking suggests that for a given problem, if the current solution is not
suitable, eliminate it and then backtrack to try other solutions.
Backtracking is typically used when −
The problem has multiple solutions or requires finding all possible solutions.
The given problem can be broken down into smaller subproblems that are similar to the original
problem.
The problem has some constraints or rules that must be satisfied by the solution.
The backtracking algorithm explores various paths to find a sequence path that takes us to the
solution. Along these paths, it establishes some small checkpoints from where the problem can
backtrack if no feasible solution is found. This process continues until the best solution is found.
In the above figure, green is the start point, blue marks the intermediate points, red marks the
points with no feasible solution, and grey is the end solution.
When the backtracking algorithm reaches the end of a path, it checks whether that path is a
solution. If it is, the path is returned; otherwise, the algorithm backtracks one step in order to find
a solution.
Algorithm
1. Start
2. Check whether the current point is a feasible solution.
3. else
4. Backtrack one step to the previous point.
5. else-if current_position is not end point, explore and repeat above steps.
6. Stop
Complexity of Backtracking
Generally, the time complexity of the backtracking algorithm is exponential (O(k^n)). In some cases, it is
observed that its time complexity is factorial (O(N!)).
The backtracking algorithm is applied to some specific types of problems. They are as follows −
Optimization problem − It is used to find the best solution that can be applied.
Enumeration problem − It is used to find the set of all feasible solutions of the problem.
Backtracking incrementally builds candidates for the solution and abandons ("backtracks") a
candidate as soon as it determines that the candidate cannot lead to a valid solution.
🔷 Definition:
Backtracking is a depth-first search method for solving problems recursively by trying partial
solutions and then abandoning them if they are not feasible (i.e., if constraints are violated).
🔷 Working Principle:
1. Choose: Pick a candidate for the next step of the solution.
2. Check: Verify that the choice satisfies the problem's constraints.
3. Extend: If valid, add the choice to the partial solution and continue.
4. Backtrack: If the solution is invalid or complete, undo the last choice and try another.
🔷 General Structure:
Build the solution one component at a time.
If a partial solution is invalid, prune the search tree (i.e., skip further exploration).
Continue until all valid solutions are found or the best one is obtained.
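This general structure can be written as a generic skeleton (all parameter names here are illustrative, not part of the original notes):

```python
def backtrack(partial, is_complete, candidates, is_valid, solutions):
    """Generic backtracking skeleton.
    partial      -- the partial solution built so far (a list)
    is_complete  -- predicate: is partial a full solution?
    candidates   -- function yielding possible next choices
    is_valid     -- predicate: does a choice keep partial feasible?"""
    if is_complete(partial):
        solutions.append(list(partial))     # record a full solution
        return
    for choice in candidates(partial):
        if is_valid(partial, choice):       # prune infeasible branches
            partial.append(choice)          # make the choice
            backtrack(partial, is_complete, candidates, is_valid, solutions)
            partial.pop()                   # undo it (backtrack)

# Demo: all 2-bit sequences with no two consecutive 1s
sols = []
backtrack([], lambda p: len(p) == 2, lambda p: [0, 1],
          lambda p, c: not (p and p[-1] == 1 and c == 1), sols)
```

The demo enumerates the candidate space level by level and prunes the branch 1→1, leaving three valid sequences.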
🔷 Key Concepts:
Solution space tree: A tree that represents all possible solutions. Each level denotes a
decision.
Backtracking point: The moment where we undo a decision and try a new path.
🔷 Applications:
N-Queens Problem − Place N queens on a chessboard such that no two attack each other.
Hamiltonian Path/Cycle − Find a path/cycle that visits each vertex exactly once.
Word Search / Crossword Filling − Find if a word can be placed in a grid following rules.
🔷 Advantages:
Simple to implement.
Systematically explores every feasible candidate, so no valid solution is missed.
🔷 Limitations:
Not suitable for real-time or very large data problems unless optimized.
Worst-case time complexity: Often exponential, depends on the number of choices per level
and depth of recursion.
O(k^n), where k = choices per step and n = number of steps.
The 8 Queen Problem is a puzzle in which 8 queens must be placed on an 8x8 chessboard so that no
two queens threaten each other. It is a classic problem in computer science and mathematics. There
are 92 solutions to this problem. The eight queens puzzle problem was first posed in the mid-19th
century.
Backtracking is a recursive approach for solving any problem where we must search among all the
possible solutions following some constraints. More precisely, we can say that it is an improvement
over the brute-force technique. Here, we will look at one popular DSA problem: the 8 queens
problem using backtracking.
Problem Statement
Given an 8x8 chess board, you must place 8 queens on the board so that no two queens attack each
other. Print all possible matrices satisfying the conditions with positions with queens marked with '1'
and empty spaces with '0'. You must solve the 8 queens problem using backtracking.
Note 1: A queen can move vertically, horizontally and diagonally in any number of steps.
Note 2: You can also go through the N-Queen Problem for the general approach to solving
this problem.
Sample Example
Example: One possible solution to the 8 queens problem using backtracking is shown below. In the
first row, the queen is at E8 square, so we have to make sure no queen is in column E and row 8 and
also along its diagonals. Similarly, for the second row, the queen is on the B7 square, thus, we have to
secure its horizontal, vertical, and diagonal squares. The same pattern is followed for the rest of the
queens.
Output:
00001000
01000000
00010000
00000010
00100000
00000001
00000100
10000000
Bruteforce Approach
Generate all possible permutations of the numbers 1 to 8, representing the columns on the
chessboard.
For each permutation, check whether it represents a valid solution: rows and columns are distinct
by construction, so only diagonal attacks need to be checked.
While this approach works for small numbers, it quickly becomes inefficient for larger sizes as the
number of permutations to check grows exponentially. More efficient algorithms, such as
backtracking or genetic algorithms, can be used to solve the problem in a more optimized way.
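The brute-force check can be sketched as follows (using columns 0..7 rather than 1..8; since a permutation fixes one queen per row and per column, only diagonals need testing):

```python
from itertools import permutations

def queens_bruteforce(n=8):
    """Try every permutation perm of columns 0..n-1, placing the queen
    of row i in column perm[i]; rows and columns are then distinct by
    construction, so only diagonal attacks need to be checked."""
    solutions = []
    for perm in permutations(range(n)):
        # queens (i, perm[i]) and (j, perm[j]) share a diagonal
        # exactly when |perm[i] - perm[j]| == j - i (for j > i)
        if all(abs(perm[i] - perm[j]) != j - i
               for i in range(n) for j in range(i + 1, n)):
            solutions.append(perm)
    return solutions
```

For n = 8 this examines all 8! = 40,320 permutations and finds the 92 solutions; the cost grows factorially with n, which is exactly the inefficiency noted above.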
Backtracking Approach
This approach rejects all further moves if the solution is declined at any step, goes back to the
previous step and explores other options.
Algorithm
Let's go through the steps below to understand how this algorithm of solving the 8 queens problem
using backtracking works:
Step 1: Traverse all the rows in one column at a time and try to place the queen in that
position.
Step 2: After coming to a new square in the left column, traverse to its left horizontal
direction to see if any queen is already placed in that row or not. If a queen is found, then
move to other rows to search for a possible position for the queen.
Step 3: Like step 2, check the upper and lower left diagonals. We do not check the right side
because it's impossible to find a queen on that side of the board yet.
Step 4: If the process succeeds, i.e. a queen is not found, mark the position as '1' and move
ahead.
Step 5: Recursively use the above-listed steps to reach the last column. Print the solution
matrix if a queen is successfully placed in the last column.
Step 6: Backtrack to find other solutions after printing one possible solution.
We place one queen column by column, starting from the leftmost column (column 0).
We go row by row in that column and try to place a queen in a safe position.
Before placing the queen in a row of the current column, we look to the left in the same row (i.e., in
previous columns) to see if any queen is already there.
If there is already a queen in that row, we can’t place another one there — they’ll attack each other.
We also check the upper-left and lower-left diagonals, since the columns to the right are still empty;
any queen that could attack the new one must already be on the left.
If there's a queen on any of these diagonals, we skip this row and try the next one.
So we place the queen there and mark it in the board matrix (like with a 1 or Q).
Now, we go to the next column on the right and repeat the same steps (1 to 4) to place the next
queen.
We keep doing this recursively (calling the same steps again and again) until all 8 queens are placed.
Step 6: Backtrack if Stuck
If no safe row exists in the current column, we remove the queen from the previous column and try
the next row there.
This helps us explore all possible combinations and not miss any solution.
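The column-by-column procedure above can be sketched as a minimal implementation (here `cols[c]` holds the row of the queen in column c, and `board` renders a solution in the '1'/'0' format required by the problem statement):

```python
def solve_n_queens(n=8):
    """Place queens column by column, backtracking when stuck."""
    solutions = []
    cols = []                              # cols[c] = row of queen in column c

    def safe(row):
        c = len(cols)                      # column we are trying to fill
        for pc, pr in enumerate(cols):
            # same row, or on a left diagonal of the new square
            if pr == row or abs(pr - row) == c - pc:
                return False
        return True

    def place():
        if len(cols) == n:                 # queen in every column: a solution
            solutions.append(list(cols))
            return
        for row in range(n):
            if safe(row):
                cols.append(row)           # place the queen
                place()
                cols.pop()                 # backtrack, try other rows

    place()
    return solutions

def board(solution):
    """Render one solution as rows of '1'/'0' characters."""
    n = len(solution)
    return ["".join("1" if solution[c] == r else "0" for c in range(n))
            for r in range(n)]
```

Calling `solve_n_queens(8)` yields all 92 solutions, each convertible to a printable matrix with `board`.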
🟣 Problem Statement:
Place 8 queens on an 8×8 chessboard such that no two queens threaten each other. This means no
two queens may share the same row, column, or diagonal.
🔁 Approach: Backtracking
The 8-Queens problem is a classic example of backtracking, where we build the solution step by step
and backtrack when a conflict arises.
🟢 Steps Involved:
1. Start with the first row.
2. Try placing a queen in each column of the current row.
3. Check whether it's safe (i.e., doesn't conflict with queens in previous rows).
4. If it is safe: place the queen and move on to the next row; otherwise, try the next column,
backtracking if none works.
🔶 Safety Check:
A square is safe if no previously placed queen lies in the same column or on either diagonal (rows
are distinct automatically, since we place one queen per row).
🧠 Solution Space:
There are 8^8 total ways to place queens (8 choices per row).
But we restrict it to only one queen per row, and prune invalid placements using safety
checks.
📌 Key Properties:
Solution count: There are 92 distinct solutions for the 8-Queens problem (or 12 fundamental
solutions if symmetric solutions are counted only once).
🔍 Applications of 8-Queens:
A standard benchmark for backtracking and constraint-satisfaction techniques, and a common
teaching example for recursion.
A graph is an abstract data type (ADT) consisting of a set of objects that are connected via links.
The practical applications of the Hamiltonian cycle problem can be seen in fields like network design,
delivery systems and many more. However, a solution to this problem can be found in reasonable
time only for small graphs, not for larger ones.
Suppose the given undirected graph G(V, E) and its adjacency matrix are as follows −
The backtracking algorithm can be used to find a Hamiltonian path in the above graph. If found, the
algorithm returns the path. If not, it returns false. For this case, the output should be (0, 1, 2, 4, 3, 0).
The naive way to solve the Hamiltonian cycle problem is by generating all possible configurations of
vertices and checking if any configuration satisfies the given constraints. However, this approach is
not suitable for large graphs, as its time complexity is O(N!).
First, create an empty path array and add a starting vertex 0 to it.
Next, start with vertex 1 and then add other vertices one by one.
While adding vertices, check whether a given vertex is adjacent to the previously added
vertex and hasn't been added already.
If any such vertex is found, add it to the path as part of the solution, otherwise, return false.
A Hamiltonian cycle is a path in a graph that visits every vertex exactly once and returns to the
starting vertex, forming a closed loop. For a Hamiltonian cycle to exist, the graph must be connected,
meaning there is a path between every pair of vertices.
For example, consider a graph with 4 vertices labeled A, B, C, & D. One possible Hamiltonian cycle in
this graph could be: A -> B -> C -> D -> A. This path visits each vertex once and loops back to the start.
Not all graphs have Hamiltonian cycles. Some graphs may have a Hamiltonian path but no cycle. And
some graphs have neither paths nor cycles that visit each vertex exactly once.
A Hamiltonian path is similar to a Hamiltonian cycle, but instead of forming a closed loop, it is a path
that visits each vertex in the graph exactly once without returning to the starting vertex.
Let's consider the same example graph from before with 4 vertices A, B, C, & D. An example of a
Hamiltonian path (but not a cycle) in this graph would be: A -> B -> C -> D. This path visits each vertex
once but does not loop back to A.
Every Hamiltonian cycle is also a Hamiltonian path, but not every Hamiltonian path is a cycle. If a
graph has a Hamiltonian cycle, it must also have a Hamiltonian path. However, a graph with a
Hamiltonian path may or may not have a cycle.
Finding Hamiltonian paths is also an NP-complete problem, just like finding Hamiltonian cycles. But
for small graphs, we can use algorithms like backtracking to find paths & cycles efficiently.
Now that we understand what Hamiltonian cycles are, let's see how we can find them in a graph
using a backtracking algorithm.
Backtracking is a general algorithmic technique that explores all possible solutions by incrementally
building candidates to the solution and abandoning a candidate ("backtracking") as soon as it
determines that the candidate cannot lead to a valid solution.
1. Start with an empty path and choose any vertex as the starting point.
2. Add the starting vertex to the path.
3. Recursively build the path by choosing the next unvisited vertex and adding it to the path.
4. If the path contains all vertices and the last vertex has an edge to the starting vertex, we have
found a Hamiltonian cycle.
5. If the path does not meet the criteria for a Hamiltonian cycle, backtrack by removing the last
vertex from the path and trying a different unvisited vertex.
6. Continue this process until a Hamiltonian cycle is found or all possibilities have been exhausted.
The backtracking algorithm explores all possible paths in the graph and finds a Hamiltonian cycle if
one exists. If no Hamiltonian cycle is found after exploring all paths, the graph does not have one.
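The steps above can be sketched as follows. Since the original figure is not reproduced here, the adjacency matrix below is an assumed 5-vertex graph, chosen so that the returned cycle matches the output (0, 1, 2, 4, 3, 0) stated in the text:

```python
def hamiltonian_cycle(graph):
    """Backtracking search for a Hamiltonian cycle.
    graph is an adjacency matrix; returns the cycle as a vertex list
    ending back at the start, or None if no cycle exists."""
    n = len(graph)
    path = [0]                                   # fix vertex 0 as the start

    def extend():
        if len(path) == n:
            # all vertices used: a cycle exists if the last connects back
            return graph[path[-1]][0] == 1
        for v in range(1, n):
            # v must be adjacent to the last vertex and not yet visited
            if graph[path[-1]][v] == 1 and v not in path:
                path.append(v)
                if extend():
                    return True
                path.pop()                       # backtrack
        return False

    return path + [0] if extend() else None

# Assumed example graph (adjacency matrix for vertices 0..4)
G = [[0, 1, 0, 1, 0],
     [1, 0, 1, 1, 1],
     [0, 1, 0, 0, 1],
     [1, 1, 0, 0, 1],
     [0, 1, 1, 1, 0]]
```

On this graph the search returns the cycle [0, 1, 2, 4, 3, 0]; on a graph with no Hamiltonian cycle it returns None.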
Introduction
In graph coloring problems, we are asked to color the parts of the graph.
Vertex coloring is one of the most common graph coloring problems. In this problem, we are given a
graph and ‘m’ colors. We need to find a number of ways to color a graph with these m colors such
that no two adjacent nodes are colored the same. Some of the other graph coloring problems,
like Edge Coloring (No vertex is incident to two edges of the same color) and Face
Coloring (Geographical Map Coloring), can be transformed into vertex coloring.
Another famous graph coloring problem is the Chromatic Number. In this problem, we need to find
the minimum number of colors required to color the graph such that no two adjacent nodes are
colored the same.
In the above graph, we can see that we need 3 colors to color the entire graph such that no two
nodes are colored the same.
Chromatic Number
The chromatic number of a graph is the smallest number of colors needed to color the vertices of
the graph such that no two adjacent vertices share the same color. It is a crucial concept in graph
theory, particularly in problems involving scheduling, register allocation, and frequency assignment
in networks. The chromatic number provides insights into the complexity of a graph and can help
determine optimal coloring strategies for various applications.
Graph coloring is an algorithmic technique for assigning colors to the vertices of a graph. The
backtracking approach systematically explores all possibilities to find a valid coloring solution. The
algorithm works by assigning colors to vertices one at a time and backtracking if a conflict arises,
ensuring that adjacent vertices do not share the same color.
1. Select a Vertex: Choose a vertex that has not yet been colored.
2. Assign a Color: Try assigning one of the available colors to it.
3. Check for Validity: For each color, check if it conflicts with the colors assigned to adjacent
vertices.
4. Backtrack: If a conflict arises, revert to the previous vertex and try the next color.
5. Repeat: Continue the process until all vertices are colored or all options are exhausted.
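The steps above can be sketched as follows (an adjacency-matrix representation is assumed):

```python
def graph_coloring(graph, m):
    """Backtracking m-colouring. graph is an adjacency matrix; returns
    a list colour[v] in 0..m-1 with no two adjacent vertices sharing a
    colour, or None if no such colouring exists."""
    n = len(graph)
    colour = [None] * n

    def safe(v, c):
        # c is safe for v if no already-coloured neighbour uses it
        return all(not (graph[v][u] and colour[u] == c) for u in range(n))

    def assign(v):
        if v == n:                      # every vertex coloured
            return True
        for c in range(m):
            if safe(v, c):
                colour[v] = c           # tentative assignment
                if assign(v + 1):
                    return True
                colour[v] = None        # backtrack
        return False

    return colour if assign(0) else None
```

For a triangle (three mutually adjacent vertices) no 2-colouring exists, but a 3-colouring does, matching the chromatic-number discussion above.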
🔷 Problem Statement:
The Graph Colouring Problem involves assigning colours to the vertices of a graph such that no two
adjacent vertices have the same colour, using at most M colours.
🟡 Formal Definition:
Given:
A graph G(V, E)
An integer M (the number of available colours)
Objective:
Assign a colour to each vertex such that:
No two adjacent vertices share the same colour
At most M colours are used
1. Start with the first vertex.
2. Try assigning it one of the M colours.
3. Move to the next uncoloured vertex.
4. Before assigning, check whether the colour is safe (i.e., not used by any adjacent vertex).
5. If safe: assign the colour and move on to the next vertex.
6. If no colour is safe: backtrack to the previous vertex and try a different colour.
7. Repeat until: all vertices are coloured, or every combination has been exhausted.
🛡 Safety Check:
A colour is considered safe for a vertex if none of its adjacent vertices have the same colour.
⏱ Time Complexity:
Worst-case: O(M^V)
o M = number of colours
o V = number of vertices
🧠 Key Points:
Special Case: When M = 2, the problem checks if the graph is bipartite.
📌 Applications:
Map colouring
Frequency assignment in mobile radio systems
Real-world applications of the Graph Colouring Problem span diverse domains:
1. 🗺 Map Colouring
Objective: Assign colours to regions on a map so that no two adjacent regions (countries/states)
have the same colour.
Four Colour Theorem: Any map can be coloured with at most 4 colours such that no
adjacent regions share the same colour.
✅ Use Case: Geography textbooks, political maps, and digital mapping software.
2. 💻 Register Allocation (Compilers)
Objective: Assign limited CPU registers to variables without conflicts during execution.
An edge exists if two variables are live at the same time (interfere).
Colouring the graph means assigning registers such that no two interfering variables share a
register.
✅ Use Case: Optimizing code in compilers for better memory management and performance.
3. 🗓 Exam / Timetable Scheduling
Objective: Assign time slots to exams or classes such that no student or teacher has overlapping
sessions.
✅ Use Case: University exam scheduling, school timetable creation, conference planning.
5. 🔢 Sudoku Solving
Objective: Assign numbers to Sudoku cells such that no row, column, or box contains duplicates.
6. 🧰 Job Scheduling
Objective: Assign time slots to jobs so that jobs sharing a resource never run at the same time.
An edge between two jobs means they share a resource and cannot run simultaneously.
7. 📚 Course Scheduling
Objective: Assign different time slots to courses with overlapping student enrollments.
Many CSPs (like n-queens, Sudoku, map colouring) can be modeled as graph colouring.
The Branch and Bound algorithm is a technique used in combinatorial optimization problems to
systematically search through the space of possible solutions. It is particularly useful for solving
problems where exhaustive search is not feasible due to the large number of possible solutions.
The basic idea behind Branch and Bound is to divide the problem into smaller subproblems, called
branches, and then solve each subproblem using some kind of systematic search method, such as
depth-first search or breadth-first search. As solutions are found for the subproblems, the algorithm
keeps track of the best solution found so far, and uses this information to prune the search tree by
eliminating branches that cannot possibly lead to a better solution than the current best solution.
1. Branching: The original problem is divided into smaller subproblems, typically by making a
series of decisions or choices.
2. Bounding: For each subproblem, a bound on the best solution it can contain is calculated
(an upper bound for maximization problems, a lower bound for minimization). This bound is
used to determine whether the subproblem can be pruned (i.e., eliminated from further
consideration) without further exploration.
3. Searching: The algorithm systematically explores the space of possible solutions, typically
using depth-first search, breadth-first search, or some other search strategy.
4. Backtracking: If a subproblem cannot be pruned and does not lead to a feasible solution, the
algorithm backtracks to the previous decision point and explores a different branch.
5. Updating: As solutions are found for the subproblems, the algorithm updates the current
best solution found so far and adjusts the upper bounds accordingly.
The Branch and Bound algorithm continues this process until all branches have been explored or
pruned, at which point the best solution found is returned as the optimal solution to the original
problem.
Divide and Conquer Approach: The original problem is divided into smaller subproblems,
making it more manageable.
Upper Bound Calculation: An upper bound on the possible solutions is calculated for each
subproblem, aiding in pruning the search tree.
Pruning: Subproblems that cannot possibly lead to a better solution than the current best
solution are eliminated from further consideration.
Backtracking: If a subproblem does not lead to a feasible solution, the algorithm backtracks
to explore other branches.
Objective Function: It deals with optimization problems where an objective function needs
to be maximized or minimized.
Branch and Bound is extensively used in various areas of integer programming, including:
1. Traveling Salesman Problem (TSP): In the TSP, the goal is to find the shortest possible route
that visits each city exactly once and returns to the original city. Branch and Bound can be
used to systematically explore the space of possible routes, pruning branches that are
guaranteed to be suboptimal. It helps in finding an optimal solution among the exponentially
large number of possible routes.
2. Knapsack Problem: The knapsack problem involves selecting a subset of items to maximize
the total value while staying within a given weight constraint. Branch and Bound can be
applied to search through the combinations of items, pruning branches that exceed the
weight constraint or cannot lead to an optimal solution. It efficiently explores the space of
possible item selections to find the optimal solution.
3. Job Scheduling: Job scheduling problems involve assigning tasks to resources over time,
considering constraints such as resource availability, precedence relationships, and job
durations. Branch and Bound can be used to search through the space of possible schedules,
pruning branches that violate constraints or cannot lead to an optimal schedule. It helps in
finding an optimal schedule that minimizes completion time or maximizes resource
utilization.
4. Network Flow Optimization: Network flow optimization problems involve determining the
flow of resources through a network, subject to capacity constraints and other restrictions.
Branch and Bound can be employed to search for the optimal flow configuration, pruning
branches that violate capacity constraints or cannot lead to an optimal flow. It helps in
finding the most efficient utilization of resources in transportation, communication, and
other network-based systems.
5. Facility Location Problems: Facility location problems involve deciding the locations of
facilities to minimize costs or maximize coverage, considering factors such as demand,
transportation costs, and facility capacities. Branch and Bound can be utilized to search for
the optimal facility locations, pruning branches that exceed capacity constraints or cannot
lead to an optimal solution. It assists in making strategic decisions regarding the placement
of facilities such as warehouses, factories, or service centers to optimize overall system
performance.
Efficiency: Branch and Bound efficiently handles problems with large search spaces by
pruning the search tree.
Optimality: It ensures that the optimal solution is found, given the constraints and objective
function.
Flexibility: Branch and Bound can be adapted to various optimization problems, making it a
versatile technique.
Linking Branch and Bound and Decision Tree in Solving Complex Problems
Pruning Techniques: Similar pruning techniques are used in both Branch and Bound and
decision tree algorithms to reduce the search space.
The branch and bound algorithm is a technique used for solving combinatorial optimization
problems. First, it breaks the given problem into multiple sub-problems and then, using a bounding
function, it eliminates those sub-problems that cannot provide an optimal solution.
Combinatorial optimization problems refer to those problems that involve finding the best solution
from a finite set of possible solutions, such as the 0/1 knapsack problem, the travelling salesman
problem and many more.
The branch and bound algorithm can be used in the following scenarios −
As discussed earlier, it is used to solve combinatorial optimization problems.
If the given problem is a mathematical optimization problem, then the branch and bound
algorithm can also be applied.
The branch and bound algorithm works by exploring the search space of the problem in a systematic
way. It uses a tree structure (state space tree) to represent the solutions and their extensions. Each
node in the tree is part of the partial solution, and each edge corresponds to an extension of this
solution by adding or removing an element. The root node represents the empty solution.
The algorithm starts with the root node and moves towards its children nodes. At each level, it
evaluates whether a child node satisfies the constraints of the problem to achieve a feasible solution.
This process is repeated until a leaf node is reached, which represents a complete solution.
Searching techniques in Branch and Bound
There are different approaches to implementing the branch and bound algorithm. The
implementation depends on how to generate the children nodes and how to search the next node to
expand. Some of the common searching techniques are −
Breadth-first search − It maintains a queue of nodes to expand, which means this searching
technique uses First in First out order to search next node.
Least cost search − This searching technique works by computing bound value of each node.
The algorithm selects the node with the lowest bound value to expand next.
Depth-first search − It maintains a stack of nodes to expand, which means this searching
technique uses Last in First out order to search the next node.
The branch and bound algorithm can produce two types of solutions. They are as follows −
Variable size solution − This type of solution is represented by a subset of the given set, which
is the optimal solution to the given problem.
Fixed size solution − This type of solution is represented by an n-tuple (often of 0s and 1s)
whose i-th element indicates whether the i-th item is part of the solution.
Some of the advantages of the branch and bound algorithm are as follows −
It can reduce the time complexity by avoiding unnecessary exploration of the state space
tree.
It has different search techniques that can be used for different types of problems and
preferences.
Some of the disadvantages of the branch and bound algorithm are as follows −
In the worst case scenario, it may search for all the combinations to produce solutions.
The Travelling Salesman Problem (TSP) using the Branch and Bound technique is a systematic way of
reducing the search space by pruning unpromising paths, making it more efficient than brute force.
🧭 Travelling Salesman Problem (TSP): Brief Recap
Given a set of n cities and the cost of travel between each pair of cities, the goal is to find the
shortest possible route that visits each city exactly once and returns to the starting city.
✅ Key Concepts:
1. State Space Tree: Each node represents a partial tour; its children extend the tour by one city.
2. Cost Matrix: A matrix where cost[i][j] represents the cost to travel from city i to city j.
3. Bounding Function: Estimates the lower bound (minimum cost) for a node.
4. Pruning: Discard a path if its lower bound > best known solution.
🔢 Algorithm Steps
Step 1: Initial Reduction
Reduce the rows and columns of the cost matrix to get the initial lower bound.
Step 2: Initialization
Create a root node with the reduced matrix and its bound, and place it in a priority queue.
Step 3: Branching
Pick the node with the lowest bound from the queue.
For each unvisited city from the current city, create a child node:
Step 4: Pruning
If a node's total estimated cost is greater than or equal to the best known cost, prune
(discard) it.
Step 5: Termination
When all cities are visited and the last city returns to the start, update the best solution.
When the queue is empty, the best solution is the shortest route.
A B C D
A ∞ 10 15 20
B 10 ∞ 35 25
C 15 35 ∞ 30
D 20 25 30 ∞
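A sketch of branch and bound on the matrix above, using a simplified bound (the accumulated tour cost itself) rather than the row/column reduction described in the steps; the optimal tour A → B → D → C → A costs 10 + 25 + 30 + 15 = 80:

```python
import math

def tsp(cost):
    """Depth-first branch and bound for TSP: extend the tour one city
    at a time, pruning any partial tour whose accumulated cost already
    reaches the best complete tour found so far."""
    n = len(cost)
    best = math.inf
    best_tour = None

    def search(tour, so_far):
        nonlocal best, best_tour
        if so_far >= best:                  # bound: prune this branch
            return
        if len(tour) == n:                  # all cities visited: close tour
            total = so_far + cost[tour[-1]][0]
            if total < best:
                best, best_tour = total, tour + [0]
            return
        for city in range(1, n):
            if city not in tour:
                search(tour + [city], so_far + cost[tour[-1]][city])

    search([0], 0)                          # start and end at city 0
    return best, best_tour

# Cost matrix from the table above (A=0, B=1, C=2, D=3)
INF = math.inf
COST = [[INF, 10, 15, 20],
        [10, INF, 35, 25],
        [15, 35, INF, 30],
        [20, 25, 30, INF]]
```

A tighter bound (such as the matrix-reduction bound from Step 1) prunes more nodes than the accumulated cost alone, but the branching and pruning structure is the same.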
⏱ Time Complexity
Worst case: O(n!), since all permutations of cities may still be explored; pruning usually
examines far fewer nodes in practice.
✅ Advantages
Guarantees an optimal route and prunes large parts of the search space, so it is usually much
faster than brute force in practice.
❌ Disadvantages
Still exponential in worst case.
In Design and Analysis of Algorithms (DAA), a lower bound is a fundamental concept used to
describe the minimum amount of work any algorithm must do to solve a problem. It helps in
understanding how efficient an algorithm can possibly be — no algorithm can perform better than
this bound in the worst case.
A lower bound of a problem is the minimum number of operations or time complexity that any
algorithm must take to solve the problem, in the worst-case scenario.
Type of Bound   Meaning
Upper Bound     Maximum time an algorithm may take — denotes the algorithm’s worst-case time (Big-O).
Lower Bound     Minimum time any algorithm must take — represents the best possible performance (Big-Ω).
Tight Bound     When an algorithm's upper and lower bounds are the same (Θ-notation).
1. Adversary Argument
Assume an adversary tries to make the algorithm do the maximum amount of work.
2. Decision Tree Model
Model every comparison-based algorithm as a binary decision tree; a tree that distinguishes
n! orderings must have height at least log2(n!). It can be shown that Ω(n log n) is the lower
bound for any comparison sort.
3. Information Theory
Helps in deriving lower bounds for problems where we must "discover" hidden information.
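The information-theoretic bound for comparison sorting can be checked numerically: sorting n items must distinguish n! orderings, and each comparison yields at most one bit, so at least ceil(log2(n!)) comparisons are needed in the worst case:

```python
import math

def min_comparisons(n):
    """Any comparison sort must distinguish n! possible orderings, and
    each comparison yields at most one bit of information, so at least
    ceil(log2(n!)) comparisons are required in the worst case."""
    return math.ceil(math.log2(math.factorial(n)))
```

For example, min_comparisons(5) gives 7, and indeed 7 comparisons are both necessary and sufficient to sort 5 elements; by Stirling's approximation, log2(n!) grows as Θ(n log n), matching the Ω(n log n) bound above.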
🧠 Summary
A lower bound tells you the best time complexity you can hope for any algorithm solving a
problem.
An algorithm is a sequence of steps that take inputs from the user and after some computation,
produces an output. A parallel algorithm is an algorithm that can execute several instructions
simultaneously on different processing devices and then combine all the individual outputs to
produce the final result.
Concurrent Processing
The easy availability of computers along with the growth of Internet has changed the way we store
and process data. We are living in a day and age where data is available in abundance. Every day we
deal with huge volumes of data that require complex computing and that too, in quick time.
Sometimes, we need to fetch data from similar or interrelated events that occur simultaneously. This
is where we require concurrent processing that can divide a complex task and process it multiple
systems to produce the output in quick time.
Concurrent processing is essential where the task involves processing a huge bulk of complex data.
Examples include − accessing large databases, aircraft testing, astronomical calculations, atomic and
nuclear physics, biomedical analysis, economic planning, image processing, robotics, weather
forecasting, web-based services, etc.
What is Parallelism?
Parallelism is the process of executing several sets of instructions simultaneously. It reduces the total
computational time. Parallelism can be implemented by using parallel computers, i.e., computers
with many processors. Parallel computers require parallel algorithms, programming languages,
compilers and operating systems that support multitasking.
Here, we will discuss only parallel algorithms. Before moving further, let us first discuss algorithms
and their types.
What is an Algorithm?
An algorithm is a sequence of instructions followed to solve a problem. While designing an
algorithm, we should consider the architecture of the computer on which it will be executed, which
can broadly be of two types −
Sequential Computer − executes one instruction at a time.
Parallel Computer − executes several instructions simultaneously.
Depending on the architecture, algorithms are also of two types −
Sequential Algorithm − The steps of the algorithm are executed one after another in
chronological order to solve the problem.
Parallel Algorithm − The problem is divided into sub-problems that are executed in parallel to
get individual outputs. Later on, these individual outputs are combined together to get the
final desired output.
It is not easy to divide a large problem into sub-problems. Sub-problems may have data dependency
among them; therefore, the processors have to communicate with each other to solve the problem.
It is often found that the time the processors spend communicating with each other can exceed the
actual processing time. So, while designing a parallel algorithm, proper CPU utilization should be
considered to get an efficient algorithm.
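The divide-and-combine idea can be sketched as follows (a thread pool keeps the sketch short; CPU-bound work in CPython would use a process pool for true parallelism):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    """Divide-and-combine: split the input into one chunk per worker,
    sum the chunks concurrently, then combine the partial results."""
    if not data:
        return 0
    size = (len(data) + workers - 1) // workers        # chunk length
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(sum, chunks))         # sub-problem outputs
    return sum(partials)                               # combine step
```

Note the trade-off discussed above: for a tiny input, the cost of creating workers and combining results exceeds the summing itself, so parallelism only pays off for sufficiently large sub-problems.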
To design an algorithm properly, we must have a clear idea of the basic model of computation in a
parallel computer.
Model of Computation
Both sequential and parallel computers operate on a set (stream) of instructions called algorithms.
These set of instructions (algorithm) instruct the computer about what it has to do in each step.
Depending on the instruction stream and data stream, computers can be classified into four
categories −
SISD Computers
SISD computers contain one control unit, one processing unit, and one memory unit.
In this type of computers, the processor receives a single stream of instructions from the control unit
and operates on a single stream of data from the memory unit. During computation, at each step,
the processor receives one instruction from the control unit and operates on a single data received
from the memory unit.
SIMD Computers
SIMD computers contain one control unit, multiple processing units, and shared memory or
interconnection network.
Here, one single control unit sends instructions to all processing units. During computation, at each
step, all the processors receive a single set of instructions from the control unit and operate on
different set of data from the memory unit.
Each of the processing units has its own local memory unit to store both data and instructions. In
SIMD computers, processors need to communicate among themselves. This is done by shared
memory or by interconnection network.
While some of the processors execute a set of instructions, the remaining processors wait for their
next set of instructions. Instructions from the control unit decide which processors will
be active (execute instructions) and which will be inactive (wait for the next instruction).
MISD Computers
As the name suggests, MISD computers contain multiple control units, multiple processing
units, and one common memory unit.
Here, each processor has its own control unit and they share a common memory unit. All the
processors get instructions individually from their own control unit and they operate on a single
stream of data as per the instructions they have received from their respective control units. These
processors operate simultaneously.
MIMD Computers
MIMD computers have multiple control units, multiple processing units, and a shared
memory or interconnection network.
Here, each processor has its own control unit, local memory unit, and arithmetic and logic unit. They
receive different sets of instructions from their respective control units and operate on different sets
of data.
Note
Based on the physical distance of the processors, multicomputers are of two types −
o Multicomputer − When all the processors are very close to one another (e.g., in the
same room).
o Distributed system − When all the processors are far away from one another (e.g., in
different cities).