Unit III - Brute Force & Exhaustive Search
Brute Force & Exhaustive Search: Introduction to Brute Force approach, Selection Sort
and Bubble Sort, Sequential search, Exhaustive Search- Travelling Salesman Problem and
Knapsack Problem, Depth First Search, Breadth First Search
➢ Brute force is a straightforward approach to solving a problem, usually directly based on the
problem statement and definitions of the concepts involved.
➢ The principal strengths of the brute-force approach are wide applicability and simplicity.
➢ Its principal weakness is the subpar (below a usual or normal level) efficiency of most brute-force algorithms.
➢ The following noted algorithms can be considered as examples of the brute-force approach:
✓ definition-based algorithm for matrix multiplication
✓ selection sort
✓ sequential search
✓ straightforward string-matching algorithm
We start selection sort by scanning the entire given list to find its smallest element and exchange
it with the first element, putting the smallest element in its final position in the sorted list.
Then we scan the list, starting with the second element, to find the smallest among the last n
− 1 elements and exchange it with the second element, putting the second smallest element in its final
position.
Generally, on the i-th pass through the list, which we number from 0 to n − 2, the algorithm
searches for the smallest item among the last n − i elements and swaps it with A[i].
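A minimal Python sketch of selection sort as described above (the function name and the sample list are illustrative, not part of the original notes):

def selection_sort(a):
    # Sort the list a in place by repeatedly selecting the smallest remaining element.
    n = len(a)
    for i in range(n - 1):                      # passes are numbered 0 .. n-2
        min_idx = i
        for j in range(i + 1, n):               # scan the last n - i elements
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]     # swap the smallest item into position i
    return a

# Example: selection_sort([89, 45, 68, 90, 29, 34, 17]) returns [17, 29, 34, 45, 68, 89, 90]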
Another brute-force application to the sorting problem is to compare adjacent elements of the list and exchange them
if they are out of order. By doing it repeatedly, we end up “bubbling up” the largest element to the last position on the list.
The next pass bubbles up the second largest element, and so on, until after n −1 passes the list is sorted.
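A similar Python sketch of bubble sort, matching the description above (names are illustrative):

def bubble_sort(a):
    n = len(a)
    for i in range(n - 1):                      # after pass i, the i+1 largest items are in place
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:                 # adjacent elements out of order: exchange them
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

# Example: bubble_sort([89, 45, 68, 90, 29, 34, 17]) returns [17, 29, 34, 45, 68, 89, 90]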
Example
Write the example given in the class work
Many important problems require finding an element with a special property in a domain that
grows exponentially (or faster) with an instance size. Typically, such problems arise in situations that
involve—explicitly or implicitly—combinatorial objects such as permutations, combinations, and
subsets of a given set.
Many such problems are optimization problems: they ask to find an element that maximizes
or minimizes some desired characteristic such as a path length or an assignment cost.
As an example, consider the exponentiation problem: compute a^n for a nonzero number a and a
positive integer n. Although this problem might seem trivial, it provides a useful vehicle for illustrating
several algorithm design strategies, including brute force.
By the definition of exponentiation,
a^n = a ∗ . . . ∗ a    (n times)
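A brute-force Python sketch that follows this definition literally (the function name is illustrative):

def power_brute_force(a, n):
    # multiply by a exactly n times, as in the definition a^n = a * ... * a
    result = 1
    for _ in range(n):
        result = result * a
    return result

# Example: power_brute_force(2, 10) returns 1024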
The sequential algorithm simply compares successive elements of a given list with a given search
key until either a match is encountered (successful search) or the list is exhausted without finding a
match (unsuccessful search).
A simple extra trick is often employed in implementing sequential search: if we append the
search key to the end of the list, the search for the key will have to be successful, and therefore we can
eliminate the end of list check altogether.
ALGORITHM SequentialSearch2(A[0..n], K)
//Implements sequential search with the search key placed in A[n] as a sentinel
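Only the header of SequentialSearch2 appears above; the following Python sketch reconstructs the sentinel idea just described (the body is a reconstruction under that assumption, not the original pseudocode):

def sequential_search_with_sentinel(a, key):
    # copy the list and append the key as a sentinel, so the loop cannot run past the end
    a = a + [key]
    i = 0
    while a[i] != key:
        i += 1
    # i < len(a) - 1 means the key was found before the sentinel position
    return i if i < len(a) - 1 else -1

# Example: sequential_search_with_sentinel([3, 8, 5, 1], 5) returns 2
#          sequential_search_with_sentinel([3, 8, 5, 1], 7) returns -1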
The Travelling salesman problem (TSP) has been attracting researchers for the last 150 years by
its seemingly simple formulation, important applications, and interesting connections to other
combinatorial problems.
➢ In layman’s terms, the problem asks to find the shortest tour through a given set of n
cities that visits each city exactly once before returning to the city where it started
(an exhaustive-search sketch is given after this list).
➢ The problem can be conveniently modelled by a weighted graph, with the graph’s
vertices representing the cities and the edge weights specifying the distances.
➢ Then the problem can be stated as the problem of finding the shortest Hamiltonian
circuit of the graph. (A Hamiltonian circuit is defined as a cycle that passes through all
the vertices of the graph exactly once. It is named after the Irish mathematician Sir
William Rowan Hamilton (1805–1865), who became interested in such cycles as an
application of his algebraic discoveries.)
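To illustrate the exhaustive-search approach to the travelling salesman problem described above, here is a hedged Python sketch that tries every tour starting and ending at city 0; the distance matrix in the comment is an illustrative instance, not taken from the notes:

from itertools import permutations

def tsp_brute_force(dist):
    # Exhaustive search: evaluate every ordering of the remaining cities.
    n = len(dist)
    best_tour, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):          # all (n-1)! orderings of the other cities
        tour = (0,) + perm + (0,)
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost

# Illustrative 4-city symmetric instance:
# dist = [[0, 2, 5, 7], [2, 0, 8, 3], [5, 8, 0, 1], [7, 3, 1, 0]]
# tsp_brute_force(dist) returns a shortest tour of length 11.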
3.2.2 Knapsack Problem
Given n items of known weights w1, w2, . . . , wn and values v1, v2, . . . , vn and a knapsack of capacity W,
find the most valuable subset of the items that fit into the knapsack.
Example: a transport plane that has to deliver the most valuable set of items to a remote location
without exceeding the plane’s capacity.
Given N items, each with an associated weight and profit, and a bag with capacity W (i.e., the bag can
hold at most a total weight of W), the task is to put items into the bag such that the sum of the profits
associated with them is the maximum possible.
Note: The constraint here is that we either put an item completely into the bag or do not put it in at all
(it is not possible to put a part of an item into the bag).
A simple solution is to consider all subsets of items and calculate the total weight and profit of each
subset. Consider only the subsets whose total weight does not exceed W, and from all such subsets pick
the one with the maximum profit.
Optimal Substructure: To consider all subsets of items, there can be two cases for every item.
Case 1: The item is included in the optimal subset.
Case 2: The item is not included in the optimal set.
For example:
profit[] = { 60, 100, 120 };
weight[] = { 10, 20, 30 };
W = 50;
Take items B and C (the second and third items):
total weight = 20 + 30 = 50
total profit = 100 + 120 = 220
Maximum profit is 220.
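A hedged Python sketch of the exhaustive search just described, run on the example data above (the function name is illustrative):

def knapsack_brute_force(weights, profits, capacity):
    # Try all 2^n subsets and keep the most profitable one that fits in the knapsack.
    n = len(weights)
    best_profit, best_subset = 0, []
    for mask in range(1 << n):                      # each bit pattern encodes one subset
        subset = [i for i in range(n) if mask & (1 << i)]
        total_weight = sum(weights[i] for i in subset)
        total_profit = sum(profits[i] for i in subset)
        if total_weight <= capacity and total_profit > best_profit:
            best_profit, best_subset = total_profit, subset
    return best_profit, best_subset

# Example data from above:
# knapsack_brute_force([10, 20, 30], [60, 100, 120], 50) returns (220, [1, 2])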
A greedy alternative, used for the fractional variant of the problem (where part of an item may be
taken), processes the items in decreasing order of their value-to-weight ratio. This always gives the
maximum profit for the fractional problem because, at each step, it adds the element with the maximum
possible profit for the weight it uses.
a) If the weight of the current item is less than or equal to the remaining capacity, add the whole
item’s value to the result.
b) Else, add as much of the current item as fits and break out of the loop.
Finally, display the maximum profit.
Iteration:
For i = 0, weight = 10 which is less than W. So add this element in the knapsack.
profit = 60 and remaining W = 50 – 10 = 40.
For i = 1, weight = 20 which is less than W. So add this element too.
profit = 60 + 100 = 160 and remaining W = 40 – 20 = 20.
For i = 2, weight = 30 is greater than W. So add 20/30 fraction = 2/3 fraction of the element.
Therefore profit = 2/3 * 120 + 160 = 80 + 160 = 240 and remaining W becomes 0.
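A hedged Python sketch of the greedy procedure traced above for the fractional variant (the sort by value-to-weight ratio is assumed, and the names are illustrative):

def fractional_knapsack(weights, profits, capacity):
    # Greedy: take items in decreasing profit/weight order, splitting the last one if needed.
    items = sorted(zip(weights, profits), key=lambda wp: wp[1] / wp[0], reverse=True)
    total_profit = 0.0
    remaining = capacity
    for w, p in items:
        if w <= remaining:                  # the whole item fits
            total_profit += p
            remaining -= w
        else:                               # take only the fraction that still fits, then stop
            total_profit += p * (remaining / w)
            break
    return total_profit

# Example data from above:
# fractional_knapsack([10, 20, 30], [60, 100, 120], 50) returns 240.0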
➢ The term “exhaustive search” can also be applied to two very important algorithms that
systematically process all vertices and edges of a graph. These two traversal algorithms are
depth-first search (DFS) and breadth-first search (BFS).
➢ These algorithms have proved to be very useful for many applications involving graphs in
artificial intelligence and operations research.
➢ In addition, they are indispensable for efficient investigation of fundamental properties of graphs
such as connectivity and cycle presence.
Depth First Search is a traversal approach in which the traversal begins at the root node and proceeds
through the nodes as far as possible until we reach a node with no unvisited children.
➢ DFS (Depth First Search) makes use of the Stack data structure.
➢ DFS is advantageous whenever the target is remote from the source.
➢ When compared to BFS, DFS is faster
ALGORITHM
i) Initially all vertices are marked unvisited (false).
ii) The DFS algorithm starts at a vertex u in the graph. By starting at vertex u it considers the edges
from u to other vertices.
• If the edge leads to an already visited vertex, then backtrack to current vertex u.
• If an edge leads to an unvisited vertex, then go to that vertex and start processing
from that vertex. That means the new vertex becomes the current root for
traversal.
iii) Follow this process until all vertices are marked visited.
Here an adjacency matrix is used to store the connections between the vertices.
ALGORITHM DFS(G)
//Implements a depth-first search traversal of a given graph
//Input: Graph G = ⟨V, E⟩
//Output: Graph G with its vertices marked with consecutive integers
//        in the order they are first encountered by the DFS traversal
mark each vertex in V with 0 as a mark of being “unvisited”
count ← 0
for each vertex v in V do
    if v is marked with 0
        dfs(v)

dfs(v)
//visits recursively all the unvisited vertices connected to vertex v
//by a path and numbers them in the order they are encountered
//via global variable count
count ← count + 1; mark v with count
for each vertex w in V adjacent to v do
    if w is marked with 0
        dfs(w)
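A hedged Python sketch of the recursive DFS above, using an adjacency matrix as mentioned earlier (function and variable names are illustrative):

def dfs_traversal(adj_matrix):
    # Number the vertices in the order DFS first encounters them.
    n = len(adj_matrix)
    mark = [0] * n                          # 0 means "unvisited"
    count = 0

    def dfs(v):
        nonlocal count
        count += 1
        mark[v] = count                     # mark v with the next consecutive integer
        for w in range(n):
            if adj_matrix[v][w] and mark[w] == 0:
                dfs(w)                      # visit an unvisited neighbour recursively

    for v in range(n):
        if mark[v] == 0:
            dfs(v)                          # restart DFS in every unvisited component
    return mark

# Example: a path 0-1-2 plus an isolated vertex 3
# adj = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]]
# dfs_traversal(adj) returns [1, 2, 3, 4]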
Breadth First Search is a traversal approach in which we first walk through all nodes on the same level
before moving on to the next level.
➢ The Breadth First Search (BFS) algorithm is used to search a graph data structure for a node
that meets a set of criteria. It starts at the root of the graph and visits all nodes at the current
depth level before moving on to the nodes at the next depth level.
➢ BFS (Breadth First Search) finds shortest paths (in terms of the number of edges) using the Queue data structure.
➢ BFS works better whenever the target is near the source.
➢ When compared to DFS, BFS is slower.
I. Start by putting any one of the graph's vertices at the back of a queue.
II. Take the front item of the queue and add it to the visited list.
III. Create a list of that vertex's adjacent nodes. Add the ones which aren't in the visited list to the
back of the queue.
IV. Keep repeating steps 2 and 3 until the queue is empty.
bfs(v)
//visits all the unvisited vertices connected to vertex v
//by a path and numbers them in the order they are visited
//via global variable count
count ← count + 1; mark v with count and initialize a queue with v
while the queue is not empty do
    for each vertex w in V adjacent to the front vertex do
        if w is marked with 0
            count ← count + 1; mark w with count
            add w to the queue
    remove the front vertex from the queue
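A hedged Python sketch of the bfs(v) routine above, using collections.deque as the queue (names are illustrative):

from collections import deque

def bfs_traversal(adj_matrix):
    # Number the vertices in the order BFS first encounters them.
    n = len(adj_matrix)
    mark = [0] * n                          # 0 means "unvisited"
    count = 0
    for start in range(n):
        if mark[start] == 0:                # start a new BFS in every unvisited component
            count += 1
            mark[start] = count
            queue = deque([start])
            while queue:
                v = queue[0]                # examine the front vertex
                for w in range(n):
                    if adj_matrix[v][w] and mark[w] == 0:
                        count += 1
                        mark[w] = count     # mark w when it is first encountered
                        queue.append(w)
                queue.popleft()             # remove the front vertex from the queue
    return mark

# Example: a path 0-1-2 plus an isolated vertex 3
# adj = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]]
# bfs_traversal(adj) returns [1, 2, 3, 4]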
                                 DFS                            BFS
Data structure                   a stack                        a queue
Applications                     connectivity, acyclicity,      connectivity, acyclicity,
                                 articulation points            minimum-edge paths
Efficiency (adjacency matrix)    Θ(|V|²)                        Θ(|V|²)
Efficiency (adjacency lists)     Θ(|V| + |E|)                   Θ(|V| + |E|)