
1. Define the term: Quantifier and Explain its Types.

A quantifier is a word or phrase that is used to specify the quantity or amount of something; in logic, it specifies for how many elements of the universe of discourse a statement holds.
Universal Quantification:
If P(a) is a proposition that gives the expected result (is true) for all values of a in the universe of discourse, then the universal quantification of P(a) is denoted by ∀a P(a). The symbol ∀ is read as "for all", i.e., it is the Universal Quantifier. For example, ∀x (x + 1 > x) is true over the real numbers.
Existential Quantification:
If there exists an element a in the universe of discourse such that the proposition P(a) gives the expected result (is true), then the existential quantification of P(a) is denoted by ∃a P(a). The symbol ∃ is read as "for some" or "there exists", i.e., it is the Existential Quantifier.

2. What is a vector? Which operations are performed on a vector?

Q.5 Define Pseudo code.


Pseudo code is a method of specifying an algorithm. It is basically a combination of natural language and programming constructs.
Define divide and conquer algorithm
A divide and conquer algorithm is a powerful problem-solving
strategy that works by breaking down a large and complex problem
into smaller, easier-to-solve subproblems. These subproblems are then
solved recursively, either directly or by further dividing them into
even smaller subproblems. Finally, the solutions to the subproblems
are combined to produce the solution to the original problem.
Write an algorithm for matrix multiplication.
matrixMultiply(A, B):
Assume the dimension of A is (m x n) and the dimension of B is (p x q)
Begin
if n is not the same as p, then exit
otherwise define matrix C as (m x q) with all entries 0
for i in range 0 to m - 1, do
for j in range 0 to q - 1, do
for k in range 0 to n - 1, do
C[i, j] = C[i, j] + (A[i, k] * B[k, j])
done
done
done
End
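As a concrete illustration, here is a minimal Python sketch of the same triple-loop algorithm (the function name matrix_multiply and the sample matrices are only for this example):

def matrix_multiply(A, B):
    # Multiply an (m x n) matrix A by a (p x q) matrix B and return C = A * B.
    m, n = len(A), len(A[0])
    p, q = len(B), len(B[0])
    if n != p:                              # inner dimensions must agree
        raise ValueError("incompatible dimensions")
    C = [[0] * q for _ in range(m)]         # result matrix, initialised to 0
    for i in range(m):
        for j in range(q):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # -> [[19, 22], [43, 50]]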

Quick Sort is a divide-and-conquer sorting algorithm that works by selecting a pivot element and partitioning the remaining elements into two sub-arrays, one containing elements less than the pivot and the other containing elements greater than the pivot. These sub-arrays are then recursively sorted.

The best-case time complexity of Quick Sort is O(n log n), which
occurs when the pivot element is chosen such that it divides the
array into two sub-arrays of roughly equal size. In this case, the
recursion depth is only log n, and each level of the recursion
performs O(n) work.
The worst-case time complexity of Quick Sort is O(n^2), which
occurs when the pivot element is chosen to be the largest or
smallest element in the array. In this case, the partitioning step will
only create one sub-array with all the elements except the pivot,
and the recursion will continue on this sub-array until it has only
one element. This results in n recursive calls, each of which
performs O(n) work.

The average-case time complexity of Quick Sort is also O(n log n), and this is what you typically get with randomly chosen pivot elements. However, certain inputs (for example, an already sorted array combined with a fixed first- or last-element pivot rule) cause Quick Sort to run in O(n^2) time.
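The following is a minimal Python sketch of the scheme described above, using the last element of each sub-array as the pivot (a Lomuto-style partition; the function name and sample data are only for illustration, and other pivot rules such as random or median-of-three plug in the same way):

def quick_sort(arr, lo=0, hi=None):
    # Sort arr in place by partitioning around a pivot and recursing on both sides.
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return
    pivot = arr[hi]                     # last element chosen as the pivot
    i = lo - 1
    for j in range(lo, hi):
        if arr[j] < pivot:              # move elements less than the pivot to the left side
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[hi] = arr[hi], arr[i + 1]   # place the pivot in its final position
    quick_sort(arr, lo, i)              # recursively sort the "less than pivot" sub-array
    quick_sort(arr, i + 2, hi)          # recursively sort the "greater than pivot" sub-array

data = [7, 2, 9, 4, 1, 8]
quick_sort(data)
print(data)                             # -> [1, 2, 4, 7, 8, 9]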

Binary search is a search algorithm that finds an element in a sorted list or array
by repeatedly dividing the search space in half. It is much faster than linear
search, especially for large datasets.

Here's how it works:

1. Start with the middle element of the list.


2. Compare the target value to the middle element.
3. If the target value is less than the middle element, then the target value
must be in the lower half of the list. So, discard the upper half of the list
and repeat steps 1 and 2 on the lower half.
4. If the target value is greater than the middle element, then the target value
must be in the upper half of the list. So, discard the lower half of the list
and repeat steps 1 and 2 on the upper half.
5. Repeat steps 1 to 4 until the target value is found or the list is empty.
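A minimal Python sketch of these steps (the function name binary_search and the sample list are only for illustration):

def binary_search(items, target):
    # Return the index of target in the sorted list items, or -1 if it is absent.
    low, high = 0, len(items) - 1
    while low <= high:                  # the search space is not yet empty
        mid = (low + high) // 2         # middle element of the current range
        if items[mid] == target:
            return mid
        elif target < items[mid]:
            high = mid - 1              # discard the upper half
        else:
            low = mid + 1               # discard the lower half
    return -1                           # target not found

print(binary_search([2, 5, 8, 12, 16, 23, 38], 16))   # -> 4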

Here are some of the advantages of binary search:

• It is much faster than linear search for large datasets. In the best case,
binary search can find the target value in just one comparison!
• It is relatively simple to implement.
• It can be used to search for any element in a sorted list or array.
❖ Multiply 981 and 1234 using the divide-and-conquer method.
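A worked sketch using Karatsuba's divide-and-conquer splitting (one possible way to split; the halves chosen here are two digits each):

Write 981 = 9·100 + 81 and 1234 = 12·100 + 34, so a = 9, b = 81, c = 12, d = 34.
1. ac = 9 × 12 = 108
2. bd = 81 × 34 = 2754
3. (a + b)(c + d) = 90 × 46 = 4140, so ad + bc = 4140 − 108 − 2754 = 1278
4. Result = ac·10^4 + (ad + bc)·10^2 + bd = 1,080,000 + 127,800 + 2,754 = 1,210,554

Only three smaller multiplications are needed instead of the four used by the straightforward split.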
Difference between Greedy Approach and
Dynamic Programming

1. Greedy Method: We make our decision based on the best current situation.
   Dynamic Programming: We also make a choice at every step, but the choice may depend on the solutions to subproblems.

2. Greedy Method: There is no assurance of obtaining the optimal solution.
   Dynamic Programming: You get the assurance of an optimal solution.

3. Greedy Method: The greedy approach is used to get an optimal solution.
   Dynamic Programming: Dynamic programming is also used to get an optimal solution.

4. Greedy Method: The greedy method never alters its earlier choices, which makes it more efficient in terms of memory.
   Dynamic Programming: This technique relies on memoization, which increases memory usage and makes it less memory-efficient.

5. Greedy Method: Greedy techniques are faster.
   Dynamic Programming: Dynamic programming is comparatively slower.
Floyd's algorithm is a dynamic programming algorithm that can be
used to find the shortest paths between all pairs of vertices in a
weighted graph. It works by iteratively computing the shortest paths
between all pairs of vertices, where each iteration considers one more
intermediate vertex. The algorithm is relatively simple to implement
and has a time complexity of O(n^3), where n is the number of
vertices in the graph.
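A minimal Python sketch of the idea (the adjacency matrix below is made up purely for illustration):

INF = float("inf")

def floyd_warshall(dist):
    # All-pairs shortest paths on an adjacency matrix dist, where dist[i][j] is the
    # edge weight from i to j (INF if there is no edge, 0 on the diagonal).
    n = len(dist)
    for k in range(n):                  # each iteration allows vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

graph = [[0, 3, INF, 7],
         [8, 0, 2, INF],
         [5, INF, 0, 1],
         [2, INF, INF, 0]]
floyd_warshall(graph)
print(graph[3][1])                      # -> 5, via the path 3 -> 0 -> 1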

The complexity of an algorithm is a measure of how much time and space it takes to run. The time complexity of an algorithm is typically measured in terms of the number of operations it performs, as a function of the input size. The space complexity of an algorithm is typically measured in terms of the amount of memory it requires, as a function of the input size.
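For instance, the small (illustrative) function below performs one addition per element, so its time complexity is O(n), while it uses only a single accumulator regardless of the input size, so its extra space complexity is O(1):

def sum_list(values):
    total = 0                 # one accumulator: O(1) extra space
    for v in values:          # n iterations for an input of size n: O(n) time
        total += v
    return total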


Back Edge: It is an edge (u, v) such that v is an ancestor of node u in the DFS tree but (u, v) is not itself a tree edge. For example, if node 4 is an ancestor of node 5 in the DFS tree, then the edge from 5 to 4 is a back edge. The presence of a back edge indicates a cycle in a directed graph.
Acyclic Directed Graph: It is a directed graph that contains no cycle. This type of graph is used in compilers for identifying common subexpressions.

❖ Give the characteristics of Greedy Algorithms.


• The algorithm solves its problem by finding an optimal solution, which can be a maximum or a minimum value. It makes choices based on the best option currently available.
• Greedy algorithms are fast and efficient, typically with a time complexity of O(n log n) or O(n), and are therefore applied to large-scale problems.
• The search for a solution is done without revisiting earlier choices – the algorithm runs through the input once.
• It is straightforward and easy to implement.

Dijkstra’s algorithm is a popular algorithm for solving the single-source shortest path problem on graphs with non-negative edge weights, i.e., it finds the shortest distance from a source vertex to the other vertices of a graph. It was conceived by Dutch computer scientist Edsger W. Dijkstra in 1956.

Algorithm for Dijkstra’s Algorithm:


1. Mark the source node with a current distance of 0 and the rest with infinity.
2. Set the unvisited node with the smallest current distance as the current node.
3. For each neighbour N of the current node, add the current distance of the current node to the weight of the edge connecting the current node to N. If this sum is smaller than the current distance of N, set it as the new current distance of N.
4. Mark the current node as visited.
5. Go to step 2 if any nodes remain unvisited.
Step 1: Start from Node 0 and mark it as visited.

Step 2: Check the adjacent nodes. Now we have two choices (either choose Node 1 with distance 2 or choose Node 2 with distance 6); choose the node with the minimum distance. In this step Node 1 is the minimum-distance adjacent node, so mark it as visited and add up the distance.
Distance: Node 0 -> Node 1 = 2


Step 3: Then move forward and check the adjacent node, which is Node 3; mark it as visited and add up the distance. Now the distance will be:
Distance: Node 0 -> Node 1 -> Node 3 = 2 + 5 = 7

Step 4: Again we have two choices for adjacent nodes (either choose Node 4 with distance 10 or choose Node 5 with distance 15); choose the node with the minimum distance. In this step Node 4 is the minimum-distance adjacent node, so mark it as visited and add up the distance.
Distance: Node 0 -> Node 1 -> Node 3 -> Node 4 = 2 + 5 + 10 = 17


Step 5: Again, move forward and check the adjacent node, which is Node 6; mark it as visited and add up the distance. Now the distance will be:
Distance: Node 0 -> Node 1 -> Node 3 -> Node 4 -> Node 6 = 2 + 5 + 10 + 2 = 19

So, the shortest distance from the source vertex (Node 0) to Node 6 is 19, which is the optimal one.
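A minimal Python sketch of Dijkstra's algorithm using a priority queue. The adjacency list below reproduces the edge weights used in the walkthrough (0->1 = 2, 0->2 = 6, 1->3 = 5, 3->4 = 10, 3->5 = 15, 4->6 = 2); the remaining weights (2->3, 5->6) are assumed for illustration, since the original figure is not included:

import heapq

def dijkstra(graph, source):
    # graph maps each vertex to a list of (neighbour, weight) pairs, weights non-negative.
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    pq = [(0, source)]                  # (current distance, vertex)
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)        # unvisited vertex with the smallest current distance
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph[u]:
            if d + w < dist[v]:         # relax the edge u -> v
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

graph = {0: [(1, 2), (2, 6)], 1: [(3, 5)], 2: [(3, 8)],
         3: [(4, 10), (5, 15)], 4: [(6, 2)], 5: [(6, 6)], 6: []}
print(dijkstra(graph, 0))   # -> {0: 0, 1: 2, 2: 6, 3: 7, 4: 17, 5: 22, 6: 19}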

❖ Difference between Backtracking and the Branch-N-Bound technique

Why is it used?
  Backtracking: Used to solve decision-based problems.
  Branch-N-Bound: Used to solve optimization problems.

Nodes
  Backtracking: The nodes of the state space tree are explored depth-first until a solution is found.
  Branch-N-Bound: The state space tree is searched for an optimal solution.

Efficiency
  Backtracking: More efficient.
  Branch-N-Bound: Less efficient.

Function
  Backtracking: It involves a feasibility function.
  Branch-N-Bound: It involves a bounding function.

Traversal
  Backtracking: It traverses the tree by depth-first search.
  Branch-N-Bound: It traverses the tree by breadth-first search.

Solves
  Backtracking: Backtracking can solve games such as chess and Sudoku.
  Branch-N-Bound: It is not used to solve game problems.

Applications
  Backtracking: Used to solve the N-Queens problem, the Hamiltonian cycle, and problems based on graph colouring.
  Branch-N-Bound: Used to solve problems such as the Travelling Salesman Problem.
❖ Explain Minimax Principle

Minimax is a kind of backtracking algorithm that is used in decision making and game theory to find the optimal move for a player, assuming that your opponent also plays optimally. It is widely used in two-player turn-based games such as Tic-Tac-Toe, Backgammon, Mancala, Chess, etc.
In Minimax the two players are called maximizer and minimizer.
The maximizer tries to get the highest score possible while
the minimizer tries to do the opposite and get the lowest score possible.
Every board state has a value associated with it. In a given state if the
maximizer has upper hand, then, the score of the board will tend to be some
positive value. If the minimizer has the upper hand in that board state, then
it will tend to be some negative value. The values of the board are
calculated by some heuristics which are unique for every type of game.
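A minimal Python sketch of the principle on a toy game tree (the nested-list tree below is invented for illustration; in a real game the leaves would come from a heuristic evaluation of board states):

def minimax(node, is_maximizer):
    # A leaf (int) is a final board score; an inner node (list) holds the
    # positions reachable in one move from the current state.
    if isinstance(node, int):
        return node                     # terminal state: return its heuristic value
    child_scores = [minimax(child, not is_maximizer) for child in node]
    return max(child_scores) if is_maximizer else min(child_scores)

# Depth-2 toy tree: the maximizer moves first, the minimizer replies.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, True))   # -> 3 (the minimizer answers 3, 2 and 0; the maximizer picks 3)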

❖ What is the advantage of backtracking algorithm?

❖ What is the difference between live node and dead node?


❖ Explain P, NP, NP complete and NP-Hard problems. Give
examples of each.

P, NP, NP-Complete, and NP-Hard Problems Explained with Examples:
P (Polynomial Time):

• These are problems that can be solved by a deterministic algorithm in polynomial time (e.g., O(n), O(n^2)).
• Think of tasks like sorting a list of numbers or checking if a number is
prime.

NP (Nondeterministic Polynomial Time):

• These are problems where a solution can be verified in polynomial time, even though finding the solution itself might not be feasible in polynomial time.
• Imagine having a magic box that checks if a solution to a problem is
correct, but you don't know how to actually find the solution yourself.
• Examples:
o Satisfiability problem (SAT): Given a Boolean formula, check if
there exists an assignment of truth values (True/False) to the
variables that makes the formula true.
o Hamiltonian cycle problem: Given a graph, determine whether there is a closed loop that visits every node exactly once.

NP-complete:
• These are the "toughest" problems in NP. Any problem in NP can be
reduced to an NP-complete problem in polynomial time.
• Finding a solution to an NP-complete problem is like finding the master
key that unlocks all other NP problems.
• If we could efficiently solve any NP-complete problem, we could
efficiently solve all NP problems, which would be a major breakthrough
in computer science.
• Examples:
o 3SAT: A specific version of SAT where each clause contains exactly 3 literals.
o Knapsack problem: Given a set of items with weights and values,
find the subset with the maximum total value that doesn't exceed a
weight limit.

NP-Hard:

• These are problems at least as hard as NP-complete problems. Every NP problem can be reduced to an NP-hard problem in polynomial time.
• However, unlike NP-complete problems, not all NP-hard problems are
known to be in NP.
• Examples:
o Subgraph isomorphism problem: Given two graphs, determine if one is a subgraph of the other (this problem happens to be NP-complete as well).
o Halting problem: Determine whether a given program halts on a given input; it is NP-hard but not in NP, since it is undecidable.

❖ What is a Finite Automaton? Explain the use of finite automata for string matching with a suitable example.

A finite automaton (FA) is a theoretical model of computation that represents a simple machine with a finite set of states and transitions between them. It can be used to recognize patterns in strings, making it a powerful tool for string matching.

Benefits of using FAs for string matching:


Efficient: Once the automaton is built, it scans the text one character at a time without backing up, so the matching phase runs in time linear in the length of the text, which is efficient for large datasets.

Flexible: They can be designed to recognize various patterns by modifying the states and transitions.

Easy to understand: The concept of FAs is relatively simple and intuitive, making them a good starting point for learning about pattern recognition algorithms.
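As a suitable example, the sketch below (in Python, purely illustrative) builds the transition table of a finite automaton for the pattern "abab" and then runs it over a text in a single left-to-right pass; state q records how many characters of the pattern are currently matched, and reaching state m (the pattern length) reports an occurrence:

def build_dfa(pattern, alphabet):
    # delta[q][c] = length of the longest prefix of pattern that is a suffix of pattern[:q] + c
    m = len(pattern)
    delta = [{} for _ in range(m + 1)]
    for q in range(m + 1):
        for c in alphabet:
            k = min(m, q + 1)
            while k > 0 and not (pattern[:q] + c).endswith(pattern[:k]):
                k -= 1
            delta[q][c] = k
    return delta

def fa_match(text, pattern):
    # Return the starting indices of every occurrence of pattern in text.
    delta = build_dfa(pattern, set(text) | set(pattern))
    m, q, hits = len(pattern), 0, []
    for i, c in enumerate(text):
        q = delta[q][c]                 # one transition per text character
        if q == m:                      # accepting state reached: a match ends at i
            hits.append(i - m + 1)
    return hits

print(fa_match("ababcababc", "abab"))   # -> [0, 5]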
❖ Hamiltonian Problem

❖ The Naive String-Matching Algorithm


NAIVE-STRING-MATCHER (T, P)
1. n ← length[T]
2. m ← length[P]
3. for s ← 0 to n - m
4.     do if P[1..m] = T[s + 1..s + m]
5.         then print "Pattern occurs with shift" s
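A direct Python rendering of this pseudocode (0-indexed; the names are chosen for illustration):

def naive_string_matcher(T, P):
    # Try every possible shift s and compare P against T[s .. s + m - 1].
    n, m = len(T), len(P)
    for s in range(n - m + 1):
        if T[s:s + m] == P:
            print("Pattern occurs with shift", s)

naive_string_matcher("31415926535", "26")   # -> Pattern occurs with shift 6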

❖ The Rabin-Karp-Algorithm
The Rabin-Karp string matching algorithm calculates a hash value for the pattern, as well as for each M-character window of the text to be compared. If the hash values are unequal, the algorithm moves on and computes the hash value for the next M-character window. If the hash values are equal, the algorithm compares the pattern with the M-character window character by character. In this way, there is only one hash comparison per text window, and character matching is only required when the hash values match.

Example: For string matching with working modulus q = 11, how many spurious hits does the Rabin-Karp matcher encounter in the text T = 31415926535?

1. T = 31415926535
2. P = 26
3. Here T has length 11 and the working modulus is q = 11.
4. P mod q = 26 mod 11 = 4
5. Now find every window of T whose hash value is also 4 and check which of them actually match P.
Solution:
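A worked sketch of the computation, taking T to be the 11 digits shown (as step 3 states):

The 2-digit windows of T and their values mod 11 are:
31 mod 11 = 9, 14 mod 11 = 3, 41 mod 11 = 8, 15 mod 11 = 4, 59 mod 11 = 4,
92 mod 11 = 4, 26 mod 11 = 4, 65 mod 11 = 10, 53 mod 11 = 9, 35 mod 11 = 2.

Four windows hash to 4, the same value as P = 26. Of these, only the window 26 is a genuine match; the windows 15, 59 and 92 have the right hash value but do not equal the pattern. So the Rabin-Karp matcher encounters 3 spurious hits.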
