Algorithm Mod 5 Full Notes
Uploaded by Ashmy Shams

Module-V || Greedy and Dynamic Programming

• Theoretical Foundation for Greedy Algorithms - Matroid Theory
• Greedy Strategy vs Dynamic Programming
• Complexity Theory: Classes P and NP - Polynomial Time Reductions
Greedy is an algorithmic paradigm that builds up a solution piece by piece, always choosing
the next piece that offers the most obvious and immediate benefit. Greedy algorithms are
used for optimization problems.

An optimization problem can be solved using Greedy if the problem has the following
property:

• At every step, we can make a choice that looks best at the moment and still obtain the
optimal solution to the complete problem.
• Some popular Greedy Algorithms are Fractional Knapsack, Dijkstra’s algorithm,
Kruskal’s algorithm, Huffman coding and Prim’s Algorithm
• Greedy algorithms are sometimes also used to get an approximation for hard
optimization problems. For example, the Traveling Salesman Problem is an NP-Hard
problem. A greedy choice for this problem is to pick the nearest unvisited city from
the current city at every step. These choices don’t always produce the optimal
solution but can be used to get an approximately optimal solution.
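The nearest-city heuristic described above can be sketched in Python. This is a minimal illustration, assuming a small made-up distance matrix; the helper name `nearest_neighbour_tour` is not from the source.

```python
def nearest_neighbour_tour(dist, start=0):
    """Greedily visit the nearest unvisited city at every step."""
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        current = tour[-1]
        # Greedy choice: nearest unvisited city from the current city.
        nxt = min(unvisited, key=lambda c: dist[current][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Illustrative 4-city distance matrix (symmetric).
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(nearest_neighbour_tour(dist))  # [0, 1, 3, 2]
```

The tour produced is not guaranteed to be optimal, which is exactly the approximation trade-off the text describes.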

However, it’s important to note that not all problems are suitable for greedy algorithms. They
work best when the problem exhibits the following properties:

Greedy Choice Property: The optimal solution can be constructed by making the best local
choice at each step.
Optimal Substructure: The optimal solution to the problem contains the optimal solutions
to its subproblems.
Characteristics of Greedy Algorithm
Here are the characteristics of a greedy algorithm:

• Greedy algorithms are simple and easy to implement.
• They are efficient in terms of time complexity, often providing quick solutions. Greedy
algorithms are typically preferred over Dynamic Programming for problems where
both apply. For example, the Jump Game problem and the Single Source Shortest
Path problem (Dijkstra is preferred over Bellman-Ford when there are no negative
weights).
• These algorithms do not reconsider previous choices, as they make decisions based
on current information without looking ahead.
These characteristics help to define the nature and usage of greedy algorithms in problem-
solving.

How does the Greedy Algorithm work?


Greedy algorithms solve optimization problems by making the best local choice at each step
in the hope of finding the global optimum. It’s like taking the best option available at each
moment, hoping it will lead to the best overall outcome.

Here’s how it works:

• Start with the initial state of the problem. This is the starting point from where you
begin making choices.
• Evaluate all possible choices you can make from the current state. Consider all the
options available at that specific moment.
• Choose the option that seems best at that moment, regardless of future
consequences. This is the “greedy” part – you take the best option available now,
even if it might not be the best in the long run.
• Move to the new state based on your chosen option. This becomes your new starting
point for the next iteration.
• Repeat steps 2-4 until you reach the goal state or no further progress is possible. Keep
making the best local choices until you reach the end of the problem or get stuck.

Example:

Let’s say you have a set of coins with values {1, 2, 5, 10, 20, 50, 100} and you need to give
change for 36 using the minimum number of coins.

The greedy algorithm for making change would work as follows:

1. Start with the largest coin value that is less than or equal to the amount to be changed.
In this case, the largest coin less than 36 is 20 .
2. Subtract the largest coin value from the amount to be changed, and add the coin to
the solution. In this case, subtracting 20 from 36 gives 16 , and we add a 20 coin to
the solution.
3. Repeat steps 1 and 2 until the amount to be changed becomes 0.

So, using the greedy algorithm, the solution for making change for 36 is one 20 coin,
one 10 coin, one 5 coin, and one 1 coin (four coins in total).

Note: This is just one example, and other greedy choices could have been made at each step.
However, in this case, the greedy approach leads to the optimal solution.
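The steps above can be sketched in Python. This is a minimal illustration using the example’s denominations; note that greedy change-making is only guaranteed optimal for canonical coin systems such as this one.

```python
def greedy_change(amount, coins=(100, 50, 20, 10, 5, 2, 1)):
    """Repeatedly take the largest coin value <= the remaining amount."""
    result = []
    for coin in coins:  # coins listed in decreasing order
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result

print(greedy_change(36))  # [20, 10, 5, 1]
```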
The greedy algorithm is not always the optimal solution for every optimization problem, as
shown in the example below.

One such example where the Greedy Approach fails is finding the maximum-weight path
of nodes in the given graph.

Graph with weighted vertices

In the above graph, starting from the root node 10, if we greedily select the next node to
obtain the most weighted path, the next selected node will be 5, taking the total sum to 15;
the path then ends because 5 has no children. However, the path 10 -> 5 is not the
maximum-weight path.

Greedy Approach fails

In order to find the most weighted path, all possible path sums must be computed and
compared to get the desired result. The most weighted path in the above graph is
10 -> 1 -> 30, which gives the path sum 41.

Correct Approach

In such cases the Greedy approach doesn’t work; instead, complete paths from the root to
the leaf nodes have to be considered to get the correct answer, i.e., the most weighted path.
This can be achieved by recursively checking all the paths and calculating their weights.
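The recursive check of all root-to-leaf paths can be sketched as follows. The tuple-based tree encoding and the helper name `max_path_sum` are assumptions made for illustration; the data reconstructs the example, where greedy picks 10 -> 5 (sum 15) while the optimum is 10 -> 1 -> 30 (sum 41).

```python
def max_path_sum(value, children):
    """Return the best root-to-leaf sum by checking every complete path."""
    if not children:                 # leaf: the path ends here
        return value
    # Recurse into every child and keep the best complete path.
    return value + max(max_path_sum(v, ch) for v, ch in children)

# Node = (weight, list of child nodes); root 10 has children 5 and 1,
# and node 1 has child 30.
tree = (10, [(5, []), (1, [(30, [])])])
print(max_path_sum(*tree))  # 41
```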
Greedy Algorithm Vs Dynamic Programming
Below are the comparison of Greedy Algorithm and Dynamic Programming based on various
criteria:

Difference between Greedy Approach and Dynamic Programming

Write a short note on greedy strategy vs dynamic programming.

Greedy Approach:

The greedy approach makes the best choice at each step with the hope of finding a global
optimum solution.

It selects the locally optimal solution at each stage without considering the overall effect on
the solution.

Greedy algorithms are usually simple, easy to implement, and efficient, but they may not
always lead to the best solution.
Dynamic Programming:

Dynamic programming breaks down a problem into smaller subproblems and solves each
subproblem only once, storing its solution.

It uses the results of solved subproblems to build up a solution to the larger problem.

Dynamic programming is typically used when the same subproblems are being solved
multiple times, leading to inefficient recursive algorithms. By storing the results of
subproblems, dynamic programming avoids redundant computations and can be more
efficient.

Optimal Substructure Property in Dynamic Programming

A given problem is said to have the Optimal Substructure Property if its optimal solution
can be obtained by using the optimal solutions to its subproblems, instead of trying every
possible way to solve the subproblems. In other words, we can solve larger problems given
the solutions of smaller problems. Suppose we are given a complex problem; we break it
into simpler problems and combine their optimal solutions to find the optimal solution of
the given complex problem.

For example: f(n) = f(n-1) + f(n-2)

In the above example, the problem f(n) is broken into two smaller problems, i.e., f(n-1) and
f(n-2). These two smaller problems can also be broken further into smaller problems.

The above figure shows how problems are broken down into the sub-problems. This process
will continue till the problem cannot be further divided. Once the sub-problem is not further
divided, we will take the base cases to find the solution.
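A minimal sketch of this idea in Python, memoizing f(n) = f(n-1) + f(n-2) so each subproblem is solved only once; the use of `functools.lru_cache` is an implementation choice, not from the source.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):
    """f(n) = f(n-1) + f(n-2), with each subproblem solved only once."""
    if n <= 1:          # base cases: reached when the problem
        return n        # cannot be divided further
    return f(n - 1) + f(n - 2)

print(f(10))  # 55
```

Without memoization the same subproblems would be recomputed exponentially many times; caching their results is the core efficiency gain of dynamic programming.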
Matroid theory

Matroid theory is a powerful framework for studying abstract independence in combinatorial
optimization. It generalizes concepts from linear algebra and graph theory, enabling efficient
algorithms for complex problems across various fields.

Matroids consist of a ground set and independent subsets, satisfying key properties like the
exchange property. This structure allows for efficient optimization algorithms and provides a
unified approach to seemingly unrelated problems in areas like network design and coding
theory.

Fundamentals of matroid theory

• Matroid theory provides a powerful framework for studying abstract independence in
combinatorial optimization problems
• Generalizes concepts from linear algebra and graph theory to broader mathematical
structures
• Enables efficient algorithms for solving complex optimization problems in various fields

Matroid theory is a branch of combinatorics that studies the properties of matroids—
abstract structures that generalize the concepts of linear independence in vector spaces
and the dependence and independence of sets in various contexts. Matroids provide a
unified framework to study optimization problems, graph theory, and other combinatorial
objects.

• Mathematical structure consisting of a ground set and a collection of independent
subsets
• Captures abstract notion of independence found in various mathematical contexts
• Satisfies three key properties: empty set is independent, hereditary property, and
exchange property
• Allows for efficient optimization algorithms due to its structure
Applications of matroid theory

Combinatorial optimization problems

• Provides framework for modeling and solving various optimization problems
• Enables efficient algorithms for problems with matroid constraints
• Applications in scheduling, resource allocation, and network design
• Generalizes classical problems like minimum spanning tree and maximum matching
• Allows for unified approach to seemingly unrelated optimization tasks
Network design

• Uses graphic matroids to model network connectivity problems
• Solves minimum spanning tree problem efficiently using matroid greedy algorithm
• Addresses network reliability and redundancy issues using matroid connectivity
• Optimizes network flow problems using matroid intersection techniques
• Applications in telecommunications, transportation, and utility network design
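As a sketch of the matroid greedy algorithm on a graphic matroid, here is Kruskal’s minimum spanning tree algorithm in Python: edges form the ground set, and a set of edges is independent when it contains no cycle (checked here with union-find). The helper names and the sample edge list are illustrative assumptions.

```python
def kruskal(num_vertices, edges):
    """Minimum spanning tree via the matroid greedy algorithm."""
    parent = list(range(num_vertices))

    def find(x):
        # Find the component root, with path halving for efficiency.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):          # greedy: cheapest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                       # independent: adds no cycle
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

# Illustrative weighted edge list: (weight, u, v).
edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3)]
print(kruskal(4, edges))  # [(1, 1, 2), (2, 2, 3), (3, 0, 2)]
```

Sorting by weight and keeping every edge that preserves independence is exactly the greedy algorithm that matroid structure guarantees to be optimal.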
Coding theory

• Employs matroid theory in design and analysis of error-correcting codes
• Uses vector matroids to represent linear codes over finite fields
• Matroid operations help construct codes with desired properties
• Matroid duality relates to properties of dual codes
• Applications in data transmission, storage systems, and cryptography
Complexity Theory: Classes P and NP- Polynomial Time Reductions

Computational Complexity Theory

→ In computer science, computational complexity theory is the branch of the theory of
computation that studies the resources, or cost, of the computation required to solve a given
computational problem.
computational problem.
→ The relative computational difficulty of computable functions is the subject matter of
computational complexity.
→ Complexity theory analyzes the difficulty of computational problems in terms of many
different computational resources.
→ Example: Mowing grass has linear complexity because it takes double the time to mow
double the area. However, looking up something in a dictionary has only logarithmic
complexity because a double sized dictionary only has to be opened one time more (e.g.
exactly in the middle - then the problem is reduced to the half).

Complexity Classes in Computer Science

In computer science, some problems remain unsolved due to their inherent difficulty or resource
constraints. These problems are categorized into complexity classes, which group problems based
on the computational resources—time and space—required to solve them or verify their solutions.
This classification is fundamental in understanding the feasibility and efficiency of solving
computational problems.

A complexity class is the set of all of the computational problems which can be solved using a
certain amount of a certain computational resource.

o The complexity class P is the set of decision problems that can be solved by a deterministic
machine in polynomial time. This class corresponds to an intuitive idea of the problems which
can be effectively solved in the worst cases.

o The complexity class NP is the set of decision problems that can be solved by a non-deterministic
machine in polynomial time. This class contains many problems that people would like to be able
to solve effectively. All the problems in this class have the property that their solutions can be
checked effectively.

Key Resources in Complexity Theory

1. Time Complexity:
a. Describes the number of steps required to solve a problem or verify a solution.
b. Expressed as a function of the input size n, e.g., O(n), O(n^2), etc.
2. Space Complexity:
a. Measures the amount of memory required by an algorithm to solve a problem.
b. Includes memory for input, output, and any additional working space.

Deterministic (Turing) Machine

• Turing machines are basic symbol-manipulating devices that, despite their simplicity, can
simulate the logic of any computer, making them a fundamental model in computer science.
• They were described in 1936 by Alan Turing as a thought experiment to explore the limits
of mechanical computation, rather than as a practical computing technology.
• While Turing machines were never intended to be constructed, they play a crucial role in
understanding the abstract properties of computation, yielding insights into computer
science and complexity theory.
• Turing machines provide a precise definition of an algorithm or 'mechanical procedure,'
capturing the informal concept of effective methods in logic and mathematics.

• Studying Turing machines helps analyze the foundations of computation and the nature of
problem-solving within the limits of mechanical computation.

Nondeterministic (Turing) Machine

• A non-deterministic Turing machine (NTM) is a theoretical model in computer science
where the control mechanism operates like a non-deterministic finite automaton.
• Unlike a deterministic Turing machine (DTM), where a given state and tape symbol
uniquely determine the symbol to write, the direction to move the tape head, and the
subsequent state, an NTM allows multiple possible actions for the same state and symbol
combination.
• In an NTM, the lack of unique transitions enables the machine to explore multiple
computational paths simultaneously, making it a powerful conceptual tool for studying
computational complexity and decision problems.
• While NTMs are not physically realizable, they are essential for theoretical analyses, such
as understanding the relationship between deterministic and non-deterministic computation
(e.g., the P vs NP problem).

Types of Complexity Classes

In complexity theory, problems are categorized into various complexity classes based on the
computational resources required to solve or verify them. Below are some important complexity
classes with their definitions, features, and examples:

1. P Class (Polynomial Time)

Problems in P are decision problems (yes/no questions) that can be solved by a deterministic
Turing machine in polynomial time.

• P is the complexity class that includes decision problems solvable by a deterministic Turing
machine within a polynomial amount of computation time, also referred to as polynomial
time.
• Problems in P are often considered "efficiently solvable" or "tractable," distinguishing
them from problems that, while solvable in theory, are impractical to compute in reality,
known as intractable problems.
• Some problems in P, despite being in polynomial time, are impractical for real-world use
due to extremely large polynomial exponents (e.g., requiring n^{1,000,000} operations).
• P includes many naturally occurring problems, such as the decision versions of linear
programming, finding the greatest common divisor, and determining maximum matchings.

• In 2002, it was proven that determining if a number is prime belongs to P, showcasing the
class's relevance to foundational computational problems.

Features:

• The solutions are easy to find and verify.
• Problems in P are tractable (solvable both in theory and practice).
• Examples include basic computational problems.
Examples:

• Calculating the greatest common divisor (Euclid’s algorithm).
• Finding a maximum matching in a graph.
• Merge Sort.
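The first example above, Euclid’s algorithm, can be sketched in a few lines; its running time is polynomial in the number of digits of the inputs, which is why the problem sits in P.

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```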

2. NP Class (Nondeterministic Polynomial Time)


Problems in NP are decision problems whose solutions can be verified by a deterministic Turing
machine in polynomial time. These problems are "hard to solve but easy to verify."

• NP (Non-deterministic Polynomial time) is the class of decision problems that can be
solved in polynomial time using a non-deterministic Turing machine.
• Alternatively, it can be defined as the set of problems whose solutions can be verified in
polynomial time by a deterministic Turing machine.
• A characteristic feature of problems in NP is that their solutions, once presented, can be
checked effectively and efficiently.
• NP contains many important and challenging problems that are widely studied, including
the Boolean satisfiability problem (SAT), the Hamiltonian path problem (a special case of
the Traveling Salesman Problem), and the Vertex Cover problem.
• While solving NP problems efficiently remains an open challenge, they play a central role
in computational complexity theory, particularly in understanding the relationship between
P and NP.

Features:

• Solutions are generated non-deterministically (guessing and checking).
• If a solution is given, it can be verified in polynomial time.
Examples:
• Boolean Satisfiability Problem (SAT).
• Hamiltonian Path Problem.
• Graph Coloring Problem.
Example Illustration:
Imagine assigning 200 employees to 200 rooms, ensuring that no two incompatible employees
share a room. While generating such an arrangement is difficult, verifying a proposed solution is
straightforward.
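The "hard to find, easy to verify" asymmetry above can be sketched as follows; the function name `verify_assignment` and the toy data are illustrative assumptions.

```python
def verify_assignment(rooms, incompatible):
    """Check a proposed assignment in polynomial time.

    rooms: list of (employee, employee) pairs sharing a room.
    incompatible: list of (employee, employee) pairs that must not share.
    """
    bad = {frozenset(p) for p in incompatible}
    # One pass over the proposed rooms suffices to verify the certificate.
    return all(frozenset(pair) not in bad for pair in rooms)

incompatible = [("alice", "bob")]
print(verify_assignment([("alice", "carol"), ("bob", "dave")], incompatible))  # True
print(verify_assignment([("alice", "bob"), ("carol", "dave")], incompatible))  # False
```

Finding a valid assignment may require searching an exponential number of pairings, but checking any proposed one is a single linear scan.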

3. Co-NP Class (Complement of NP)


Problems in Co-NP are decision problems where a "no" answer can be verified in polynomial time.
Essentially, Co-NP is the complement of NP.
Features:

• If X is in NP, its complement X′ is in Co-NP.
• Verification involves checking specific proofs for "no" answers in polynomial time.
Examples:

• Checking if a number is composite (complement of primality testing).
• Integer Factorization.

4. NP-Hard Class
A problem is NP-hard if it is at least as hard as the hardest problem in NP, meaning every problem
in NP can be reduced to it in polynomial time. However, NP-hard problems are not necessarily in
NP.
Features:

• Solving an NP-hard problem implies solving all NP problems.
• Verifying a solution may take longer than polynomial time.
Examples:

• Halting Problem (determining whether a program will terminate).
• Satisfiability of Quantified Boolean Formulas.
• Determining if a graph has no Hamiltonian cycle.

5. NP-Complete Class
A problem is NP-complete if it is both in NP and NP-hard. These are the most challenging
problems in NP.

• NP-complete problems are the most challenging problems within the class NP, as they
are believed to be the least likely to belong to P, meaning they cannot be solved in
polynomial time using a deterministic Turing machine.
• If any NP-complete problem could be solved in polynomial time, it would imply that P =
NP, allowing all NP problems to be solved efficiently.
• Currently, all known algorithms for NP-complete problems require superpolynomial time
in terms of the input size, making them computationally expensive for large inputs.
• Common strategies for tackling NP-complete problems involve approaches like
approximation, probabilistic algorithms, focusing on special cases, or using heuristic
methods to find near-optimal or practical solutions.

Features:

• Any problem in NP can be reduced to an NP-complete problem in polynomial time.
• If an NP-complete problem can be solved in polynomial time, all NP problems can also be
solved in polynomial time (P=NP).
Examples:

• SAT (Boolean Satisfiability Problem).
• Traveling Salesman Problem (decision version).
• Knapsack Problem.

Hierarchy of Complexity Classes



Polynomial-time reduction

Polynomial-time reductions are a key tool in complexity theory, helping us classify problems
based on their difficulty. They let us transform one problem into another efficiently, showing that
if we can solve one, we can solve the other.

These reductions are crucial for proving NP-completeness. By reducing a known NP-complete
problem to a new one, we can show the new problem is just as hard. This helps us identify a whole
class of equally challenging problems.

Converting instances of one problem to another, known as reduction or polynomial time reduction,
enables us to compare the computational complexity of two problems. The basic tenet of reduction
is establishing a correlation between the difficulty of solving problems A and B. We can learn
more about the relative complexity of problems A and B if we can demonstrate that Problem A is
not more difficult than Problem B.

→ If we have two decision problems, A and B, we say that A reduces to B in polynomial time
(denoted as A ≤p B) if there exists a polynomial-time algorithm that transforms instances
of problem A into instances of problem B such that the answer to the transformed instance
of B is the same as the answer to the original instance of A.
→ The goal of polynomial-time reduction is to prove that if we can solve problem B in
polynomial time, then we can solve problem A in polynomial time as well. This is often
used to show that problem B is at least as hard as problem A.

Steps in Polynomial-Time Reduction:

1. Transformation: The input of problem A is transformed into an input of problem B in
polynomial time.
2. Solution Mapping: The solution to problem B is then mapped back to a solution for
problem A.
3. Correctness: The transformation and mapping must preserve the correctness of the
problem. If the original instance of A is a "yes" instance (i.e., has a solution), the
transformed instance of B should also be a "yes" instance, and the same goes for "no"
instances.

Example:

A classic example of polynomial-time reduction is proving that the 3-SAT problem is NP-
complete. To show that 3-SAT is NP-complete, you reduce it from another NP-complete problem,
like SAT (Boolean satisfiability). You demonstrate that any instance of SAT can be transformed
into an equivalent instance of 3-SAT in polynomial time, showing that solving 3-SAT is at least
as hard as solving SAT.
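The standard clause-splitting construction behind this reduction can be sketched as follows; the function name `to_3sat`, the string encoding of literals (with `-` marking negation), and the auxiliary-variable naming `y1, y2, ...` are illustrative assumptions.

```python
def to_3sat(clauses):
    """Split every clause (l1 v l2 v ... v lk) with k > 3 into 3-literal
    clauses chained together with fresh auxiliary variables y_i.
    Runs in polynomial time: one pass over the clauses."""
    out, fresh = [], 0
    for clause in clauses:
        if len(clause) <= 3:
            out.append(list(clause))      # short clauses are kept as-is
            continue
        fresh += 1
        y = f"y{fresh}"
        out.append([clause[0], clause[1], y])
        rest = clause[2:]
        while len(rest) > 2:
            fresh += 1
            z = f"y{fresh}"
            out.append([f"-{y}", rest[0], z])
            y, rest = z, rest[1:]
        out.append([f"-{y}", rest[0], rest[1]])
    return out

print(to_3sat([["a", "b", "c", "d", "e"]]))
# [['a', 'b', 'y1'], ['-y1', 'c', 'y2'], ['-y2', 'd', 'e']]
```

The transformed formula is satisfiable exactly when the original is, which is the correctness condition a polynomial-time reduction must preserve.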

Use in NP-Completeness:

• NP-Complete Problems: If a known NP-complete problem A can be reduced to problem B
in polynomial time, and B is in NP, then B is also NP-complete. This is a fundamental
technique in computational complexity theory.
• Transitivity: Polynomial-time reductions are transitive, meaning if A≤p B and B≤p C,
then A≤p C.

Polynomial-time reductions are crucial tools for understanding the relationships between problems
and determining the computational hardness of problems.
