ALGORITHMS

The document provides an overview of matroids, which are structures that generalize linear independence in vector spaces, detailing their properties, types, and examples such as uniform, linear, and graphic matroids. It also discusses the greedy algorithm, its components, and its applications in optimization problems, along with the history and characteristics of greedy approaches. Additionally, the document covers graph matching, including concepts like maximal and maximum matching, and the significance of graph matching in fields like computer vision.


MATROIDS:

A matroid is a structure that abstracts and generalizes the notion of linear independence in vector spaces.

A matroid is a pair ⟨X, I⟩ where X is called the ground set and I is the set of all independent subsets of X.

In other words, a matroid ⟨X, I⟩ classifies each subset of X as either independent or dependent (included in I or not included in I).

Of course, we are not speaking about arbitrary classifications.


These three properties must hold for any matroid:
1. The empty set is independent.
2. Any subset of an independent set is independent.
3. If an independent set A is smaller than an independent set B, there exists at least one element of B that can be added to A without losing independence.
These are the axiomatic properties of a matroid. To prove that we are dealing with a matroid, we generally have to verify these three properties. For example, an explicit presentation of a matroid on the ground set {x, y, z} could declare {y, z} and {x, y, z} dependent and all other subsets independent.
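To make the axioms concrete, here is a minimal Python sketch (the names and brute-force checks are illustrative, not from the original slides) that encodes this small explicit matroid and verifies the three properties:

```python
from itertools import combinations

# Ground set and the explicitly listed independent subsets from the example:
# every subset of {x, y, z} except {y, z} and {x, y, z} is independent.
X = frozenset("xyz")
independent = {frozenset(s) for r in range(len(X) + 1)
               for s in combinations(X, r)} - {frozenset("yz"), frozenset("xyz")}

def is_matroid(ground, indep):
    # Axiom 1: the empty set is independent.
    if frozenset() not in indep:
        return False
    # Axiom 2: every subset of an independent set is independent.
    for A in indep:
        if any(frozenset(s) not in indep
               for r in range(len(A)) for s in combinations(A, r)):
            return False
    # Axiom 3 (exchange): if |A| < |B|, some element of B \ A extends A.
    for A in indep:
        for B in indep:
            if len(A) < len(B) and not any(A | {e} in indep for e in B - A):
                return False
    return True

print(is_matroid(X, independent))  # True for this example
```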
There are matroids of different types. Here are some examples:
Uniform matroid: A matroid that considers a subset S independent if the size of S is not greater than some constant k (|S| ≤ k).

This is the simplest one: the matroid does not distinguish elements of the ground set in any way, it only cares about the number of elements taken.

All subsets of size k are bases for this matroid, and all subsets of size (k+1) are its circuits.

We can also define some specific cases of uniform matroid.


Trivial matroid: k = 0. Only the empty set is considered independent; any element of the ground set is considered dependent (and, as a consequence, any combination of ground set elements is also dependent).

Complete matroid: k = |X|. All subsets are considered independent, including the complete ground set itself.
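A hedged sketch of a uniform-matroid independence oracle; k is the matroid's parameter, and the trivial and complete matroids fall out as the special cases k = 0 and k = |X|:

```python
def uniform_oracle(k):
    """Return an independence oracle for the uniform matroid:
    a subset S is independent iff |S| <= k."""
    return lambda S: len(S) <= k

X = {1, 2, 3, 4}
trivial  = uniform_oracle(0)        # only the empty set is independent
complete = uniform_oracle(len(X))   # every subset is independent

print(uniform_oracle(2)({1, 3}))     # True:  |S| = 2 <= k
print(uniform_oracle(2)({1, 3, 4}))  # False: a circuit of size k + 1
```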
Linear (algebra) matroid: The ground set consists of vectors of some vector space.

A set of vectors is considered independent if it is linearly independent (no vector can be expressed as a linear combination of the other vectors from that set).

This is the matroid from which the whole of matroid theory originates. Linear bases of the vector set are bases of the matroid.

Any circuit of this matroid is a set of vectors in which each vector can be expressed as a linear combination of the other vectors, and this combination involves all of the other vectors in the circuit.
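A minimal sketch of a linear-matroid independence oracle using NumPy: a set of vectors is independent exactly when the matrix they form has rank equal to the number of vectors (a robust implementation would use exact arithmetic or an explicit tolerance):

```python
import numpy as np

def linearly_independent(vectors):
    """Independence oracle for the linear matroid: the chosen vectors are
    independent iff the matrix they form has full column rank."""
    if not vectors:
        return True  # the empty set is always independent
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

ground = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
print(linearly_independent(ground[:2]))  # True
print(linearly_independent(ground))      # False: a circuit, since z = x + y
```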
Colorful matroid: The ground set consists of colored elements. Each element has exactly one color.

A set of elements is independent if no two included elements share a color.

The rank of a set is the number of different colors included in the set.

Bases of this matroid are sets that have exactly one element of each color.

Circuits of this matroid are all possible pairs of elements of the same color.
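A short sketch of the colorful-matroid independence oracle and rank; the color mapping is an assumed input, not something defined in the slides:

```python
def colorful_independent(S, color):
    """A set is independent iff no two of its elements share a color."""
    colors = [color[e] for e in S]
    return len(colors) == len(set(colors))

def colorful_rank(S, color):
    """The rank of a set is the number of distinct colors it contains."""
    return len({color[e] for e in S})

color = {"a": "red", "b": "red", "c": "blue"}
print(colorful_independent({"a", "c"}, color))  # True
print(colorful_independent({"a", "b"}, color))  # False: a circuit
print(colorful_rank({"a", "b", "c"}, color))    # 2
```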
Graphic matroid:

This matroid is defined on the edges of some undirected graph. A set of edges is independent if it does not contain a cycle.

This type of matroid is the best one for showing visual examples, because it can include dependent subsets of large size and can still be drawn as a picture.

If the graph is connected, then any basis of this matroid is just a spanning tree of the graph.

If the graph is not connected, then a basis is a forest of spanning trees that includes one spanning tree for each connected component.

Circuits are the simple cycles of the graph.

An independence oracle for this matroid type can be implemented with DFS or BFS (start from each vertex of the graph and check that no edge connects a vertex with an already visited one) or with a DSU (keep connected components, start with disjoint vertices, join along all edges, and ensure that each edge connects two different components when it is added).
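A hedged sketch of the DSU-based oracle just described; edges are given as vertex pairs, and a set of edges is independent iff no edge joins two vertices that are already in the same component:

```python
def acyclic(edges, n):
    """Graphic-matroid oracle: the edge set is independent iff it is a forest.
    Vertices are 0..n-1; uses a disjoint-set union with path halving."""
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:              # both endpoints already connected: a cycle
            return False
        parent[ru] = rv           # union the two components

    return True

print(acyclic([(0, 1), (1, 2), (2, 3)], 4))  # True: a tree
print(acyclic([(0, 1), (1, 2), (2, 0)], 4))  # False: a triangle
```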

(Figure omitted: an example of the circuit-combination property in a graphic matroid.)


Truncated matroid:
We can limit the rank of any matroid by some number k without breaking the matroid properties.
For example, a basis of a truncated colorful matroid is a set of elements that includes no more than k different colors, with each color appearing at most once.
A basis of a truncated graphic matroid is an acyclic set of edges that leaves at least (n − k) connected components in the graph (where n is the number of vertices in the graph).
This is possible because the third matroid property refers not only to the bases of the matroid but to any independent set: when all independent sets of size greater than k are removed simultaneously, the independent sets of size k become the new bases, and any smaller independent set can still find an element of each basis that can be added to it.
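A small sketch of truncation as a wrapper around any independence oracle (the oracle-as-a-function interface is an assumption carried over from the earlier sketches):

```python
def truncate(oracle, k):
    """Turn an independence oracle into the oracle of the matroid
    truncated to rank k: independent iff independent before and |S| <= k."""
    return lambda S: len(S) <= k and oracle(S)

# Example: truncating a uniform matroid with k = 5 down to rank 2.
base = lambda S: len(S) <= 5
trunc = truncate(base, 2)
print(trunc({1, 2}))     # True
print(trunc({1, 2, 3}))  # False
```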
Matroid on a subset of the ground set.
We can restrict the ground set of a matroid to a subset of it without breaking the matroid properties.
This is possible because the rules of dependence do not rely on specific elements being in the ground set.
If we remove an edge from a graph, we still have a valid graph. If we remove an element from a set (of vectors or of colored elements), we still get a valid set of elements of the same type, and the rules are preserved.
Now we can also define the rank of a subset in a matroid as the rank of the matroid whose ground set is restricted to this subset.
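Because of the exchange property, the rank of a subset can be computed by greedily extending an independent set inside it; a sketch, reusing the same oracle-as-a-function convention:

```python
def rank(subset, oracle):
    """Rank of a subset: the size of any maximal independent set inside it.
    Greedy extension is correct for matroids thanks to the exchange axiom."""
    chosen = set()
    for e in subset:
        if oracle(chosen | {e}):
            chosen.add(e)
    return len(chosen)

color = {"a": "red", "b": "red", "c": "blue"}
colorful = lambda S: len({color[e] for e in S}) == len(S)
print(rank({"a", "b", "c"}, colorful))  # 2: one element per color
```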
Expanded matroid:
Direct matroid sum. We can treat two matroids as one big matroid without any difficulty if the elements of the ground set of the first matroid do not affect independence in, and do not intersect with, the ground set of the second matroid, and vice versa.

If we consider two graphic matroids on two connected graphs, we can unite their graphs into one graph with two connected components, and it is clear that including edges from one component has no effect on the other component.

This is called the direct matroid sum.


Formally,

M1 = ⟨X1, I1⟩,

M2 = ⟨X2, I2⟩,

M1 + M2 = ⟨X1 ∪ X2, I1 × I2⟩,

where × means the Cartesian product of two sets: an independent set of the sum is the union of an independent set from M1 and an independent set from M2. We can unite as many matroids, of as many different types, as we like, without restrictions.
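A hedged sketch of the direct-sum oracle: split the queried set by which ground set each element belongs to and ask each matroid's oracle separately (the ground sets are assumed disjoint, as the definition requires):

```python
def direct_sum(oracle1, ground1, oracle2, ground2):
    """Oracle of M1 + M2 on disjoint ground sets: a set is independent iff
    its part inside each ground set is independent in that matroid."""
    def oracle(S):
        part1 = {e for e in S if e in ground1}
        part2 = {e for e in S if e in ground2}
        return oracle1(part1) and oracle2(part2)
    return oracle

# Example: uniform matroid of rank 1 on {a, b} plus uniform of rank 2 on {c, d, e}.
m = direct_sum(lambda S: len(S) <= 1, {"a", "b"},
               lambda S: len(S) <= 2, {"c", "d", "e"})
print(m({"a", "c", "d"}))  # True
print(m({"a", "b", "c"}))  # False: {a, b} is dependent in the first matroid
```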


Greedy Algorithm:
In a greedy algorithm, a set of resources is recursively divided based on the maximum, immediate availability of that resource at any given stage of execution. To solve a problem with the greedy approach, there are two stages:
1. Scanning the list of items
2. Optimization
These stages are covered in parallel as the array is divided.
To understand the greedy approach, we need a working knowledge of recursion and context switching. This helps us trace the code.
Two conditions define the greedy paradigm:
1. Each stepwise solution must structure the problem towards its best-accepted solution.
2. It is sufficient if the structuring of the problem can halt in a finite number of greedy steps.
History of Greedy Algorithms:
Here are some important landmarks of greedy algorithms:

1. Greedy algorithms were conceptualized for many graph-walk algorithms in the 1950s.

2. Edsger Dijkstra conceptualized the algorithm to generate minimal spanning trees. He aimed to shorten the span of routes within the Dutch capital, Amsterdam.

3. In the same decade, Prim and Kruskal achieved optimization strategies that were based on minimizing path costs along weighted routes.

4. Later, Cormen, Leiserson, Rivest, and Stein presented a recursive substructuring of greedy solutions in their classic Introduction to Algorithms textbook.

5. The greedy paradigm was registered as a distinct type of optimization strategy in the NIST records in 2005.

6. To date, protocols that run the web, such as Open Shortest Path First (OSPF) and many other network packet-switching protocols, use the greedy strategy to minimize time spent on a network.


Components of Greedy Algorithm:
Greedy algorithms have the following five components:
1. A candidate set − A solution is created from this set.
2. A selection function − Used to choose the best candidate to be added
to the solution.
3. A feasibility function − Used to determine whether a candidate can be
used to contribute to the solution.
4. An objective function − Used to assign a value to a solution or a partial
solution.
5. A solution function − Used to indicate whether a complete solution has been reached.
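A generic sketch wiring these five components together; the parameter names mirror the list above and are illustrative, not a standard API:

```python
def greedy(candidates, select, feasible, objective, is_solution):
    """Generic greedy skeleton: repeatedly pick the best feasible candidate
    until a complete solution is reached or the candidates run out."""
    solution = []
    remaining = list(candidates)
    while remaining and not is_solution(solution):
        best = select(remaining)               # selection function
        remaining.remove(best)
        if feasible(solution + [best]):        # feasibility function
            solution.append(best)
    return solution, objective(solution)       # objective function

# Toy example: keep taking the largest number while the total stays <= 10.
items = [7, 5, 4, 3, 1]
sol, value = greedy(items,
                    select=max,
                    feasible=lambda s: sum(s) <= 10,
                    objective=sum,
                    is_solution=lambda s: sum(s) == 10)
print(sol, value)  # [7, 3] 10
```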
Characteristics of the Greedy
Approach:
The important characteristics of a greedy method are:

1. There is an ordered list of resources, with costs or value attributions. These quantify constraints on a system.

2. We take the maximum quantity of resources in the time a constraint applies.

3. For example, in an activity scheduling problem, the resource costs are in hours, and the activities need to be performed in serial order.
Here are the reasons for using the
greedy approach:
1. The greedy approach has a few tradeoffs, which may make it suitable
for optimization.
2. One prominent reason is to achieve the most feasible solution
immediately. In the activity selection problem, if more activities can be
done before finishing the current activity, these activities can be
performed within the same time.
3. Another reason is to divide a problem recursively based on a condition, with no need to combine all the solutions.
4. In the activity selection problem, the "recursive division" step is achieved by scanning a list of items only once and considering certain activities.
How to solve the activity selection
problem
In the activity scheduling example, there is a "start" and a "finish" time for every activity. Each activity is indexed by a number for reference. There are two activity categories.
1. Considered activity: the activity that serves as the reference from which the ability to do more than one remaining activity is analyzed.
2. Remaining activities: activities at one or more indexes ahead of the considered activity.
The total duration gives the cost of performing the activity; that is, (finish − start) gives us the duration as the cost of an activity. The greedy extent is the number of remaining activities we can perform within the time of a considered activity.
Architecture of the Greedy
approach:
STEP 1: Scan the list of activity costs, starting with index 0 as the considered index.
STEP 2: When more activities can be finished by the time the considered activity finishes, start searching for one or more remaining activities.
STEP 3: If there are more remaining activities, the current remaining activity becomes the next considered activity. Repeat steps 1 and 2 with the new considered activity. If there are no remaining activities left, go to step 4.
STEP 4: Return the union of considered indices. These are the activity indices that will be used to maximize throughput.
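A compact sketch of the classic greedy activity-selection algorithm these steps describe: sort by finish time and repeatedly take the first activity that starts after the previously chosen one finishes (the activity data are made up for illustration):

```python
def select_activities(activities):
    """activities: list of (start, finish) pairs.
    Greedy rule: always take the compatible activity that finishes earliest."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:      # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(acts))  # [(1, 4), (5, 7), (8, 11)]
```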
Disadvantages of Greedy Algorithms
 It is not suitable for problems where a solution is required for every sub-problem, such as sorting.
 In such problems, the greedy strategy can be wrong; in the worst case it can even lead to a non-optimal solution.
 Therefore, the disadvantage of greedy algorithms is not knowing what lies ahead of the current greedy state. The following example illustrates this disadvantage of the greedy approach.
In a greedy scan depicted as a tree (the higher the value, the higher the greed), an algorithm state at value 40 is likely to take 29 as the next value, and its quest then ends at 12. This amounts to a value of 41. However, if the algorithm took the seemingly sub-optimal first step, 25 would be followed by 40, and the overall value would be 65, which is 24 points higher than the greedy decision.
Examples of Greedy Algorithms
Most networking algorithms use the greedy approach. Here is a list of a few of them:
1. Prim's Minimal Spanning Tree Algorithm
2. Travelling Salesman Problem
3. Graph - Map Coloring
4. Kruskal's Minimal Spanning Tree Algorithm
5. Dijkstra's Shortest Path Algorithm
6. Graph - Vertex Cover
7. Knapsack Problem
8. Job Scheduling Problem
GRAPH MATCHING:
Graph matching is the problem of finding a similarity between graphs.



Graphs are commonly used to encode structural information in many
fields, including computer vision and pattern recognition, and graph
matching is an important tool in these areas.
In these areas it is commonly assumed that the comparison is between the data graph and the model graph. The case of exact graph matching is known as the graph isomorphism problem.



The problem of exactly matching a graph to a part of another graph is called the subgraph isomorphism problem.
Inexact graph matching
It refers to matching problems where exact matching is impossible, e.g., when the numbers of vertices in the two graphs are different.

In this case it is required to find the best possible match.

For example, in image recognition applications, the results of image segmentation in image processing typically produce data graphs whose numbers of vertices are much larger than in the model graphs they are expected to match against.

In the case of attributed graphs, even if the numbers of vertices and edges are the same, the matching may still be only inexact.

Two categories of search methods are those based on identifying possible and impossible pairings of vertices between the two graphs, and methods which formulate graph matching as an optimization problem.

Graph edit distance is one of the similarity measures suggested for graph matching.

This class of algorithms is called error-tolerant graph matching.

Definition
A matching graph is a sub-graph of a graph in which no two edges are adjacent to each other. Simply put, there should not be any common vertex between any two edges.
Matching
Let ‘G’ = (V, E) be a graph. A subgraph is called a matching M(G) if each vertex of G is incident with at most one edge in M, i.e., deg(V) ≤ 1 ∀ V ∈ G. This means that in the matching graph M(G), the vertices should have a degree of 1 or 0, and the edges should be incident from the graph G.
• Notation − M(G)
• Example:
• In a matching, if deg(V) = 1 then V is said to be matched; if deg(V) = 0 then V is not matched. In a matching, no two edges are adjacent. This is because if any two edges were adjacent, then the vertex joining those two edges would have a degree of 2, which violates the matching rule.
• Maximal Matching
• A matching M of a graph ‘G’ is said to be maximal if no other edge of ‘G’ can be added to M (a greedy construction is sketched below).
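A small sketch of the greedy construction: scan the edges and take any edge whose endpoints are both still unmatched (the graph data are illustrative):

```python
def maximal_matching(edges):
    """Greedily build a maximal matching: an edge is added whenever neither
    of its endpoints is already covered. The result cannot be extended,
    but it is not necessarily a maximum matching."""
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(maximal_matching(edges))  # [(1, 2), (3, 4)]
```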
Maximum Matching

It is also known as the largest maximal matching. A maximum matching is defined as a maximal matching with the maximum number of edges.

The number of edges in a maximum matching of ‘G’ is called its matching number.

For the graph given in the example above, M1 and M2 are maximum matchings of ‘G’, and its matching number is 2. Hence, using the graph G, we can form sub-graphs with at most 2 matching edges, so the matching number is two.
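A brute-force sketch that finds a maximum matching (and hence the matching number) of a small graph by trying every subset of edges; real implementations use augmenting-path algorithms, but this exhaustive version is enough to illustrate the definition:

```python
from itertools import combinations

def is_matching(edge_set):
    """True iff no two edges in the set share a vertex (deg <= 1 everywhere)."""
    seen = set()
    for u, v in edge_set:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def maximum_matching(edges):
    """Exhaustive search: return a matching with the largest number of edges."""
    for r in range(len(edges), 0, -1):
        for subset in combinations(edges, r):
            if is_matching(subset):
                return list(subset)
    return []

# A 4-cycle: the matching number is 2.
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
m = maximum_matching(edges)
print(m, len(m))  # [(1, 2), (3, 4)] 2
```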
Perfect Matching
A matching M of a graph G is said to be a perfect matching if every vertex of the graph G is incident to exactly one edge of the matching M,
• i.e., deg(V) = 1 ∀ V. Every vertex in the subgraph should have a degree of exactly 1.
• Perfect Matching - Example
• In the following graphs, M1 and M2 are examples of perfect matchings of G.
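A one-function sketch of the perfect-matching condition: the matching must cover every vertex of the graph exactly once:

```python
def is_perfect_matching(matching, vertices):
    """True iff the matching covers every vertex exactly once."""
    covered = [v for edge in matching for v in edge]
    return len(covered) == len(set(covered)) and set(covered) == set(vertices)

vertices = {1, 2, 3, 4}
print(is_perfect_matching([(1, 2), (3, 4)], vertices))  # True
print(is_perfect_matching([(1, 2)], vertices))          # False: 3 and 4 uncovered
```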
