
Divide and Conquer

●	It is a strategy inspired by the idea of breaking a large task into smaller, manageable parts.
●	It was originally used in politics as "Divide et impera" (divide and rule) by Roman rulers.
●	In computer science, it is a method for solving complex problems efficiently.
●	Steps:
	●	Divide: Split the main problem into smaller, non-overlapping subproblems.
	●	Conquer: Solve each of these smaller subproblems (usually using recursion).
	●	Combine: Merge the solutions of the subproblems to solve the original problem.
●	Applications: Used in algorithms for sorting (Merge Sort, Quick Sort), searching (Binary Search), and computational geometry.
●	Efficiency: Reduces time complexity, often achieving O(n log n), making it suitable for large datasets.
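To make the Divide / Conquer / Combine steps concrete, here is a minimal merge sort sketch in Python (merge sort is one of the applications listed above; the function names and the sample list are illustrative):

def merge_sort(arr):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # Divide + Conquer: sort each half recursively
    right = merge_sort(arr[mid:])
    return merge(left, right)       # Combine the two sorted halves

def merge(left, right):
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))   # [1, 2, 5, 7, 9]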

Matrix chain multiplication problem

●	Matrix chain multiplication (or Matrix Chain Ordering Problem, MCOP) is an optimization problem that can be solved using dynamic programming.
●	MCOP helps to find the most efficient order (parenthesization) in which to multiply a given sequence of matrices.
●	Note that MCOP only decides the best order for the chain; multiplying two individual matrices can itself be sped up with Strassen's matrix multiplication, described next.
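As a sketch of the dynamic-programming idea, the classic O(n^3) chain-order DP can be written as follows (matrix i has shape dims[i] x dims[i+1]; the function name and the example dimensions are assumptions of this sketch):

def matrix_chain_order(dims):
    # dims has length n+1 for a chain of n matrices.
    n = len(dims) - 1
    # cost[i][j] = minimum scalar multiplications to compute the product of matrices i..j
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):          # chain length
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)
            )
    return cost[0][n - 1]

# Three matrices: 10x20, 20x30, 30x40; the best order is (AB)C.
print(matrix_chain_order([10, 20, 30, 40]))   # 18000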

Strassen's matrix multiplication

Algorithm
Strassen(A, B):
    if size(A) == 1:
        return A[0][0] * B[0][0]

    Divide A into A11, A12, A21, A22
    Divide B into B11, B12, B21, B22

    M1 = Strassen(A11 + A22, B11 + B22)
    M2 = Strassen(A21 + A22, B11)
    M3 = Strassen(A11, B12 - B22)
    M4 = Strassen(A22, B21 - B11)
    M5 = Strassen(A11 + A12, B22)
    M6 = Strassen(A21 - A11, B11 + B12)
    M7 = Strassen(A12 - A22, B21 + B22)

    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6

    Combine C11, C12, C21, C22 into C
    return C
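A runnable Python version of the pseudocode above, assuming NumPy is available and that A and B are square matrices whose size is a power of two:

import numpy as np

def strassen(A, B):
    n = A.shape[0]
    if n == 1:                      # base case: 1x1 matrices
        return A * B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    # Stitch the four quadrants back together into C.
    return np.vstack((np.hstack((C11, C12)), np.hstack((C21, C22))))

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(strassen(A, B))   # matches A @ B: [[19 22] [43 50]]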
Representation of Graph
1. Adjacency Matrix
2. Linked List (Adjacency List)
3. Incidence Matrix

Graph Traversal
●	A graph is represented by its nodes and edges, so visiting every node is what we mean by traversing the graph.
●	There are two standard ways of traversing a graph: 1. Depth-first search  2. Breadth-first search

DFS
●	DFS is a searching/traversing algorithm in which we use the stack data structure.
●	In DFS, we first go as deep as possible along one branch before moving to the breadth at that level. Time complexity: O(V + E); space complexity: O(V).
Numerical of DFS

●	Adjacency List
●	1→2,7   2→3   3→5,4,1
●	4→6   5→4   6→2,5,1   7→3,6

●	Initially mark all vertices as unvisited.
●	Stack = [1]: Pop 1, mark vis[1]=1, push 2, 7
	○	DFS = [1], Stack = [2, 7]
●	Stack = [2, 7]: Pop 7, mark vis[7]=1, push 3, 6
	○	DFS = [1, 7], Stack = [2, 3, 6]
●	Stack = [2, 3, 6]: Pop 6, mark vis[6]=1, push 2, 5 (1 is already visited)
	○	DFS = [1, 7, 6], Stack = [2, 3, 2, 5]
●	Stack = [2, 3, 2, 5]: Pop 5, mark vis[5]=1, push 4
	○	DFS = [1, 7, 6, 5], Stack = [2, 3, 2, 4]
●	Stack = [2, 3, 2, 4]: Pop 4, mark vis[4]=1, no unvisited neighbours to push
	○	DFS = [1, 7, 6, 5, 4], Stack = [2, 3, 2]
●	Stack = [2, 3, 2]: Pop 2, mark vis[2]=1, push 3
	○	DFS = [1, 7, 6, 5, 4, 2], Stack = [2, 3, 3]
●	Stack = [2, 3, 3]: Pop 3, mark vis[3]=1, no unvisited neighbours to push
	○	DFS = [1, 7, 6, 5, 4, 2, 3], Stack = [2, 3]
●	Stack = [2, 3]: Pop 3, already visited
	○	DFS = [1, 7, 6, 5, 4, 2, 3], Stack = [2]
●	Stack = [2]: Pop 2, already visited
	○	DFS = [1, 7, 6, 5, 4, 2, 3], Stack = []
DFS Algorithm
DFS(Graph, Start):
    1. Initialize a stack
    2. Mark all nodes as unvisited
    3. Push the Start node onto the stack

    4. While the stack is not empty:
        a. Pop a node from the stack
        b. If the node is not visited:
            i.   Mark the node as visited
            ii.  Process the node (e.g., print or store it)
            iii. For each neighbor of the node (in reverse order):
                 - If the neighbor is not visited:
                       Push the neighbor onto the stack
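A minimal Python version of this iterative DFS, using the adjacency list from the numerical example (the dictionary representation and function name are choices of this sketch, not the notes'):

def dfs(graph, start):
    visited, order = set(), []
    stack = [start]
    while stack:
        node = stack.pop()                 # pop from the top of the stack
        if node not in visited:
            visited.add(node)
            order.append(node)             # process the node
            for neighbor in reversed(graph[node]):
                if neighbor not in visited:
                    stack.append(neighbor)
    return order

graph = {1: [2, 7], 2: [3], 3: [5, 4, 1], 4: [6], 5: [4], 6: [2, 5, 1], 7: [3, 6]}
print(dfs(graph, 1))   # [1, 2, 3, 5, 4, 6, 7]

Pushing neighbours in reverse order (step iii of the pseudocode) makes them come off the stack in their listed order, so this prints [1, 2, 3, 5, 4, 6, 7]; the hand trace above pushed neighbours as listed and therefore produced [1, 7, 6, 5, 4, 2, 3]. Both are valid DFS orders of the same graph.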
Breadth-first search [BFS]

●	BFS is a searching/traversing algorithm in which we use the queue data structure.
●	In BFS, we first cover the whole breadth at the current level before going deeper.
●	Time complexity: O(V + E); space complexity: O(V).

Numerical of BFS

●	Adjacency List:
●	A→F,C,B   B→G,C   C→F   D→C   E→D,C,J
●	F→D   G→C,E   J→D,K   K→E,G

●	Queue = [A], BFS = []: Dequeue A, visit A, enqueue F, C, B
	○	Queue = [F, C, B], BFS = [A].
●	Queue = [F, C, B], BFS = [A]: Dequeue F, visit F, enqueue D.
	○	Queue = [C, B, D], BFS = [A, F].
●	Queue = [C, B, D], BFS = [A, F]: Dequeue C, visit C.
	○	Queue = [B, D], BFS = [A, F, C].
●	Queue = [B, D], BFS = [A, F, C]: Dequeue B, visit B, enqueue G.
	○	Queue = [D, G], BFS = [A, F, C, B].
●	Queue = [D, G], BFS = [A, F, C, B]: Dequeue D, visit D.
	○	Queue = [G], BFS = [A, F, C, B, D].
●	Queue = [G], BFS = [A, F, C, B, D]: Dequeue G, visit G, enqueue E.
	○	Queue = [E], BFS = [A, F, C, B, D, G].
●	Queue = [E], BFS = [A, F, C, B, D, G]: Dequeue E, visit E, enqueue J.
	○	Queue = [J], BFS = [A, F, C, B, D, G, E].
●	Queue = [J], BFS = [A, F, C, B, D, G, E]: Dequeue J, visit J, enqueue K.
	○	Queue = [K], BFS = [A, F, C, B, D, G, E, J].
●	Queue = [K], BFS = [A, F, C, B, D, G, E, J]: Dequeue K, visit K.
	○	Queue = [], BFS = [A, F, C, B, D, G, E, J, K].

BFS Algorithm
BFS(Graph, StartVertex):
    1. Create a queue Q and mark all vertices as unvisited
    2. Mark StartVertex as visited and enqueue it into Q
    3. While Q is not empty:
        a. Dequeue a vertex V from Q
        b. Process vertex V (e.g., print it)
        c. For each neighbor U of V:
            i. If U is not visited:
                - Mark U as visited
                - Enqueue U into Q
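A minimal Python sketch of the BFS pseudocode, run on the adjacency list from the numerical example (using collections.deque as the queue is a choice of this sketch):

from collections import deque

def bfs(graph, start):
    visited = {start}              # mark the start vertex as visited
    order = []
    queue = deque([start])
    while queue:
        v = queue.popleft()        # dequeue a vertex
        order.append(v)            # process it
        for u in graph[v]:
            if u not in visited:   # mark and enqueue unvisited neighbours
                visited.add(u)
                queue.append(u)
    return order

graph = {
    'A': ['F', 'C', 'B'], 'B': ['G', 'C'], 'C': ['F'], 'D': ['C'],
    'E': ['D', 'C', 'J'], 'F': ['D'], 'G': ['C', 'E'],
    'J': ['D', 'K'], 'K': ['E', 'G'],
}
print(bfs(graph, 'A'))   # ['A', 'F', 'C', 'B', 'D', 'G', 'E', 'J', 'K']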
Connected Graph Algorithm
isConnected(Graph):
    1. Mark all vertices as unvisited
    2. Choose a starting vertex StartVertex
    3. Create an empty queue Q
    4. Mark StartVertex as visited and enqueue it into Q

    5. While Q is not empty:
        a. Dequeue a vertex V from Q
        b. For each neighbor U of V:
            i. If U is not visited:
                - Mark U as visited
                - Enqueue U into Q

    6. Check all vertices:
        - If all vertices are visited, return True
        - Otherwise, return False

Strongly Connected Components

●	Strongly Connected Components (SCCs) are parts of a directed graph where every node can reach every other node in the same part.
●	If each SCC is treated as one node, the graph becomes a Directed Acyclic Graph (DAG).
●	Kosaraju's algorithm is a simple way to find all SCCs in a graph.

Kosaraju Algorithm
Kosaraju(G):
    1. Stack = []
    2. Run DFS1 on G to fill Stack by finish time
    3. Transpose G to get G^T
    4. SCCs = []
    5. While Stack is not empty:
        Pop v from Stack
        If v is unvisited in G^T:
            Run DFS2 on G^T starting from v
            Add the nodes visited by this DFS as one SCC to SCCs
    6. Return SCCs
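A compact Python sketch of Kosaraju's two passes (the recursive helpers and the small example graph are illustrative; an iterative DFS would be needed for very deep graphs):

def kosaraju(graph):
    # Pass 1: DFS on G, pushing vertices onto a stack in order of finish time.
    visited, stack = set(), []
    def dfs1(v):
        visited.add(v)
        for u in graph.get(v, []):
            if u not in visited:
                dfs1(u)
        stack.append(v)                      # v is finished
    for v in graph:
        if v not in visited:
            dfs1(v)

    # Transpose G (reverse every edge).
    gt = {v: [] for v in graph}
    for v in graph:
        for u in graph[v]:
            gt.setdefault(u, []).append(v)

    # Pass 2: DFS on G^T in decreasing finish time; each tree is one SCC.
    visited.clear()
    sccs = []
    def dfs2(v, comp):
        visited.add(v)
        comp.append(v)
        for u in gt.get(v, []):
            if u not in visited:
                dfs2(u, comp)
    while stack:
        v = stack.pop()
        if v not in visited:
            comp = []
            dfs2(v, comp)
            sccs.append(comp)
    return sccs

# 1→2→3→1 form one SCC; 4 is its own SCC.
print(kosaraju({1: [2], 2: [3], 3: [1, 4], 4: []}))   # [[1, 3, 2], [4]]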
Convex Hull

●	The convex hull of a set of points in a 2D plane is the smallest convex polygon that encloses all the given points.
●	Imagine stretching a rubber band around the points; it snaps into the shape of the convex hull.
●	The convex hull is a convex polygon, meaning any line segment between two points inside the hull lies entirely within it.
●	It contains all the given points either on its boundary or inside.

Applications
●	Computer Graphics: Shape modeling and collision detection.
●	Geographic Information Systems (GIS): Bounding geographical data.
●	Robotics: Path planning and motion trajectories.
●	Machine Learning: Support vector machines and data clustering.

Example
●	Given points: (1,1), (2,2), (2,0), (2,4), (3,3), (4,2)
●	The convex hull includes the points (1,1), (2,4), (4,2), (2,0), forming a quadrilateral.
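A short Python sketch that computes the hull of the example points. It uses Andrew's monotone chain method, which the notes do not name, so treat the algorithm choice and function names as illustrative:

def convex_hull(points):
    # Monotone chain: sort the points, then build the lower and upper chains.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def build(seq):
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()          # drop points that make a non-left turn
            chain.append(p)
        return chain

    lower = build(pts)
    upper = build(reversed(pts))
    return lower[:-1] + upper[:-1]   # endpoints are shared, so drop duplicates

points = [(1, 1), (2, 2), (2, 0), (2, 4), (3, 3), (4, 2)]
print(convex_hull(points))   # [(1, 1), (2, 0), (4, 2), (2, 4)]

The result matches the four hull vertices in the example; (3,3) lies on the edge between (4,2) and (2,4), so it is not a vertex, and (2,2) lies strictly inside.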
Greedy Algorithms

●	Greedy algorithms are simple and easy to understand.
●	They make decisions based only on the current situation, without thinking about how those decisions might affect the future.
●	They are easy to create, simple to code, and often very fast.
●	However, greedy algorithms don't always give the correct solution for every problem.
●	They are mostly used for solving optimization problems.

Advantages
●	Simple to implement.
●	Efficient for many problems, with time complexity often O(n log n) or better.

Limitations
●	Greedy algorithms may fail when future choices depend on earlier ones, as in the 0/1 knapsack problem, where fractional choices are not allowed.

"Unlike the Fractional Knapsack Problem, where items can be divided, in the 0/1 Knapsack Problem, each item can either be taken whole or left out."
(Quantum 3.22)

Activity Selection Problem

●	The Activity Selection Problem is about selecting the maximum number of activities that do not overlap, given their start and finish times.
●	Problem Statement
●	You are given n activities, each with a start time (s[i]) and a finish time (f[i]).
●	Choose the maximum number of activities that can be performed without overlapping.

Algorithm for Activity Selection Problem
●	Sort the activities by their finish times.
●	Select the first activity (the one that finishes earliest).
●	For each subsequent activity, check if its start time is greater than or equal to the finish time of the last selected activity.
●	If yes, select the activity.

Example
●	Input:
	Activities = (1,3),(2,5),(4,6),(6,8),(5,7)   (each pair represents (start, finish))
●	Steps:
●	Sort activities by finish times: (1,3),(2,5),(4,6),(5,7),(6,8)
●	Select (1,3).
●	Next non-overlapping activities: (4,6), (6,8)

Output:
Selected activities: (1,3),(4,6),(6,8)

Time Complexity:
●	Sorting: O(n log n)
●	Selection: O(n)
●	Total: O(n log n)

Applications:
●	Scheduling tasks.
●	Allocating resources.
(Quantum 3.13)
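A minimal Python sketch of the greedy selection above, run on the example activities (function and variable names are illustrative):

def select_activities(activities):
    # Sort by finish time, then greedily pick every activity that starts
    # no earlier than the finish time of the last selected one.
    selected = []
    last_finish = float('-inf')
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:
            selected.append((start, finish))
            last_finish = finish
    return selected

activities = [(1, 3), (2, 5), (4, 6), (6, 8), (5, 7)]
print(select_activities(activities))   # [(1, 3), (4, 6), (6, 8)]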
Fractional Knapsack Problem

●	The Fractional Knapsack Problem involves maximizing the total value of items placed in a knapsack with a given weight capacity.
●	Unlike the 0/1 Knapsack Problem, you can take fractions of an item.
●	Example Problem
●	Given:
●	Items: (value, weight) = [(60,10), (100,20), (120,30)]
●	Knapsack capacity W = 50
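A greedy Python sketch for the example above: sort items by value-to-weight ratio and take a fraction of the last item that fits (the function name and item format are assumptions of this sketch):

def fractional_knapsack(items, capacity):
    # items: list of (value, weight); take items in decreasing value/weight ratio.
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)        # whole item, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total

items = [(60, 10), (100, 20), (120, 30)]
print(fractional_knapsack(items, 50))   # 240.0

For the given items and W = 50, this takes all of the first two items and 20/30 of the third, giving a total value of 240.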
Prim's Algorithm

●	It is a greedy algorithm that is used to find the minimum spanning tree (MST) of a graph.
●	Prim's algorithm starts with a single node and explores all the adjacent nodes with all the connecting edges at every step.
●	It is used for undirected graphs.
●	Time complexity: O(E log V); space complexity: O(V).

Algorithm Steps
Initialization: Choose any vertex as the starting point.
Process: Repeat until all vertices are part of the MST:
    a. From the current MST, find the smallest-weight edge connecting to a new vertex.
    b. Add this edge and vertex to the MST.
End: Stop when the MST includes all vertices.
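A minimal Python sketch of Prim's algorithm using heapq as the priority queue (the adjacency-list format and the small example graph are assumptions of this sketch):

import heapq

def prim_mst(graph, start):
    # graph: {vertex: [(weight, neighbor), ...]} for an undirected weighted graph.
    visited = {start}
    heap = list(graph[start])        # candidate edges leaving the current MST
    heapq.heapify(heap)
    mst, total = [], 0
    while heap and len(visited) < len(graph):
        weight, v = heapq.heappop(heap)      # smallest-weight candidate edge
        if v in visited:
            continue
        visited.add(v)
        mst.append((weight, v))
        total += weight
        for edge in graph[v]:                # new candidate edges
            if edge[1] not in visited:
                heapq.heappush(heap, edge)
    return mst, total

graph = {
    'A': [(2, 'B'), (3, 'C')],
    'B': [(2, 'A'), (1, 'C'), (4, 'D')],
    'C': [(3, 'A'), (1, 'B'), (5, 'D')],
    'D': [(4, 'B'), (5, 'C')],
}
print(prim_mst(graph, 'A'))   # ([(2, 'B'), (1, 'C'), (4, 'D')], 7)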

Kruskal's Algorithm

●	Kruskal's Algorithm is a greedy algorithm that is used to find the minimum spanning tree of a graph.
●	In Kruskal's algorithm, we start from the edges with the lowest weight and keep adding edges until the goal is reached.
●	Time complexity: O(E log E); space complexity: O(V).

Algorithm Steps
Initialize: Create a forest with each vertex as a separate tree.
Sort Edges: Arrange all edges in increasing order of their weights.
Build MST: Pick the smallest edge. If it doesn't form a cycle, include it in the MST.
Repeat: Continue until the MST has V - 1 edges (where V is the number of vertices).
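A short Python sketch of Kruskal's algorithm with a simple union-find to detect cycles (the edge-list format and example graph are assumptions of this sketch):

def kruskal_mst(num_vertices, edges):
    # edges: list of (weight, u, v); vertices are 0 .. num_vertices-1.
    parent = list(range(num_vertices))

    def find(x):                     # find the root of x's tree (with path compression)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for weight, u, v in sorted(edges):       # edges in increasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                         # different trees, so adding this edge forms no cycle
            parent[ru] = rv                  # union the two trees
            mst.append((weight, u, v))
            total += weight
        if len(mst) == num_vertices - 1:
            break
    return mst, total

edges = [(2, 0, 1), (3, 0, 2), (1, 1, 2), (4, 1, 3), (5, 2, 3)]
print(kruskal_mst(4, edges))   # ([(1, 1, 2), (2, 0, 1), (4, 1, 3)], 7)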


Dijkstra's Algorithm

●	Dijkstra's algorithm is an algorithm for finding the shortest paths between nodes in a weighted graph.
●	It is a type of greedy algorithm that only works on weighted graphs having positive weights.
●	It can also be used for finding the shortest path from a single node to a single destination.
●	Time complexity: O(E log V); space complexity: O(V).

Algorithm Steps
1. Mark the source node with a current distance of 0 and set the distance of all other nodes to INFINITY.
2. Select the unvisited node with the smallest current distance as the current node ("curr").
3. For each neighbor N of the current node curr:
4.     If dist[curr] + weight(curr, N) < dist[N], update dist[N] = dist[curr] + weight(curr, N).
5. Mark the current node curr as visited.
6. Repeat from Step 2 until all nodes are visited.

Implementation of Dijkstra's Algorithm
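A minimal heapq-based sketch in Python (the adjacency-list input format and example graph are assumptions; this is an illustrative sketch rather than the notes' original implementation):

import heapq

def dijkstra(graph, source):
    # graph: {vertex: [(neighbor, weight), ...]} with non-negative weights.
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    heap = [(0, source)]                 # (current distance, vertex)
    visited = set()
    while heap:
        d, curr = heapq.heappop(heap)    # unvisited vertex with the smallest distance
        if curr in visited:
            continue
        visited.add(curr)
        for neighbor, weight in graph[curr]:
            new_dist = d + weight
            if new_dist < dist[neighbor]:        # relax the edge
                dist[neighbor] = new_dist
                heapq.heappush(heap, (new_dist, neighbor))
    return dist

graph = {
    'A': [('B', 1), ('C', 4)],
    'B': [('C', 2), ('D', 6)],
    'C': [('D', 3)],
    'D': [],
}
print(dijkstra(graph, 'A'))   # {'A': 0, 'B': 1, 'C': 3, 'D': 6}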

Bellman-Ford Algorithm

●	The Bellman-Ford algorithm is used to find the shortest paths from a source vertex to all other vertices in a weighted graph.
●	Unlike Dijkstra's algorithm, it can handle graphs with negative edge weights.
●	Handles negative weights: Works for graphs with both positive and negative edge weights (but no negative-weight cycles).
●	Time complexity: O(VE), where V is the number of vertices and E is the number of edges.
●	Space complexity: O(V).

Implementation of Bellman-Ford Algorithm

Algorithm Steps
1. Initialize distances:
    dist[source] = 0
    dist[all other vertices] = infinity

2. Repeat (V - 1) times:
    For each edge (u, v) with weight w:
        If dist[u] + w < dist[v]:
            dist[v] = dist[u] + w

3. Check for negative-weight cycles:
    For each edge (u, v) with weight w:
        If dist[u] + w < dist[v]:
            Report "Negative weight cycle exists"
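A runnable Python sketch of the steps above (the edge-list input format and the example graph are assumptions of this sketch):

def bellman_ford(num_vertices, edges, source):
    # edges: list of (u, v, w) directed edges; vertices are 0 .. num_vertices-1.
    INF = float('inf')
    dist = [INF] * num_vertices
    dist[source] = 0

    # Relax every edge V - 1 times.
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w

    # One more pass: any further improvement means a negative-weight cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("Negative weight cycle exists")
    return dist

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3), (2, 3, 2)]
print(bellman_ford(4, edges, 0))   # [0, 4, 1, 3]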
