
Algorithm Overview: Definition and Properties

An algorithm is essentially a recipe for solving a problem. It's a set of well-defined, step-by-step
instructions that take some input and produce a specific output. Here's a breakdown of the key
points:

Definition:
 A finite set of instructions for solving a problem in a specific order [1].
 A procedure for performing a computation or solving a problem [2].
Properties of a Good Algorithm:
 Finiteness: The algorithm must terminate after a finite number of steps, meaning it
shouldn't run forever [3].
 Clarity: The instructions should be clear, unambiguous, and easy to understand [2, 4].
 Input: The algorithm should have well-defined input, specifying the type and format of data
it can handle [4].
 Output: The algorithm should produce a well-defined output, specifying the type and
format of the results [4].
 Correctness: The algorithm must produce the desired output for every valid input [4].
 Efficiency: Ideally, the algorithm should use resources (like time and memory) in an
optimal way for the given problem [4].
Real-world examples of algorithms:
 A recipe is an algorithm for cooking a dish.
 The long division method is an algorithm for solving division problems.
 The steps you follow to tie your shoes are an algorithm.

By understanding algorithms, we can break down complex problems into smaller, manageable
steps and design efficient solutions for computers and other applications.

Differences between Algorithm and Program

Aspect | Algorithm | Program
Definition | A sequence of well-defined steps to solve a problem. | A set of instructions written in a specific programming language to implement an algorithm.
Nature | Abstract, conceptual. | Concrete, executable.
Execution | Typically described in pseudocode or a high-level language. | Executable code written in a programming language.
Purpose | To solve a problem conceptually. | To implement a solution in a specific environment.
Input | Abstract, often not defined. | Concrete, defined by the program's input mechanism.
Output | Abstract, often not defined. | Concrete, produced by the program's execution.
Termination | Must terminate after a finite number of steps. | May terminate or loop indefinitely depending on its design.
Implementation | Can be implemented in various programming languages. | Implemented in a specific programming language.
Flexibility | More flexible, allowing for multiple implementations. | Less flexible, tied to a specific programming language.
Optimization | Focuses on efficiency and correctness at a conceptual level. | Optimized for efficiency and correctness in the implemented code.
Validation | Validated through analysis and testing. | Validated through testing and debugging.

Algorithm vs. Program

Feature | Algorithm | Program
Definition | Step-by-step instructions to solve a problem | Implementation of an algorithm in a specific programming language
Level of Abstraction | More abstract, focuses on logic | More specific, translates logic into code
Language Dependence | Language independent (can be written in plain English, pseudocode, or flowcharts) | Language dependent (written in a specific programming language like Python or Java)
Focus | Solves a problem in a general way | Executes instructions on a computer
Example | Long division method | Python code to perform long division


Algorithm representation

Algorithms can be represented in various ways, depending on the context and the audience. Here are
some common methods of representing algorithms:
1. Pseudocode: Pseudocode is a high-level description of an algorithm that combines natural
language and some programming language-like syntax. It's designed to be easily understood by
humans and is often used during the initial stages of algorithm design. Pseudocode abstracts
away specific programming language syntax, focusing on the logical flow of the algorithm.
Example:
// Algorithm to find the maximum element in an array
max = array[0]
for each element in array
    if element > max
        max = element
return max

2. Flowcharts: Flowcharts use graphical symbols and arrows to represent the logical flow of an
algorithm. They provide a visual representation of the sequence of steps, decision points, and
loops within an algorithm. Flowcharts are particularly useful for illustrating complex branching
and looping structures.

Basic Flowchart Symbols


Process/Operation Symbols
Process/operation flowchart symbols are excellent for describing a sequence of operations. A
well-designed flowchart depicts the process from beginning to end.

Branch and Control of Flow Symbols

These symbols mark the decision points that determine which path the process follows next. They are
frequently drawn as a branch whose outgoing arrows turn left or right depending on the outcome.
Control-flow symbols are used to describe complex logic, such as decision making in algorithms and
other problem-solving methods.

Input and Output Symbols

These flowchart symbols represent the steps that convert input into output. Using Input and Output
symbols, a flowchart shows where data enters and where results leave each process, in the order in
which the steps are carried out.
Data and Information Storage Symbols

The data could be structured information, unstructured information, raw data, or digital
information. Databases, text files, spreadsheets, or a combination of these formats can be used to
store various types of information. The primary distinction between structured and unstructured
data is that the latter is not organized in a predictable manner.

Data Processing Symbols

Data-processing symbols are used to communicate data-flow processes among people who may or
may not be working with computers; they express what is required and what will happen next.
3. Structured English: Structured English is a natural language-based representation of an
algorithm that uses structured constructs such as sequence, selection, and iteration. It's similar to
pseudocode but is written in plain English sentences, making it easier for non-technical
stakeholders to understand.
Example:
Begin
    Set max to the first element of the array
    For each element in the array
        If the element is greater than max
            Set max to the element
        End if
    End for
    Return max

4. Programming Language Code: Algorithms can also be represented directly in programming
languages using actual code syntax. This representation is executable and can be run on a
computer to solve real-world problems.
Example (in Python):
def find_max(arr):
    max_val = arr[0]
    for elem in arr:
        if elem > max_val:
            max_val = elem
    return max_val

5. Structured Diagrams: Other types of diagrams, such as Nassi-Shneiderman diagrams or UML
activity diagrams, can be used to represent algorithms. These diagrams provide structured visual
representations similar to flowcharts but may use different symbols and conventions.

RESOURCES CONSIDERED FOR ALGORITHM ANALYSIS


There are several key resources considered for algorithm analysis:

1. Time Complexity:
 This measures the amount of time an algorithm takes to execute as the input size grows. It's
often expressed using Big O Notation, which categorizes how the execution time scales with
input size (e.g., O(n) for linear time, O(n^2) for quadratic time).
2. Space Complexity:
 This measures the amount of memory an algorithm needs to run as the input size grows. It's also
commonly analyzed using Big O Notation to understand the memory requirements based on
input size.

3. Algorithmic Paradigms:
 These are broad categories of algorithms designed to solve specific problems in a certain way.
Common paradigms include:
o Divide-and-Conquer: Breaks down a problem into smaller subproblems, solves them
recursively, and combines the solutions.
o Dynamic Programming: Overlapping subproblems are solved only once and stored for
reuse, improving efficiency.
o Greedy Algorithms: Make the locally optimal choice at each step with the hope of
finding a global optimum.
4. Data Structures:
 The way data is organized in memory can significantly impact algorithm performance.
Understanding how different data structures (e.g., arrays, linked lists, trees) affect access times
and operations is crucial for efficient algorithm design.
5. Analysis Techniques:
 Techniques like profiling and benchmarking help measure the actual running time and memory
usage of algorithms on specific hardware and inputs. This provides practical insights into real-
world performance.
Resources for further exploration:
 Textbooks on Algorithm Design and Analysis (e.g., Introduction to Algorithms by Cormen et
al.)
 Online Courses on Algorithm Analysis (e.g., Introduction to Algorithms on Coursera)
 Online Resources like Wikipedia articles on Time Complexity, Space Complexity, and
Algorithmic Paradigms

Divide and Conquer


Divide and conquer is a powerful algorithmic paradigm for solving problems by breaking them
down into smaller, more manageable subproblems. Here's a breakdown of the key concepts:

Idea:
1. Divide: Break the problem into smaller subproblems that are similar to the original
problem but smaller in size.
2. Conquer: Solve each subproblem recursively. If the subproblem is small enough, solve it
directly using a base case.
3. Combine: The solutions to the subproblems are combined to form the solution to the
original problem.
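For illustration, here is a minimal Python sketch of the three steps, assuming the simple task of
finding the maximum element of a list (the function name is illustrative):

def max_dc(arr):
    # Base case: a single element is its own maximum
    if len(arr) == 1:
        return arr[0]
    mid = len(arr) // 2
    left_max = max_dc(arr[:mid])     # Divide and conquer the left half
    right_max = max_dc(arr[mid:])    # Divide and conquer the right half
    return max(left_max, right_max)  # Combine the two partial answers

# Example usage:
print(max_dc([3, 1, 4, 2]))  # 4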

Benefits:
 Efficiency: Divide and conquer can often lead to more efficient algorithms compared to
brute-force approaches, especially for complex problems.
 Parallelization: The independent nature of subproblems makes divide-and-conquer
algorithms well-suited for parallel processing, where multiple processors can solve
subproblems simultaneously.

Examples of Divide-and-Conquer Algorithms:


 Merge Sort: Sorts a list by dividing it into halves, recursively sorting each half, and then
merging the sorted halves.
 Quick Sort: Selects a pivot element from the list, partitions the list around the pivot
(elements less than the pivot come before, and elements greater than the pivot come
after), and then recursively sorts the sub-lists on either side of the partition.
 Binary Search: Searches for a target value within a sorted array by repeatedly dividing the
search interval in half until the target is found or the interval becomes empty.
Real-world Applications:
 Divide-and-conquer is used in various applications, including:
o Image and signal processing (e.g., Fast Fourier Transform)
o Cryptography (e.g., RSA algorithm)
o Robotics (e.g., path planning)
When to Use Divide and Conquer:
 Divide-and-conquer is a good choice for problems that can be naturally divided into
independent subproblems.
 It's also well-suited for problems where the time complexity of solving subproblems is
significantly less than the time complexity of solving the original problem directly.
Limitations:
 Divide-and-conquer might not be suitable for problems with high overhead in the divide and
combine steps.
 Additionally, it may not be efficient for problems where the subproblems are not truly
independent.

a. Path
A path is a sequence of vertices, where each consecutive pair of vertices is connected by an
edge in the graph.
Formally, a path in a graph G is defined as a sequence of vertices v1, v2, …, vn such that for every i
with 1 ≤ i < n, there exists an edge between vertices vi and vi+1.
A path can be of varying lengths, including:
1. Simple Path: A path where all vertices are distinct, except for the possibility of the first
and last vertices being the same (forming a cycle if they are).
2. Cycle: A path where the first and last vertices are the same, and all other vertices are
distinct. A cycle must contain at least three vertices.
3. Elementary Path: A path where all edges are distinct, i.e., no edge is repeated in the
path.
4. Hamiltonian Path: A path that visits every vertex exactly once in a graph. If the
Hamiltonian path ends where it started, it forms a Hamiltonian cycle.
5. Eulerian Path: A path that traverses every edge exactly once. If the Eulerian path ends
where it started, it forms an Eulerian cycle.

b. Circle
A circle is a closed path in a graph where the first and last vertices are the same.
Formally, a cycle in a graph G is a sequence of vertices v1, v2, …, vn such that v1 = vn and, for
every i with 1 ≤ i < n, there exists an edge between vertices vi and vi+1.
Cycles can be classified into various types based on their properties:
1. Simple Cycle: A cycle where all vertices are distinct except for the first and last
vertices, which are the same. There are no repeated vertices or edges along the cycle.
2. Eulerian Cycle: A cycle that traverses every edge of the graph exactly once and
returns to the starting vertex. In other words, it's a closed trail that visits every edge of
the graph exactly once.
3. Hamiltonian Cycle: A cycle that visits every vertex of the graph exactly once and
returns to the starting vertex. In other words, it's a closed path that visits every vertex of
the graph exactly once.
4. Non-simple Cycle: A cycle that may contain repeated vertices or edges. It's not
necessarily a simple cycle.

c. Tree
A tree is a type of undirected graph that is acyclic (contains no cycles) and connected.

Characteristics:

 Nodes and Edges: A tree consists of vertices (often called nodes) connected by edges (branches).
 Leaves: Vertices with only one edge connecting them are called leaves.
 Degree of a Node: The number of edges connected to a node is its degree. In a tree, leaves have
degree 1, while internal nodes may have any larger degree.
 Root Node (Optional): Trees can have a designated root node, acting as the starting point
of the hierarchical structure. However, some trees may not have a designated root.

Trees have numerous applications, including:


1. Data structures: Trees are commonly used as the underlying structure for various data
structures such as binary trees, binary search trees, and heaps.
2. Networks: Trees are used to model hierarchical structures in computer networks, such
as the Internet or organizational networks.
3. Algorithms: Many algorithms in computer science, such as tree traversal algorithms,
rely on trees as their underlying data structure.
4. Routing and optimization: Trees are used in routing algorithms to find the most efficient
paths in networks.
5. Representation of hierarchical data: Trees are often used to represent hierarchical data
structures such as file systems, organization charts, and XML documents.

d. Strongly Connected
A strongly connected graph is a directed graph (one whose edges have a specific direction,
indicated by an arrow) in which every vertex can reach every other vertex.
For every pair of vertices (u, v) in the graph, there exists:

o A directed path from u to v (following the direction of the edges).
o A directed path from v to u (following the direction of the edges).
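A minimal Python sketch of this check, assuming the graph is an adjacency-list dictionary with an
entry for every vertex (the function names are illustrative): it tests that every vertex is reachable
from one starting vertex both in the graph and in its reversed copy.

def reachable(graph, start):
    # Simple depth-first search collecting every vertex reachable from start
    seen, stack = set(), [start]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(graph.get(u, []))
    return seen

def is_strongly_connected(graph):
    vertices = list(graph)
    if not vertices:
        return True
    start = vertices[0]
    # Every vertex must be reachable from start...
    if set(vertices) - reachable(graph, start):
        return False
    # ...and start must be reachable from every vertex (check on the reversed graph)
    reverse = {v: [] for v in vertices}
    for u in graph:
        for v in graph[u]:
            reverse[v].append(u)
    return not (set(vertices) - reachable(reverse, start))

# Example usage:
print(is_strongly_connected({'A': ['B'], 'B': ['C'], 'C': ['A']}))  # True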

Applications

Strongly connected graphs have applications in various areas, including:


 Transportation networks: Analyzing one-way routes in traffic systems.
 Circuit design: Identifying independent components in electronic circuits.
 Algorithmic analysis: Studying the efficiency of algorithms on different graph
structures.
Breadth-First Search
Breadth-First Search (BFS) is a graph traversal algorithm that explores a graph
systematically. It starts at a specific vertex (node) and explores all its neighbor vertices at the
current depth level before moving on to the next depth level.
BFS algorithm:
1. Choose a starting node: Select a starting node as the root of the BFS traversal.
2. Initialize data structures: Initialize a queue to keep track of the nodes to be visited, and a set or
array of visited nodes to avoid revisiting them. Enqueue the starting node into the queue and mark
it as visited.
3. BFS traversal:
 While the queue is not empty:
 Dequeue a node from the queue (let's call it the current node).
 Process the current node (e.g., visit or perform any desired operation).
 Enqueue all the neighboring nodes of the current node that have not been visited
yet, marking each one as visited as it is enqueued so that no node enters the queue
twice.
4. Repeat until the queue is empty: Continue this process until all nodes reachable from the
starting node have been visited.
5. Output: The output of the BFS algorithm depends on the specific problem being solved. For
example, if you are interested in finding the shortest path from the source node to every other
node, you can maintain a distance array or dictionary that stores the shortest distance from the
source to each node.
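A minimal Python sketch of these steps, assuming the graph is given as an adjacency-list dictionary
(the example graph below is illustrative):

from collections import deque

def bfs(graph, start):
    # graph: dict mapping each node to a list of its neighbours
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()            # Dequeue the current node
        order.append(node)                # Process it (here: record the visit order)
        for neighbour in graph.get(node, []):
            if neighbour not in visited:  # Enqueue unvisited neighbours, marking them visited
                visited.add(neighbour)
                queue.append(neighbour)
    return order

# Example usage:
g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(g, 'A'))  # ['A', 'B', 'C', 'D']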

Depth-First Search
Depth-First Search (DFS) is another fundamental graph traversal algorithm that explores as
far as possible along each branch before backtracking. It operates on graphs represented
either as adjacency lists or adjacency matrices. It is often used for searching or traversing a
graph and can be used to detect cycles, find connected components, and perform topological
sorting.

DFS algorithm:
1. Choose a starting node: Select a starting node as the root of the DFS traversal.
2. Initialize data structures: Initialize a stack (or use recursion with the call stack) to keep track of
the nodes to be visited. Also, initialize a set or array to keep track of visited nodes to avoid
revisiting them.
3. DFS traversal:
 Push the starting node onto the stack (or make a recursive call with the starting node).
 While the stack is not empty (or the recursive calls continue):
o Pop a node from the stack (or take the current node if using recursion); if it has
already been visited, skip it.
o Process the current node (e.g., visit or perform any desired operation).
o Push all the neighboring nodes of the current node onto the stack that have not
been visited yet.
o Mark the current node as visited.
4. Repeat until the stack is empty: Continue this process until all nodes reachable from the
starting node have been visited.
5. Output: The output of the DFS algorithm depends on the specific problem being solved.
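A minimal iterative Python sketch of these steps, again assuming an adjacency-list dictionary (a
recursive version using the call stack works the same way):

def dfs(graph, start):
    # graph: dict mapping each node to a list of its neighbours
    visited = set()
    stack = [start]
    order = []
    while stack:
        node = stack.pop()           # Pop the current node
        if node in visited:
            continue                 # Already processed via another branch
        visited.add(node)            # Mark it as visited
        order.append(node)           # Process it (here: record the visit order)
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                stack.append(neighbour)
    return order

# Example usage:
g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(dfs(g, 'A'))  # e.g. ['A', 'C', 'D', 'B']
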
Question Five
a. (i) Computational Graph
A computational graph, also known as a computation graph, is a directed acyclic graph (DAG) used
in computer science and machine learning as a mathematical representation to model and perform
computations. It consists of nodes, representing operations or variables, and directed edges,
representing the flow of data between these operations.
Formally, a computational graph G is defined as a directed graph (V, E), where:
V is a set of nodes representing operations or variables.
E is a set of directed edges representing dependencies between nodes.

(ii) Graphs are considered a useful model for most computation problems due to
several reasons:
1. Flexibility: Graphs are highly flexible and can model a wide range of real-world
scenarios, from social networks to transportation systems to molecular structures. This
versatility makes them applicable to various computation problems across different
domains.
2. Abstraction: Graphs provide a powerful abstraction that allows complex systems to be
represented in a simple and intuitive manner.
3. Efficiency: Graph algorithms often have efficient solutions for many computation
problems.
4. Visualization: Graphs can be visually represented, allowing practitioners to gain
insights into complex structures and relationships. Visualization helps in understanding
the problem, designing algorithms, and interpreting results.
5. Modularity: Graphs support modular design, where complex problems can be
decomposed into smaller subproblems represented as graphs.
6. Interdisciplinary Applications: Graphs find applications in diverse fields, including
computer science, biology, sociology, economics, and many others. This
interdisciplinary nature highlights the utility of graphs as a universal model for
computation problems.

b. Dijkstra's algorithm
Dijkstra's algorithm is a graph algorithm used to find the shortest path from a single source
vertex to all other vertices in a weighted graph with non-negative edge weights. It does this by
iteratively selecting the unprocessed vertex with the smallest tentative distance from the source and
updating the distances of its neighboring vertices accordingly.

Dijkstra's algorithm:
1. Initialize the distance of the source vertex to 0 and the distances of all other vertices to
infinity.
2. Create a priority queue (usually implemented with a min-heap) to store vertices
ordered by their tentative distances from the source.
3. While the priority queue is not empty:
 Extract the vertex with the smallest tentative distance from the priority queue.
 For each neighboring vertex not yet processed:
 Update its tentative distance if a shorter path through the current vertex is
found.
 Update the priority queue accordingly.
4. Repeat until all vertices have been processed.
The time complexity of Dijkstra's algorithm with a binary-heap priority queue is O((V + E) log V),
where V is the number of vertices and E is the number of edges in the graph.
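A minimal Python sketch of the steps above, using heapq as the priority queue and assuming the
graph is a dictionary mapping each vertex to a list of (neighbour, weight) pairs:

import heapq

def dijkstra(graph, source):
    # graph: dict mapping each vertex to a list of (neighbour, weight) pairs
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]  # (tentative distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)     # Vertex with the smallest tentative distance
        if d > dist[u]:
            continue                 # Stale entry; a shorter path was already found
        for v, w in graph[u]:
            if d + w < dist[v]:      # Relax the edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Example usage:
g = {'A': [('B', 1), ('C', 4)], 'B': [('C', 2)], 'C': []}
print(dijkstra(g, 'A'))  # {'A': 0, 'B': 1, 'C': 3}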

(ii) Usefulness

 Finding Shortest Paths: Dijkstra's algorithm excels at identifying the path with the
minimum cost (distance, time, etc.) between a starting point (source node) and all other
reachable points (destination nodes) in a weighted graph. The weights assigned to edges
represent the cost of traversing that connection.

 Efficiency: With a binary-heap priority queue, Dijkstra's algorithm runs in O((V + E) log V)
time, where E is the number of edges and V is the number of vertices. This makes it efficient for
handling large networks.

 Non-Negative Edge Weights: The algorithm works best with graphs where edge weights
are non-negative. This ensures it prioritizes finding the path with the least cumulative cost.

Applications

Dijkstra's algorithm has a wide range of applications in various fields:

 Navigation Systems: It's a core component of navigation apps like Google Maps, helping
determine the fastest route between two locations, considering factors like traffic or road
closures (represented by edge weights).

 Network Routing: In computer networks, it's used to find the optimal path for data packets
to travel between devices, ensuring efficient data flow.

 Logistics and Delivery Services: Delivery companies can utilize Dijkstra's algorithm to
optimize delivery routes, minimizing travel time and cost.

 Social Network Analysis: The algorithm can be adapted to find the shortest path (in
terms of number of connections) between users in a social network.

 Financial Applications: It can be used to identify the most cost-effective investment
strategies or resource allocation plans within financial models.

 Bioinformatics: Dijkstra's algorithm finds applications in protein structure analysis, where
it helps identify the shortest pathways for molecules within a protein.

Question Three
a. Memoization is an optimization technique used to speed up certain computer
programs. It works by storing the results of expensive function calls and reusing them
when the same inputs are encountered later. This avoids redundant calculations and
can significantly improve performance, especially for functions that involve complex
computations.

An algorithm that uses memoization concept

# Memoization table to store computed factorials
factorial_memo = {}

def factorial(n):
    # Base cases: 0! = 1 and 1! = 1
    if n == 0 or n == 1:
        return 1

    # Check if the result is already memoized
    if n in factorial_memo:
        return factorial_memo[n]

    # Compute the factorial recursively
    fact_n = n * factorial(n - 1)

    # Memoize the result
    factorial_memo[n] = fact_n

    return fact_n

# Example usage:
n = 5
print("Factorial({}) = {}".format(n, factorial(n)))

The selection sort algorithm sorts a collection by repeatedly finding the minimum element
from the unsorted part and moving it to the beginning of the collection.
Unsorted elements {3, 1, 4, 2}:
1. Initial Collection: {3, 1, 4, 2}
2. First Pass: Find the minimum element (1) and swap it with the first element. Updated
collection: {1, 3, 4, 2}
3. Second Pass: Find the minimum element (2) and swap it with the second element.
Updated collection: {1, 2, 4, 3}
4. Third Pass: Find the minimum element (3) and swap it with the third element. Updated
collection: {1, 2, 3, 4}
5. Fourth Pass: No action needed as only one element remains.
6. Result: Sorted collection: {1, 2, 3, 4}
The time complexity of selection sort is O(n^2).
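A minimal Python sketch of the passes described above (an in-place version, for illustration):

def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # Find the index of the minimum element in the unsorted part
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Swap it into position i
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

# Example usage:
print(selection_sort([3, 1, 4, 2]))  # [1, 2, 3, 4]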

Merge sort is a divide-and-conquer algorithm that sorts a collection by recursively dividing it into
smaller subcollections, sorting each subcollection, and then merging the sorted subcollections to
produce the final sorted result.
Unsorted elements {3, 1, 4, 2}:
1. Initial Collection: {3, 1, 4, 2}
2. Divide Phase: Divide the collection into two halves: {3, 1} and {4, 2}.
3. Recursive Sorting: Recursively sort each half:
 Sort {3, 1} into {1, 3}.
 Sort {4, 2} into {2, 4}.
4. Merge Phase: Merge the sorted halves {1, 3} and {2, 4}:
 Compare elements from each half and select the smaller one.
 Produce the final sorted result: {1, 2, 3, 4}.
Merge sort has a time complexity of O(n log n)
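A minimal Python sketch of the divide and merge phases described above:

def merge_sort(arr):
    # Base case: lists of length 0 or 1 are already sorted
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # Sort the left half
    right = merge_sort(arr[mid:])   # Sort the right half
    # Merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

# Example usage:
print(merge_sort([3, 1, 4, 2]))  # [1, 2, 3, 4]
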
Binary search is an efficient algorithm for finding a target value within a sorted collection. It
works by repeatedly dividing the search interval in half until the target value is found or the
interval becomes empty.
 Initial Collection: {1, 2, 3, 4}
 Target Value: We're searching for the value 3.
 Search Interval: Initially, the entire collection is the search interval: [1, 4].
 Binary Search:
 First Comparison: Compare the target value (3) with the middle element (2) of the
search interval.
 Since 3 is greater than 2, discard the left half of the interval.
 Updated Search Interval: [3, 4].
 Second Comparison: Compare the target value (3) with the middle element (3) of
the updated search interval.
 The target value matches the middle element, so we have found the value 3
in the collection.
 Return the index of the found value (index 2).
 Result: The value 3 is found at index 2 in the collection.

Binary search has a time complexity of O(log n), making it an efficient algorithm for searching
sorted collections.
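A minimal Python sketch of the halving process described above (returns the index of the target, or
-1 if it is absent):

def binary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid            # Found the target
        elif arr[mid] < target:
            lo = mid + 1          # Discard the left half
        else:
            hi = mid - 1          # Discard the right half
    return -1                     # Interval became empty

# Example usage:
print(binary_search([1, 2, 3, 4], 3))  # 2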

Insertion sort is a simple sorting algorithm that builds the final sorted array one element at a
time by repeatedly taking the next element from the unsorted part of the collection and
inserting it into its correct position in the sorted part.
1. Initial Collection: {3, 1, 4, 2}
2. Sorted and Unsorted Parts: Initially, the first element (3) is considered sorted, and the
rest of the elements (1, 4, 2) are unsorted.
3. Insertion Process:
 First Pass: Insert the second element (1) into its correct position in the sorted part
of the collection.
 Compare 1 with 3 (the only element in the sorted part). Since 1 is less than 3,
swap them. Updated Collection: {1, 3, 4, 2}
 Second Pass: Insert the third element (4) into its correct position in the sorted part
of the collection.
 Compare 4 with 3 (the last element in the sorted part). Since 4 is greater than
3, no swap is needed. Updated Collection: {1, 3, 4, 2}
 Third Pass: Insert the fourth element (2) into its correct position in the sorted part
of the collection.
 Compare 2 with 4 and then with 3; both are greater, so they shift one position to the
right. 1 is smaller, so stop and insert 2 after 1. Updated Collection: {1, 2, 3, 4}
4. Result: The final sorted collection is {1, 2, 3, 4}.
Insertion sort has a time complexity of O(n^2) in the worst case.
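A minimal Python sketch of the insertion process described above:

def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift larger elements of the sorted part one position to the right
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key  # Insert the element into its correct position
    return arr

# Example usage:
print(insertion_sort([3, 1, 4, 2]))  # [1, 2, 3, 4]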

Bubble sort is a simple sorting algorithm that repeatedly compares adjacent elements in the
collection and swaps them if they are in the wrong order.
Initial Collection: {3, 1, 4, 2}
First Pass:
 Compare the first two elements (3 and 1). Since 3 is greater than 1, swap them.
Updated Collection: {1, 3, 4, 2}
 Compare the next two elements (3 and 4). Since they are in the correct order, no swap
is needed. Updated Collection: {1, 3, 4, 2}
 Compare the next two elements (4 and 2). Since 4 is greater than 2, swap them.
Updated Collection: {1, 3, 2, 4}
Second Pass:
 Compare the first two elements (1 and 3). Since they are in the correct order, no swap
is needed. Updated Collection: {1, 3, 2, 4}
 Compare the next two elements (3 and 2). Since 3 is greater than 2, swap them.
Updated Collection: {1, 2, 3, 4}
 Compare the next two elements (3 and 4). Since they are in the correct order, no swap
is needed. Updated Collection: {1, 2, 3, 4}
Third Pass:
 Compare the first two elements (1 and 2). Since they are in the correct order, no swap
is needed. Updated Collection: {1, 2, 3, 4}
 Compare the next two elements (2 and 3). Since they are in the correct order, no swap
is needed. Updated Collection: {1, 2, 3, 4}
 Compare the next two elements (3 and 4). Since they are in the correct order, no swap
is needed. Updated Collection: {1, 2, 3, 4}
Result: The final sorted collection is {1, 2, 3, 4}.
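A minimal Python sketch of the passes described above (with an early exit when a pass makes no
swaps):

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]  # Swap the adjacent pair
                swapped = True
        if not swapped:
            break  # Collection is already sorted
    return arr

# Example usage:
print(bubble_sort([3, 1, 4, 2]))  # [1, 2, 3, 4]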
