
CSC 425 (Summary Note)

Algorithms

Algorithms can be defined as the sequence of steps used to solve a problem. The sequence presents
a unique method of addressing an issue by providing a particular solution.

In order for a process to represent an algorithm, it must be:

1. Finite: The algorithm must terminate after a finite number of steps.
2. Well-defined: The series of steps must be precise, and each step must be unambiguous and understandable.
3. Effective: The algorithm must solve every instance of the problem for which it was defined.

Use-cases of Algorithms

1. Searching: Locating information or verifying that the information you see is the
information you want is an essential task.
2. Sorting: Determining which order to use to present information is important because most
people today suffer from information overload, and putting information in order is one way
to reduce the onrush of data.
3. Transforming: Converting one sort of data to another sort of data is critical to
understanding and using the data effectively.
4. Scheduling: Making the use of resources fair to all concerned is another use-case of
algorithms. For example, timing lights at intersections are no longer simple devices that
count down the seconds between light changes. Modern devices consider all sorts of issues,
such as the time of day, weather conditions, and flow of traffic. Scheduling comes in many
forms, however. For example, consider how your computer runs multiple tasks at the same
time. Without a scheduling algorithm, the operating system might grab all the available
resources and keep your application from doing any useful work.
5. Graph Analysis: Deciding on the shortest path between two points finds all sorts of uses.
For example, in a routing problem, your GPS couldn’t function without this particular
algorithm because it could never direct you along city streets using the shortest route from
point A to point B.

6. Cryptography: Keeping data safe is an ongoing battle with hackers constantly attacking
data sources. Algorithms make it possible to analyze data, put it into some other form, and
then return it to its original form later.
7. Pseudorandom number generation: Imagine playing games that never varied. You start
at the same place and perform the same steps, in the same manner, every time you play.
Without the capability to generate seemingly random numbers, many computer tasks
become impossible.

The most important issue to consider when working with algorithms is that given a particular input,
you should expect a specific output. Secondary issues include how many resources the algorithm
requires to perform its task and how long it takes to complete the task. Depending on the kind of
issue and the sort of algorithm used, you may also need to consider issues of accuracy and
consistency.

There are various types of algorithms, and they can be classified based on their purpose, design
methodology, and application.

1. Sorting Algorithms: Bubble Sort, Selection Sort, Insertion Sort, Merge Sort, Quick Sort,
Heap Sort, Radix Sort
2. Searching Algorithms: Linear Search, Binary Search, Hashing Algorithms (for hash
tables)
3. Graph Algorithms: Breadth-First Search (BFS), Depth-First Search (DFS), Dijkstra's
Algorithm

Sorting Algorithms

1. Bubble Sort: Bubble sort is a sorting algorithm that repeatedly steps through the list to be
sorted, comparing adjacent items and swapping them if they are in the wrong order. At the
end of each pass, the largest element will be in its final position, and all of the other
elements will have been moved closer to their correct positions. (Examples in Class note).
2. Insertion Sort: This is a sorting algorithm that builds a sorted list from an unordered list
by inserting elements one at a time. The insertion sort algorithm works by taking a list of
items and inserting them one at a time into a sorted list. It does this by finding the position
of the item to be inserted and then inserting it into the list at that position. (Examples in
Class note).

3. Selection Sort: This is a sorting algorithm that takes an unsorted list of items and organizes
them in ascending order. It does this by selecting the smallest item from the list and placing
it at the beginning of the list. It then repeats this process for the remaining items in the list.
(Examples in Class note).
4. Merge sort: This is a comparison-based sorting algorithm that sorts data using the divide
and conquer technique. The algorithm splits the data set into halves, sorts each half, and
then merges the two sorted halves. (Examples in Class note).
5. Quick sort: This is a divide-and-conquer sorting algorithm. It selects one element of the
array as the pivot and partitions the remaining elements into two sub-arrays: one containing
the elements smaller than the pivot and one containing the elements larger than it. It then
recursively sorts the two sub-arrays; once the recursion reaches sub-arrays of length zero
or one, the whole array is sorted.
6. Heap Sort: This is a sorting algorithm that works by first creating a heap out of the
unsorted list, and then iteratively removing the largest element from the heap and inserting
it into the correct position in the sorted list.
7. Radix Sort: This is a non-comparison sorting algorithm that sorts integers digit by digit,
typically from the least significant digit to the most significant, grouping elements by the
value of the digit at each position.
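The class note works through these sorts on concrete examples. As a supplement, the following are minimal Python sketches of bubble, insertion, selection, merge, and quick sort; they are illustrative implementations, not the exact versions from the class note.

```python
def bubble_sort(items):
    """Repeatedly swap adjacent out-of-order pairs; the largest floats to the end."""
    n = len(items)
    for i in range(n - 1):
        # After pass i, the last i elements are already in their final places.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

def insertion_sort(items):
    """Grow a sorted prefix, inserting each new element at its proper position."""
    for i in range(1, len(items)):
        key, j = items[i], i - 1
        while j >= 0 and items[j] > key:  # shift larger elements one slot right
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key
    return items

def selection_sort(items):
    """Repeatedly select the smallest remaining element and move it to the front."""
    n = len(items)
    for i in range(n - 1):
        smallest = min(range(i, n), key=items.__getitem__)
        items[i], items[smallest] = items[smallest], items[i]
    return items

def merge_sort(items):
    """Split in half, sort each half, then merge the two sorted halves."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

def quick_sort(items):
    """Partition around a pivot, then recursively sort each partition."""
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    return (quick_sort([x for x in items if x < pivot])
            + [x for x in items if x == pivot]
            + quick_sort([x for x in items if x > pivot]))
```

Each function sorts in ascending order; the first three sort in place, while merge and quick sort return new lists.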

Searching Algorithms

Searching Algorithms are designed to check for an element or retrieve an element from any data
structure where it is stored.

Linear Search is defined as a sequential search algorithm that starts at one end of a list and
checks each element in turn until the desired element is found or the end of the data set is
reached. In the Linear Search Algorithm,

• Every element is considered as a potential match for the key and checked for the same.
• If any element is found equal to the key, the search is successful, and the index of that
element is returned.
• If no element is found equal to the key, the search yields “No match found”.

Advantages of Linear Search

• Linear search can be used irrespective of whether the array is sorted or not. It can be used
on arrays of any data type.
• Does not require any additional memory.
• It is a well-suited algorithm for small datasets.

Drawbacks of Linear Search

• Linear search has a time complexity of O(N), which in turn makes it slow for large datasets.
• Not suitable for large arrays.

Example array: 8 7 6 5

(Examples in Class note).
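A minimal Python sketch of linear search (illustrative, not the class-note version), run against the example array above:

```python
def linear_search(items, key):
    """Scan from the first element; return the index of key, or -1 for no match."""
    for index, value in enumerate(items):
        if value == key:
            return index   # search successful
    return -1              # "No match found"

print(linear_search([8, 7, 6, 5], 6))  # → 2
```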

Binary Search is defined as a searching algorithm used in a sorted array by repeatedly dividing
the search interval in half. The idea of binary search is to use the information that the array is sorted
and reduce the time complexity to O(log N). To apply Binary Search algorithm, the data structure
must be sorted and access to any element of the data structure must take constant time.

Steps in Binary Search:

• Divide the search space into two halves by finding the middle index “mid”.
• Compare the middle element of the search space with the key.
• If the key is found at middle element, the process is terminated.
• If the key is not found at middle element, choose which half will be used as the next search
space.
• If the key is smaller than the middle element, then the left side is used for next search.
• If the key is larger than the middle element, then the right side is used for next search.
• This process is continued until the key is found or the total search space is exhausted.

Advantages of Binary Search

• Binary search is faster than linear search, especially for large arrays.

• More efficient than other searching algorithms with a similar time complexity, such as
interpolation search or exponential search.
• Binary search is well-suited for searching large datasets that are stored in external memory,
such as on a hard drive or in the cloud.

Drawbacks of Binary Search

• The array should be sorted.
• Binary search requires constant-time (random) access to elements, for example an array
stored in contiguous memory locations.
• Binary search requires that the elements of the array be comparable, meaning that they
must be able to be ordered.

Applications of Binary Search

• Binary search can be used as a building block for more complex algorithms used in
machine learning, such as algorithms for training neural networks or finding the optimal
hyperparameters for a model.
• It can be used for searching in computer graphics such as algorithms for ray tracing or
texture mapping.
• It can be used for searching a database.

Example sorted array: 2 7 9 11 20 25 27 50 51 60

(Examples in Class note).
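The steps above can be sketched as a short Python routine (an illustrative implementation, not the class-note version), run against the example sorted array:

```python
def binary_search(items, key):
    """Search a sorted list by repeatedly halving the search interval."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == key:
            return mid          # key found at the middle element
        elif key < items[mid]:
            high = mid - 1      # continue in the left half
        else:
            low = mid + 1       # continue in the right half
    return -1                   # search space exhausted

print(binary_search([2, 7, 9, 11, 20, 25, 27, 50, 51, 60], 25))  # → 5
```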

Graph Algorithms

Graph algorithms are a set of techniques used to analyze, traverse, and manipulate graphs, which
are mathematical structures consisting of vertices (nodes) connected by edges. These algorithms
are essential in various fields, including computer science, network analysis, operations research,
and social network analysis.

1. Depth-First Search (DFS)

DFS is a searching algorithm used to traverse or search a graph or tree. It is often used to find
connected components or cycles. DFS traverses deep into the graph before exploring neighbors at
the same level. This property makes it useful for tasks such as finding connected components,
detecting cycles in graphs, and performing topological sorting. Using the example worked in the
class note, the steps in DFS are:

1. In the beginning, all vertices are unvisited, and all edges are undiscovered.
2. Choose an arbitrary start vertex (A) and visit it.
3. The vertex being visited is called the current vertex.
4. From the current vertex, take any undiscovered edge.
5. If the adjacent node that follows is unvisited, visit it.
6. Mark the edge traversed as discovered.

7. If the vertex that follows the discovered edge has already been visited, mark the edge as a
back edge.
8. If we arrive at a vertex with no undiscovered edges, backtrack to the parent of that vertex.

Walkthrough (diagrams in class note): starting from A, visit B, then C, then D. The edge from
D to A leads to a visited vertex, so it is marked as a back edge. D has no undiscovered edges,
so backtrack to C and visit E. Vertex A has already been visited, so the edge from E to A also
becomes a back edge. E has no undiscovered edges, so backtrack to its parent C; C has no
undiscovered edges, so backtrack to B; B has no undiscovered edges, so backtrack to A. When
backtracking reaches the start vertex, stop.
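The traversal above can be sketched in Python as a recursive DFS over a directed graph stored as an adjacency list. The graph below is an assumption reconstructed from the class-note walkthrough (A→B, B→C, C→D and C→E, with D and E pointing back to A), not the diagram itself.

```python
def dfs(graph, start):
    """Recursive depth-first traversal of a directed adjacency-list graph.
    Returns the visit order and the edges that led to an already-visited
    vertex (back edges)."""
    visited, order, back_edges = set(), [], []

    def explore(u):
        visited.add(u)
        order.append(u)
        for v in graph[u]:
            if v not in visited:
                explore(v)                  # discover a new vertex, go deeper
            else:
                back_edges.append((u, v))   # edge into a visited vertex

    explore(start)
    return order, back_edges

# Graph assumed from the walkthrough above.
graph = {'A': ['B'], 'B': ['C'], 'C': ['D', 'E'], 'D': ['A'], 'E': ['A']}
order, back_edges = dfs(graph, 'A')
print(order)       # ['A', 'B', 'C', 'D', 'E']
print(back_edges)  # [('D', 'A'), ('E', 'A')]
```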

2. Breadth-First Search (BFS)

BFS explores all the vertices in a graph that are reachable from a given source vertex. It traverses
the graph level by level, starting from the source vertex. BFS is often used to find the shortest path
between two vertices in an unweighted graph and to determine connectivity in graphs.

Steps in BFS

Initialization: BFS begins by selecting a starting node and marking it as visited. This starting node
is typically referred to as the root node. It also initializes a queue data structure to keep track of
the nodes to be explored.

Exploration of Neighbors: BFS explores all the neighboring nodes of the current node.
Neighboring nodes are those that are directly connected to the current node by an edge in the graph
and have not yet been visited. It examines all neighbors before moving on to the neighbors of the
neighbors.

Queue Mechanism: BFS utilizes a queue to keep track of the nodes that need to be explored.
After visiting a node and exploring its neighbors, BFS adds these neighbors to the queue.

Level-wise Exploration: BFS explores the graph level by level. This means that it explores all the
nodes at a given distance (or depth) from the starting node before moving on to nodes at a greater
distance.

Marking Nodes as Visited: To avoid visiting the same node multiple times, BFS marks each
visited node as "visited" or "discovered" once it is added to the queue. This ensures that nodes are
not revisited.

Termination: BFS continues the exploration process until it has visited all reachable nodes or
until it reaches a termination condition, such as finding a target node (if the search is for a specific
node) or exhausting all nodes in the graph.

Output: The output of BFS typically includes the sequence of nodes visited during the traversal,
as well as any other relevant information based on the specific problem being solved.
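The steps above can be sketched as a short Python routine; this is an illustrative implementation using `collections.deque` as the queue, with the example graph chosen for demonstration.

```python
from collections import deque

def bfs(graph, start):
    """Level-by-level traversal from start; returns vertices in visit order."""
    visited = {start}          # mark the root as visited immediately
    queue = deque([start])     # queue of vertices waiting to be explored
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph[u]:
            if v not in visited:   # enqueue each unvisited neighbour once
                visited.add(v)
                queue.append(v)
    return order

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(graph, 'A'))  # ['A', 'B', 'C', 'D']
```

Note how B and C (distance 1 from A) are both visited before D (distance 2), reflecting the level-wise exploration described above.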

3. Dijkstra's Algorithm

Dijkstra's algorithm finds the shortest path between nodes in a graph with non-negative edge
weights. It maintains a priority queue to greedily select the shortest path.

Steps in Dijkstra's Algorithm

Initialization: Begin by selecting a source node from which to start the traversal. Set the distance
from the source node to itself as 0, and set the distance to all other nodes as infinity initially. Also,
maintain a priority queue (or a min-heap) to keep track of nodes to be explored, prioritized by their
current tentative distance from the source node.

Exploration of Neighbors: Start exploring the neighbors of the source node. For each neighbor,
calculate the tentative distance from the source node through the current node. Update the tentative
distance of the neighbor if this newly calculated distance is smaller than its current tentative
distance.

Relaxation of Edges: As you explore each node and its neighbors, update the tentative distances
of the neighboring nodes if a shorter path is found through the current node.

Priority Queue: After updating the tentative distances of neighboring nodes, add them to the
priority queue. The priority queue ensures that the node with the smallest tentative distance is
explored next.

Exploration Continues: Repeat steps 2-4 until all nodes have been visited or until the priority
queue becomes empty.

Termination: The algorithm terminates when all nodes have been visited or when the priority
queue becomes empty. At this point, the tentative distances from the source node to all other nodes
are finalized.

Path Reconstruction (Optional): If needed, you can reconstruct the shortest path from the source
node to any other node by backtracking through the nodes with the smallest tentative distances.
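The steps above can be sketched in Python using the standard-library `heapq` module as the priority queue; the example graph and its weights are chosen for illustration only.

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source in a graph with non-negative edge weights.
    graph: {vertex: [(neighbour, weight), ...]}"""
    dist = {v: float('inf') for v in graph}   # tentative distances
    dist[source] = 0
    pq = [(0, source)]                        # (tentative distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale entry; a shorter path was already found
        for v, w in graph[u]:
            if d + w < dist[v]:               # relax the edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 2), ('D', 5)],
         'C': [('D', 1)], 'D': []}
print(dijkstra(graph, 'A'))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

Note the relaxation at work: the direct edge A→C costs 4, but the path A→B→C costs 3, so C's tentative distance is updated when B is explored.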

Asymptotic analysis of upper and average complexity bounds

Asymptotic analysis is a method used in computer science and mathematics to analyze the
efficiency of algorithms or the resource usage of programs as the input size grows towards infinity.
It's a way of understanding how the performance of an algorithm scales with larger inputs. There
are three commonly used types of asymptotic analysis:

1. Big O notation (upper bound): This notation describes the worst-case scenario for the
algorithm's performance. It provides an upper bound on the growth rate of the algorithm's resource
usage (such as time or space) as a function of the input size. For example, if an algorithm has a
time complexity of O(n²), it means that the worst-case time it takes to run increases quadratically
as the input size (n) increases.

2. Big Omega notation (lower bound): This notation describes the best-case scenario for the
algorithm's performance. It provides a lower bound on the growth rate of the algorithm's resource

usage. For example, if an algorithm has a time complexity of Ω(n), it means that the best-case time
it takes to run increases linearly as the input size (n) increases.

3. Big Theta notation (tight bound): This notation describes both the upper and lower bounds
on the growth rate of the algorithm's resource usage, providing a tight bound on its performance.
For example, if an algorithm has a time complexity of Θ(n²), it means that both the best-case and
worst-case time complexities are quadratic.

When analyzing algorithms, we often focus on their worst-case performance, as it gives us a
guarantee on how the algorithm will behave under all possible inputs. However, average-case
analysis can also be important, especially when dealing with randomized algorithms or situations
where inputs are distributed in a certain way. In summary, asymptotic analysis provides a way to
understand how the efficiency or resource usage of an algorithm changes as the input size grows,
allowing us to make informed decisions about which algorithms to use for solving specific
problems. By applying asymptotic analysis, we gain valuable insights into the scalability and
efficiency of algorithms, enabling us to choose the most appropriate algorithm for solving specific
problems and optimizing our computational resources effectively.
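To make these bounds concrete, the sketch below counts the array probes performed by a linear scan (O(N) in the worst case) against binary search (O(log N) in the worst case) on the same sorted input; the function names are illustrative.

```python
def probes_linear(items, key):
    """Number of elements a linear scan examines before finding key."""
    count = 0
    for value in items:
        count += 1
        if value == key:
            break
    return count

def probes_binary(items, key):
    """Number of middle elements binary search examines on a sorted list."""
    count, low, high = 0, 0, len(items) - 1
    while low <= high:
        count += 1
        mid = (low + high) // 2
        if items[mid] == key:
            break
        elif key < items[mid]:
            high = mid - 1
        else:
            low = mid + 1
    return count

# Worst case: searching for the last element of a sorted list of 1024 items.
items = list(range(1024))
print(probes_linear(items, 1023))  # 1024 probes — grows linearly with N
print(probes_binary(items, 1023))  # 11 probes — grows logarithmically with N
```

Doubling the input size doubles the linear count but adds only one probe to the binary count, which is exactly the O(N) versus O(log N) distinction described above.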

Practice Questions

- Explain the conditions a process must fulfil before it can be considered an algorithm.
- List and explain six (6) use cases of algorithms.
- Using real world examples, list and explain three (3) sorting algorithms.
- Use linear search to locate 4 in the array:

6 9 4 5

- List and explain the steps in BFS.


- List and explain the steps in searching using Dijkstra's Algorithm.
