Lecture 2
FACULTY:
DEPARTMENT: COMPUTER SCIENCE
PROGRAMME: B.Sc. (Hons) COMPUTER SCIENCE
Course code: CSC 413; Course title: Analysis of Algorithms; Credit unit: 2; Course status: C
Lecturer’s Data:
Office Location:
Consultation Hours:
Course content
CSC 413 Analysis of Algorithms; 2 units (C)
Review of the concepts of Big O. Introduction to algorithms and complexity; time and space complexity; computational problems: solvable, unsolvable, impractical, intractable; classes of complexity: P, NP, NP-complete, etc. How to analyse algorithms: examples from fundamental algorithms (sorting and searching) and other algorithms: combinatorial and cryptographic algorithms and protocols; solved problems are required for illustration. Complexity measurement metrics: Halstead, cyclomatic, lines of code, etc.
Analysis of Algorithms
Algorithm analysis is the process of assessing the efficiency and
performance characteristics of algorithms.
Time complexity analysis provides an estimate of an algorithm's running time as a function of the input
size.
You can use Big O notation to express an upper bound on an algorithm's running time in terms of the input size. For example, a linear search over a list:

def linear_search(arr, target):
    # Check each element in turn until the target is found
    for i in range(len(arr)):
        if arr[i] == target:
            return i
    # Target not present in the list
    return -1

# In this example, the time complexity is O(n) for linear search.
•Profiling Tools:
Python offers various profiling tools that can help you analyze code performance more systematically. The cProfile module is one such tool.
Profiling tools provide detailed information about function calls and execution times (separate tools, such as tracemalloc, report memory usage).

import cProfile

def my_function():
    # Placeholder workload to profile (replace with your own code)
    return sum(i * i for i in range(10000))

cProfile.run('my_function()')
•Benchmarking Libraries:
Python also has libraries like timeit and perfplot that simplify the
process of benchmarking and comparing different algorithms or
code snippets.
import timeit

def my_function():
    # Placeholder workload to benchmark (replace with your own code)
    return sum(range(1000))

# Measure the total execution time of 1000 calls to my_function
execution_time = timeit.timeit(my_function, number=1000)
•Plotting and Visualization:
Visualization tools like Matplotlib can help you plot and visualize the running times of different algorithms for various input sizes. This can aid in understanding how an algorithm scales with input size.
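As an illustrative sketch (the helper function and sizes here are examples, not part of the lecture), you might collect the running times of a linear search over increasing input sizes; the resulting lists are exactly what you would pass to Matplotlib's plot function:

```python
import time

def linear_search(arr, target):
    # Worst-case scan: if the target is absent, every element is checked
    for i in range(len(arr)):
        if arr[i] == target:
            return i
    return -1

sizes = [1000, 2000, 4000, 8000]
times = []
for n in sizes:
    data = list(range(n))
    start = time.perf_counter()
    linear_search(data, -1)  # -1 is never in data, so this is the worst case
    times.append(time.perf_counter() - start)

# (sizes, times) is what you would hand to
# matplotlib.pyplot.plot(sizes, times) to visualise how the time grows.
```

Because each measurement covers the whole input, doubling the input size should roughly double the measured time for this O(n) algorithm.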
Experimental analysis of running times, while useful, has important limitations:
i. Experimental running times of two algorithms are difficult to compare unless the experiments are performed in the same hardware and software environments.
ii. Experiments can be done only on a limited set of test inputs; hence, they leave out the running times of inputs not included in the experiment (and these inputs may be important).
iii. An algorithm must be fully implemented in order to execute it and study its running time experimentally.
•This last requirement is the most serious drawback to the use of experimental studies.
At early stages of design, when considering a choice of data structures
or algorithms, it would be foolish to spend a significant amount of
time implementing an approach that could easily be deemed inferior
by a higher-level analysis.
Review of the concepts of Big oh
•Big O notation is a mathematical notation used in computer science to analyse the time and space complexity of algorithms.
1. Definition: Big O notation, often denoted as O(f(n)), describes the upper bound of the growth rate of an algorithm's time
or space complexity in relation to the input size (n). It helps to characterize the worst-case behavior of an algorithm.
2. Asymptotic Analysis: Big O notation focuses on the asymptotic behavior of an algorithm. In other words, it describes
how the resource usage (time or space) of an algorithm scales as the input size becomes arbitrarily large. It doesn't
concern itself with constant factors or lower-order terms.
3. Types of Complexity:
i. Time Complexity: This describes how the running time of an algorithm increases with input size. For example, O(n) indicates linear time, O(log n) implies logarithmic time, and O(n^2) signifies quadratic time complexity.
ii. Space Complexity: This characterizes how the memory consumption of an algorithm grows
with the input size.
4. Best, Average, and Worst Case: Big O notation is often used to describe the worst-case time or space
complexity of an algorithm. Sometimes, it's also used for the average or best case, but it's most
commonly associated with the worst case because it provides a guaranteed upper bound.
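For instance (an illustrative sketch, not from the lecture), counting the comparisons a linear search performs makes the gap between best and worst case concrete:

```python
def count_comparisons(arr, target):
    # Return the number of comparisons a linear search performs
    comparisons = 0
    for value in arr:
        comparisons += 1
        if value == target:
            break
    return comparisons

data = list(range(100))
best = count_comparisons(data, 0)    # target is the first element: 1 comparison
worst = count_comparisons(data, -1)  # target absent: all 100 elements checked
```

The best case is a single comparison, but Big O's worst-case guarantee is the full n comparisons, which is why O(n) is the bound usually quoted.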
5. Common Notations:
i. O(1): Constant time complexity. The algorithm's execution time or space usage doesn't depend on
the input size.
ii. O(log n): Logarithmic time complexity, typical of efficient algorithms like binary search.
iii. O(n): Linear time complexity. The resource usage grows linearly with the input size.
iv. O(n log n): Common in algorithms like quicksort and mergesort.
v. O(n^2): Quadratic time complexity, common in inefficient sorting algorithms.
vi. O(2^n): Exponential time complexity, often seen in brute-force algorithms.
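To make the O(log n) entry concrete, here is a standard iterative binary search (a sketch assuming the input list is sorted); each step halves the remaining range, so at most about log2(n) iterations are needed:

```python
def binary_search(arr, target):
    # Repeatedly halve the search range [low, high] of a sorted list
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1   # target is in the upper half
        else:
            high = mid - 1  # target is in the lower half
    return -1

data = list(range(1024))
result = binary_search(data, 513)  # found after at most log2(1024) = 10 halvings
```

Contrast this with the linear search earlier: for 1024 elements, binary search needs at most about 10 comparisons where linear search may need 1024.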
6. Big O Rules:
i. Addition Rule: If an algorithm consists of multiple sequential parts, the overall complexity is determined by the part with the highest Big O notation.
ii. Multiplication Rule: If an algorithm has nested loops or recursive calls, you multiply the complexities together.
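As an illustrative sketch of both rules: the function below performs an O(n) pass followed by an O(n^2) nested loop. The nested loops show the multiplication rule (n iterations times n iterations gives n^2), and the addition rule says the total, n + n^2, is dominated by the larger term, so the whole function is O(n^2):

```python
def count_operations(n):
    ops = 0
    # O(n) part: a single pass over the input
    for _ in range(n):
        ops += 1
    # O(n^2) part: nested loops multiply, n * n iterations
    for _ in range(n):
        for _ in range(n):
            ops += 1
    return ops  # n + n**2 operations in total
```

For n = 10 this counts 10 + 100 = 110 operations; as n grows, the n^2 term dominates, matching the addition rule.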
7. Notations Beyond Big O:
i. Omega (Ω): Describes the lower bound of an algorithm's
complexity.
ii. Theta (Θ): Represents both the upper and lower bounds,
providing a tight bound on an algorithm's complexity.
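Formally, these three bounds can be stated as follows (the standard textbook definitions, written out for reference):

```latex
% f is bounded above by g (up to a constant factor) for all large n
f(n) \in O(g(n)) \iff \exists\, c > 0,\ n_0 \ge 1 :\ f(n) \le c\, g(n) \ \text{for all } n \ge n_0

% f is bounded below by g (up to a constant factor) for all large n
f(n) \in \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \ge 1 :\ f(n) \ge c\, g(n) \ \text{for all } n \ge n_0

% f is bounded both above and below by g
f(n) \in \Theta(g(n)) \iff f(n) \in O(g(n)) \ \text{and}\ f(n) \in \Omega(g(n))
```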
8. Big O Use Cases: Big O notation is essential for comparing and
analyzing algorithms. It helps determine the efficiency of an
algorithm and choose the most suitable one for a given problem.
• In summary, Big O notation is a valuable tool for understanding the
performance characteristics of algorithms. It allows developers and
computer scientists to make informed decisions about which algorithm
to use for a particular problem and helps ensure that computational
resources are used efficiently, especially as the input size grows.