Unit 1.pptx-1

The document provides an overview of data structures and algorithms, defining data structures as specialized formats for organizing and managing data efficiently. It discusses algorithms as step-by-step instructions for problem-solving, emphasizing the importance of analyzing their time and space complexities using Big O notation and other asymptotic notations. Additionally, it compares recursive and non-recursive algorithms in terms of their performance and practical considerations.

AI435 - Data Structures and

Algorithms
INTRODUCTION

Data Structures
Data Structure (by definition):
A data structure is a specialized format or arrangement in which data is stored,
organized, and managed in a computer system.
It provides a way to represent, store, and manipulate information in a systematic
and efficient manner.
Data structures serve as the foundation for writing programs and algorithms,
enabling efficient data storage, retrieval, and manipulation operations.

In simpler terms:
Data structures are like containers for organizing and storing data in a computer so
that it can be used efficiently when needed.
They help optimize operations like searching, inserting, deleting, and retrieving
information.
7
Data Structures

8
[Diagram: common data structures and their defining characteristics]
Linked list: an ordered set containing a variable number of elements; elements are connected rather than stored in sequential memory locations.
Array: a fixed-size sequenced collection of elements of the same data type; data are arranged in sequential memory locations.
Record: a collection of logically related information.
Stack: operations are performed at one end only.
Queue: insert at one end and delete at the other end.
Tree: data organized into branches and subbranches.
Graph: a collection of nodes and connecting edges.
9
Unit 1
INTRODUCTION TO ALGORITHMS AND ANALYSIS

10
Fundamentals of algorithm
analysis

11
Fundamentals of algorithm analysis
An Algorithm:

An algorithm is a step-by-step set of instructions for solving a specific problem.
It's like a recipe for a computer to follow.

12
Fundamentals of algorithm analysis
Space and Time Complexity:

Space Complexity: This measures how much memory an algorithm needs to complete its task.
Time Complexity: This measures how long an algorithm takes to finish, based on the size of the input.

13
Fundamentals of algorithm analysis
Algorithm analysis:

We analyze algorithms to understand how they perform.

This helps us choose the best one for a given task.

14
Fundamentals of algorithm analysis
Big O Notation (O-notation):

Big O notation is a way to describe how the runtime or space requirements of an algorithm grow as the size of the input grows.

Example: Suppose you have a list of numbers, and you want to find the
largest one. If you have 10 numbers, it might take 10 comparisons. If
you have 1000 numbers, it might take 1000 comparisons. In Big O
notation, we say this algorithm is O(n) because the time it takes grows
linearly with the input size.

15
Fundamentals of algorithm analysis
Asymptotic Notations:

Besides Big O, there are other notations like Omega (Ω) and Theta (Θ). They
describe different aspects of algorithm performance.

Omega (Ω): This describes the best-case scenario for an algorithm. It tells us the
lower bound on the time or space required.

Theta (Θ): This describes a tight bound. It pins down the growth rate from both above and below, giving an exact order of growth.

Example: If an algorithm's best-case and worst-case times are both O(n), we say it
has a Theta(n) time complexity.

16
Fundamentals of algorithm analysis
Best Case, Worst Case, and Average Case:

Best Case: This is the scenario where the algorithm performs the fastest. It's
the most optimistic situation.
Worst Case: This is the scenario where the algorithm performs the slowest.
It's the most pessimistic situation.
Average Case: This is the expected performance over all possible inputs.
Example: Consider a sorting algorithm. For some inputs, it might already be
sorted (best case), while for others, it might be in reverse order (worst case).

17
Fundamentals of algorithm analysis
Non-Recursive and Recursive Algorithms:

Non-Recursive Algorithm: This is a set of steps that doesn't call itself. It solves a problem directly.
Recursive Algorithm: This is an algorithm that calls itself to solve smaller instances of the same problem.
Example: A factorial function can be defined recursively. For instance, 5! (5 factorial) is 5 * 4 * 3 * 2 * 1.

18
Space and time complexity
of an algorithm

20
Space Complexity:

21
Space Complexity:
Space complexity refers to the amount of memory an algorithm
requires to execute, and how this space requirement grows as the size
of the input increases.
It's important to note that space complexity doesn't just include the
input data, but also any additional memory that the algorithm needs to
perform its operations.

22
Different types of space complexity:
Constant Space (O(1)):
The space used by the algorithm remains the same, regardless of the
size of the input.
This means that the algorithm doesn't require additional memory as
the input grows.

Example: Accessing a single element in an array. It doesn't matter if the array has 10 elements or 1000; you only need a fixed amount of memory to store the index.

23
Different types of space complexity:
Linear Space (O(n)):

The space used by the algorithm grows linearly with the size of the
input.
This means that if the input size doubles, the space required also
doubles.

Example: Storing elements in an array where the size of the array is directly proportional to the size of the input.

24
Different types of space complexity:
Quadratic Space (O(n^2)), Cubic Space (O(n^3)), and so on:

These represent algorithms whose space requirements grow polynomially with the size of the input.
Example: Nested loops or multi-dimensional arrays where the space grows with the square, cube, etc., of the input size.

25
Time Complexity:

26
Time Complexity:
Time complexity refers to the amount of time an algorithm takes to
complete, and how this time requirement grows as the size of the input
increases.
It's an estimation of the number of basic operations (like comparisons
or assignments) an algorithm performs.

27
Different types of time complexity:
Constant Time (O(1)):
The time taken by the algorithm to complete is constant, regardless of
the size of the input.
This means it takes the same amount of time to run, no matter how
large the input is.

Example: Accessing an element in an array using its index.

28
Different types of time complexity:
Logarithmic Time (O(log n)):

The time taken by the algorithm grows logarithmically with the size of
the input.
This means as the input size increases, the time taken increases, but at
a decreasing rate.

29
Different types of time complexity:
Linear Time (O(n)):
The time taken by the algorithm is directly proportional to the size of
the input.
If the input size doubles, the time taken also doubles.

Example: A simple linear search in an unsorted list.

30
Different types of time complexity:
Quadratic Time (O(n^2)), Cubic Time (O(n^3)), and so on:

These represent algorithms whose time requirements grow polynomially with the size of the input.

Example: Nested loops, like in some sorting algorithms.

31
Different types of time complexity:
Exponential Time (O(2^n)), Factorial Time (O(n!)), and so on:

These represent algorithms whose time requirements grow extremely quickly with the size of the input.
Example: Some brute-force algorithms or algorithms that generate all possible combinations.

32
Different types of complexity:

33
Types of asymptotic notations
and orders of growth

34
Asymptotic Notations

35
Asymptotic notations and orders of growth
Asymptotic notations are a way to describe the performance of an
algorithm in terms of its growth rate or efficiency as the input size
increases.
They express the performance of algorithms and are important for comparing and selecting algorithms for different tasks.

36
Asymptotic notations and orders of growth
• Big O Notation (O-notation)
• Omega Notation (Ω-notation)
• Theta Notation (Θ-notation)

37
Big O Notation (O-notation)
Definition:
Big O notation describes the upper bound of an algorithm's time or
space complexity. It gives an upper limit on the growth rate of an
algorithm.

Example: If an algorithm has a time complexity of O(n), it means the algorithm's running time grows linearly or less than linearly with the input size.

56
Omega Notation (Ω-notation)
Definition:
Omega notation describes the lower bound of an algorithm's time or
space complexity.
It gives a lower limit on the growth rate of an algorithm.

Example: If an algorithm has a time complexity of Ω(n), it means the algorithm's running time grows at least linearly with the input size.
Orders of Growth: The same classes as Big O notation, but interpreted as lower bounds.

57
Theta Notation (Θ-notation):
Definition:
Theta notation provides a tight bound on an algorithm's time or space
complexity.
It indicates both the upper and lower limits, giving an exact description
of the algorithm's behavior.

Example: If an algorithm has a time complexity of Θ(n), it means the algorithm's running time grows exactly linearly with the input size.
Orders of Growth: The same classes as Big O notation, but interpreted as tight bounds.
58
Big O Notation (O-notation): Orders of
Growth:
O(1): Constant time complexity
O(log n): Logarithmic time complexity
O(n): Linear time complexity
O(n^2): Quadratic time complexity
O(n^3): Cubic time complexity
O(2^n): Exponential time complexity

60
Algorithm efficiency

67
Algorithm efficiency
Algorithm efficiency refers to how well an algorithm performs in terms
of time or space as the input size grows.
Three common scenarios for analyzing algorithm efficiency are:
• Best case
• Worst case
• Average case

68
Algorithm efficiency : Average Case
Definition: The average case scenario represents the expected
performance of an algorithm over all possible inputs. It considers the
scenario in which the algorithm performs with an average amount of
resources, taking into account the likelihood of different inputs.
Example: For a linear search on a list with a uniformly random
distribution of the target element, the average case would be
approximately half the size of the list. This is because, on average, the
target element is expected to be found around the middle of the list.
Usefulness: Average case analysis provides a more realistic view of how
an algorithm is likely to perform in typical situations.

69
Algorithm efficiency : Average Case, example
Suppose we have a list of 10 numbers:
[7,2,9,4,1,8,3,5,6,10]
Here's how the linear search would progress:
1. Comparison 1: Check the first element, which is 7. It's not the target.
2. Comparison 2: Check the second element, which is 2. It's not the target.
3. Comparison 3: Check the third element, which is 9. It's not the target.
4. Comparison 4: Check the fourth element, which is 4. We found the target!
In this example, the target element (4) was found after the fourth comparison.
Averaged over all possible positions of the target in a 10-element list, a linear search needs about (10 + 1) / 2 ≈ 5.5 comparisons, which is why the average case is roughly half the list size.

70
Algorithm efficiency : Best Case
Definition: The best-case scenario represents the most favorable
situation for an algorithm. It considers the scenario in which the
algorithm performs with the least amount of resources (e.g., time,
space).
Example: For a linear search, the best case occurs when the target
element is the first element in the list. In this case, the algorithm only
needs to perform one comparison. (if the target is 7)
Usefulness: Best case analysis is helpful for understanding the lower
bound of an algorithm's performance. It provides insight into how well
the algorithm can potentially perform under ideal circumstances.

71
Algorithm efficiency : Worst Case
Definition: The worst case scenario represents the least favorable situation
for an algorithm. It considers the scenario in which the algorithm requires the
maximum amount of resources (e.g., time, space).
Example: The worst case occurs when the target element is at the end of the list or not present at all. For instance, if the target is 10 (the last element of the earlier list) or a value not in the list, the linear search must check every element, performing the maximum number of comparisons.
Usefulness: Worst case analysis is crucial for understanding the upper bound
of an algorithm's performance. It provides a guarantee that the algorithm will
not perform worse than a certain level under any circumstances.
72
Analysis of non-recursive and
recursive algorithms

73
Analysis of non-recursive and recursive
algorithms
Analyzing non-recursive and recursive algorithms involves
evaluating their time complexity, space complexity, and practical
considerations.

74
Recursive Algorithm:

75
Recursive Algorithm:
Time Complexity:
The time complexity of a recursive algorithm depends on the
number of recursive calls made and the work done in each call.
Recursive algorithms often involve repeated sub-problems.
Use recurrence relations or recurrence trees to analyze the time
complexity.
Express the time complexity using Big O notation.

76
Recursive Algorithm:
Space Complexity:
Recursive algorithms use the call stack to keep track of function
calls. Analyze the space used in the call stack.
Consider any additional data structures used within the recursion.

77
Recursive Algorithm: Example
Calculating Factorial Recursively in C:

#include <stdio.h>

int factorial(int n) {
    if (n == 0 || n == 1) {
        return 1;
    } else {
        return n * factorial(n - 1);
    }
}

int main() {
    int result = factorial(5);
    printf("Factorial: %d\n", result);
    return 0;
}

Time Complexity: The time complexity is O(n) because there are n recursive calls to calculate the factorial of n.
Space Complexity: The space complexity is O(n) as well, since the maximum depth of the recursive calls is n.
78
Non-Recursive Algorithm

79
Non-Recursive Algorithm:
Time Complexity:
Analyze the number of basic operations (comparisons,
assignments, etc.) the algorithm performs as a function of the
input size.
Express the time complexity using Big O notation (e.g., O(n) for
linear time complexity).
For the iterative factorial example that follows, the time complexity is O(n) because the loop runs n times.

80
Non-Recursive Algorithm:
Space Complexity:
Evaluate the additional memory required by the algorithm as the
input size increases.
Consider variables, data structures, and any auxiliary space used
during execution.
For the iterative factorial example that follows, the space complexity is O(1) because only a fixed set of variables is used, regardless of n.

81
Non-Recursive Algorithm: Example
Calculating Factorial Non-Recursively in C:

#include <stdio.h>

int factorialNonRecursive(int n) {
    int result = 1;
    for (int i = 1; i <= n; i++) {
        result *= i;
    }
    return result;
}

int main() {
    int result = factorialNonRecursive(5);
    printf("Factorial: %d\n", result);
    return 0;
}

Time Complexity: The time complexity is still O(n) because there's a loop that iterates n times.
Space Complexity: The space complexity is O(1) because the algorithm doesn't rely on the call stack; it uses a constant amount of space regardless of the input size.
82
Recursive Algorithm:
Practical Considerations:
Be mindful of potential stack overflow errors for very deep
recursion.
Consider memoization or dynamic programming techniques to
optimize recursive algorithms.

83
Comparison

84
Comparison:
Recursive algorithms can sometimes lead to exponential time
complexity, especially if not implemented efficiently (e.g.,
Fibonacci sequence without memoization).
Non-recursive algorithms tend to have more predictable time
complexity based on their specific logic.

85
Unit 1
ENDS

86
