
Introduction to Computing and Algorithms
Chapter 05 CS1003
Algorithm efficiency
Semester 1 / 1446
Instructor: Dr. Majzoob Kamalaldin Omer

1.1 Algorithms

An algorithm is a generic, step-by-step list of instructions for solving a problem.
It is a method for solving any instance of the problem such that, given a particular input, the algorithm produces the desired result.
A program, on the other hand, is an algorithm that has been encoded into some programming language.
Algorithm Analysis
• There is an important difference between a program and the
underlying algorithm that the program is representing.
• There may be many programs for the same algorithm,
depending on the programmer and the programming language
being used.
• Algorithm analysis is concerned with comparing algorithms
based upon the amount of computing resources that each
algorithm uses.
• We want to be able to consider two algorithms and say that
one is better than the other because it is more efficient in its
use of those resources, or perhaps because it simply uses
fewer of them.
Computing Resources

Computing resources can be the amount of space or memory an algorithm requires to solve the problem.
The problem instance itself typically dictates the amount of space required by a problem solution.
Some algorithms have particular space requirements.
We can also analyze and compare algorithms based on the amount of time they require to execute.
This measure is sometimes referred to as the "execution time" or "running time" of the algorithm.
Big-O Notation

We want to characterize an algorithm's efficiency in terms of execution time, independent of any particular program or computer.
A good basis for this is the number of operations or steps that the algorithm will require.
If each of these steps is considered to be a basic unit of computation, then the execution time for an algorithm can be expressed as the number of steps required to solve the problem.
Deciding on an appropriate basic unit of computation can be a complicated problem and will depend on how the algorithm is implemented.
Big-O Notation

The exact number of operations is not as important as determining the most dominant part of the step-count function.
In other words, as the problem gets larger, some portion of that function tends to overpower the rest.
This dominant term is what, in the end, is used for comparison.
The order of magnitude function describes the part of the step-count function that increases the fastest as the value of n (the problem size) increases.
Order of magnitude is often written in Big-O notation ("O" for "order").
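
As an illustration (this specific function is an assumed example, not taken from the slides): suppose an algorithm requires T(n) = 5n² + 27n + 1005 steps. For small n the constant 1005 contributes the most, but as n grows the 5n² term quickly overwhelms the other two, so the dominant term is n² and the algorithm is said to be O(n²).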


Data and Algorithms

Sometimes the performance of an algorithm depends on the exact values of the data
rather than simply the size of the problem.

For these kinds of algorithms we need to characterize their performance in terms of:

best-case performance, worst-case performance, and average-case performance.
Algorithm Cases

The worst case performance refers to a particular data set where the algorithm performs especially poorly, whereas a different data set for the exact same algorithm might have extraordinarily good performance (the best case).
However, in most cases the algorithm performs somewhere in between these two extremes (the average case).
It is important for a computer scientist to understand these distinctions so they are not misled by one particular case.
Anagram Detection
Example
• A good example problem for showing algorithms
with different orders of magnitude is the anagram
detection problem for strings.
• One string is an anagram of another if the second is
simply a rearrangement of the first.
• For example, 'heart' and 'earth' are anagrams.
• The strings 'python' and 'typhon' are anagrams as
well.
Solving Anagram
Detection Problem
• Our goal is to write a boolean function that will
take two strings and return whether they are
anagrams.
• For the sake of simplicity, we will assume that the
two strings in question are of equal length and that
they are made up of symbols from the set of 26
lowercase alphabetic characters.
Solution 1: Checking Off

Check to see that each character in the first string actually occurs in the second.
If it is possible to "check off" each character, then the two strings must be anagrams.
Checking off a character will be accomplished by replacing it with a special value (e.g., '*').
The first step in the process will be to convert the second string to a list (or another mutable sequence of characters).
Each character from the first string can then be checked against the characters in the list and, if found, checked off by replacement.
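
A minimal C++ sketch of this idea, assuming (as stated earlier) that the two strings have equal length; the function name anagramCheckoff is our own label, not from the slides:

    #include <string>

    // Check whether s1 and s2 are anagrams by "checking off" matched characters.
    // s2 is taken by value so the function can overwrite matched positions with '*'.
    bool anagramCheckoff(const std::string& s1, std::string s2) {
        for (char c : s1) {
            bool found = false;
            for (char& d : s2) {       // scan the mutable copy of the second string
                if (d == c) {
                    d = '*';           // check this character off so it cannot be reused
                    found = true;
                    break;
                }
            }
            if (!found) return false;  // c has no unmatched counterpart in s2
        }
        return true;                   // every character of s1 was checked off
    }

Note that the inner scan over s2 is what produces the nested behaviour analyzed on the next slide.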
Analyzing Solution 1

Note that each of the n characters in the first word will cause an iteration through up to n characters in the list from the second word.
Each of the n positions in the list will be visited once to match a character from the first word.
The number of visits then becomes the sum of the integers from 1 to n.
As n gets large, the n² term of that sum dominates the step count.
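
Spelling out that dominance (a standard identity, added here for completeness):

    1 + 2 + ... + n = n(n+1)/2 = (1/2)n² + (1/2)n

As n grows, the (1/2)n² term overwhelms the (1/2)n term, so Solution 1 is O(n²).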
Solution 2: Sort and Compare

Even though word 1 and word 2 are different, they are anagrams only if they consist of exactly the same characters.
If we begin by sorting each string alphabetically, from a to z, we will end up with the same string if the original two strings are anagrams.
In many programming languages, we can use the built-in sort method on lists by simply converting each string to a list at the start.
Using the sort methods is not without its own cost.
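
A minimal sketch in C++ (the slides describe converting each string to a list and using a built-in sort; here std::sort is applied directly to copies of the strings, and the name anagramSort is our own):

    #include <algorithm>
    #include <string>

    // Sort copies of both strings; anagrams collapse to the same sorted string.
    bool anagramSort(std::string s1, std::string s2) {
        std::sort(s1.begin(), s1.end());
        std::sort(s2.begin(), s2.end());
        return s1 == s2;
    }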
Analyzing Solution 2

Using the sort methods is not without its own cost.
The sorting operations dominate the iteration.
In the end, this algorithm will have the same order of magnitude as that of the sorting process, which is typically O(n log n) (or O(n²) for simple sorts).
Solution 3: Brute Force

• A brute force technique for solving a problem typically tries to exhaust all possibilities.
• We can generate a list of all possible strings using the characters from word 1 and then see if word 2 occurs.
• However, there is a difficulty with this approach.
• When generating all possible strings from word 1:
  • There are n possible first characters.
  • n-1 possible characters for the second position.
  • n-2 for the third, and so on.
  • In total there are n × (n-1) × (n-2) × ... × 1 = n! candidate strings.
Analyzing Solution 3

If word 1 were 20 characters long, there would be 20! = 2,432,902,008,176,640,000 possible candidate strings.
If we processed one possibility every second, it would still take us 77,146,816,596 years to go through the entire list.
This is probably not going to be a good solution.
Solution 4: Count and Compare

Any two anagrams will have the same number of a's, the same number of b's, the same number of c's, and so on.
In order to decide whether two strings are anagrams, we will first count the number of times each character occurs.
Since there are 26 possible characters, we can use a list of 26 counters, one for each possible character.
Each time we see a particular character, we will increment the counter at that position.
In the end, if the two lists of counters are identical, the strings must be anagrams.
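
A minimal C++ sketch, assuming (as stated earlier) that both strings contain only the 26 lowercase letters; the name anagramCount is our own:

    #include <string>

    // Count how often each of the 26 lowercase letters occurs in each string,
    // then compare the two arrays of counters position by position.
    bool anagramCount(const std::string& s1, const std::string& s2) {
        int c1[26] = {0};
        int c2[26] = {0};
        for (char ch : s1) c1[ch - 'a']++;    // tally characters of the first string
        for (char ch : s2) c2[ch - 'a']++;    // tally characters of the second string
        for (int i = 0; i < 26; ++i) {
            if (c1[i] != c2[i]) return false; // any mismatched counter rules out an anagram
        }
        return true;
    }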
Analyzing Solution 4

The solution has a number of iterations; however, unlike the first solution, none of them are nested.
The first two iterations, used to count the characters, are both based on n.
The third iteration, which compares the two lists of counts, always takes 26 steps since there are 26 possible characters in the strings.
This solution therefore has linear order of magnitude, O(n), which is a good thing.
Analyzing All Solutions

Although the last solution was able to run in linear time, it could only do so by using additional storage to keep the two lists of character counts.
In other words, this algorithm sacrificed space in order to gain time.
On many occasions you will need to make decisions between time and space trade-offs.
In this case, the amount of extra space is not significant.
As a computer scientist, when given a choice of algorithms, it will be up to you to determine the best use of computing resources given a particular problem.
What is Big-O?

• Big-O notation is a way to describe how the time or space complexity of an algorithm grows as the size of the input increases. It gives us an upper bound on the running time, showing the worst-case scenario.
Why do we need it?

• We use Big-O to understand the efficiency of an algorithm and to compare which algorithm is better when working with large amounts of data.
Big-O Notation Basics

• Common Big-O notations:
  • O(1) (Constant Time): The algorithm's runtime doesn't change, regardless of the input size.
  • O(n) (Linear Time): The runtime grows directly with the input size.
  • O(n²) (Quadratic Time): The runtime grows with the square of the input size.
Examples of Big-O Notations in Algorithms

• O(1) - Constant Time: Accessing an element in an array by index.
• O(n) - Linear Time: Looping through an array.
• O(n²) - Quadratic Time: Comparing all pairs in a list.
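
Brief C++ sketches of the three cases above (the function names and container choices are illustrative assumptions, not from the slides):

    #include <cstddef>
    #include <vector>

    // O(1): accessing an element by index costs the same regardless of the array's size.
    int firstElement(const std::vector<int>& a) {
        return a[0];                        // assumes a is non-empty
    }

    // O(n): the loop body executes once per element.
    long long sumAll(const std::vector<int>& a) {
        long long total = 0;
        for (int x : a) total += x;
        return total;
    }

    // O(n²): the nested loops examine every pair of elements.
    int countEqualPairs(const std::vector<int>& a) {
        int count = 0;
        for (std::size_t i = 0; i < a.size(); ++i)
            for (std::size_t j = 0; j < a.size(); ++j)
                if (a[i] == a[j]) ++count;
        return count;
    }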


How to Calculate Big-O for an Algorithm

1. Identify the most time-consuming operation in your code.
2. Focus on the loops: how many times does the loop run?
   • Single loop? O(n)
   • Nested loops? O(n²)
3. Ignore constants: Big-O only focuses on how the algorithm scales as the input size increases. Constants are not important.
4. Look at the worst-case scenario to determine Big-O.
Example: calculate the Big-O for the following code.

Example Function
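
The code itself appeared as an image in the original slides; the listing below is a reconstruction assembled from the numbered statements analyzed in the following steps (the enclosing function processArray and its signature are assumptions):

    #include <iostream>
    using namespace std;

    void processArray(int arr[], int n) {
        int total = 0;                                   // Line 1
        if (n == 0) {                                    // Line 2
            return;                                      // Line 3
        }
        for (int i = 0; i < n; i++) {                    // Line 4
            total += arr[i];                             // Line 5
        }
        for (int j = 0; j < n; j++) {                    // Line 6
            for (int k = 0; k < n; k++) {                // Line 7
                cout << arr[j] + arr[k] << endl;         // Line 8
            }
        }
        if (n % 2 == 0) {                                // Line 9
            cout << "Even number of elements" << endl;   // Line 10
        }
    }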
Step-by-Step Time Complexity Calculation:

❑Line 1: int total = 0;
This is a simple assignment operation that runs in constant time.
Time complexity: O(1)
❑Line 2: if (n == 0) {
Checking if the array is empty is a constant-time operation.
Time complexity: O(1)
❑Line 3: return;
This line executes only if the condition is true (when n == 0), so it’s also a constant-
time operation.
Time complexity: O(1)
❑Line 4: for (int i = 0; i < n; i++) {
This loop runs n times because it iterates over each element in the array.
Time complexity: O(n)
❑Line 5: total += arr[i];
This operation (adding an element to total) runs once for each iteration of the loop, which means it runs
n times.
Time complexity: O(1) per iteration, and since it’s inside the loop, it runs O(n) in total.
Combined time complexity for Lines 4-5: O(n)
❑Line 6: for (int j = 0; j < n; j++) {
This outer loop runs n times.
Time complexity: O(n)
❑Line 7: for (int k = 0; k < n; k++) {
This inner loop runs n times for each iteration of the outer loop. So, the total number of iterations is
n * n = n².
Time complexity: O(n²)
❑Line 8: cout << arr[j] + arr[k] << endl;
This print statement runs once for every iteration of the inner loop, which means it executes
n² times.
Time complexity: O(1) per iteration, but since it’s inside a nested loop, it runs O(n²) in
total.
Combined time complexity for Lines 6-8: O(n²)
❑Line 9: if (n % 2 == 0) {

This condition checks whether the number of elements is even. It’s a constant-time
operation.

Time complexity: O(1)

❑Line 10: cout << "Even number of elements" << endl;

Printing a message is a constant-time operation.

Time complexity: O(1)


Overall Time Complexity

Lines    Operation                             Big-O Complexity
1        int total = 0;                        O(1)
2-3      If condition and return               O(1)
4-5      Loop through the array                O(n)
6-8      Nested loops and print sum            O(n²)
9-10     Check if number is even and print     O(1)

Overall time complexity: O(n²)


Thank you
