Algorithms Intro

The document provides a comprehensive overview of algorithms, including their definitions, efficiency measurements, and complexities. It discusses various types of algorithms, such as searching and sorting algorithms, and introduces concepts like recursion and asymptotic notations. Additionally, it highlights the importance of analyzing algorithms based on time and space complexity to determine their efficiency.


Deep Dive into

Algorithms
Get ready to scuba dive into
the world of algorithms
Algorithms - Recap
Algorithms ?
Algorithm - Definition

• Step-by-step instructions or procedures executed in a specific order to complete a specific task or to solve a problem efficiently.

• There may be more than one way to solve a problem, and hence more than one algorithm; the best algorithm ensures that the task is completed in an efficient manner.
Efficiency of an Algorithm
How to
measure ?
Efficiency Measurement - Experimental Studies

• Implement the algorithm, run the implementation with various test inputs, record the time spent during each execution, and then analyse the results.
Demo using a simple For Loop
Challenges with Experimental
Studies
• Experimental studies of running times are valuable, but for production quality code there are three major limitations to their use in algorithm analysis:
1. It is difficult to compare two algorithms unless they are run on the same hardware and software.
2. Experiments can be done only on a limited set of test inputs (which might exclude the running time of inputs not included in the experiment).
3. The algorithm needs to be fully implemented before its running time can be tested.
• This makes experimental analysis difficult to use during the design stage, since it expects an implemented algorithm, which is a major drawback.
What now?
Moving beyond Experimental Analysis

• Develop an approach to analyse the efficiency of algorithms that


A. Allow us to evaluate the relative efficiency of two algorithms
irrespective of the hardware and software used to implement
and execute them
B. Using a high level description of the algorithm (instead of its
implementation) to do the analysis
C. Take into account all possible inputs
What to measure?

• Time - Time taken to execute the algorithm


1. Best Time

2. Average Time

3. Worst Time

• Space - Memory required in terms of the data structures used in


the algorithm
Time & Space Complexity
• Time complexity is defined in terms of how long it takes to run a given algorithm, as a function of the length of the input.
• When an algorithm is run on a computer, it necessitates a certain amount of
memory space and the amount of memory an algorithm uses is identified by
the space complexity.
• A space-time or time-memory trade-off in computer science is a case where
an algorithm or program trades increased space (data storage) usage with
decreased time (computation or response time).
• The utility of a given space-time trade-off is affected by related fixed and
variable costs (of, e.g., CPU speed, storage space), and is subject to
diminishing returns.
Analysing an Algorithm

• Counting Primitive Operations

• Measuring operations as a function of its input size

• Focusing on the worst case scenario


Counting the Primitive Operations
• Low level instructions with an execution time that is constant. Examples:
1. Assigning a value to a variable (int x = 10;)
2. Following an object reference (student.name)
3. Performing an arithmetic operation (int sum = a + b;)
4. Comparing two numbers ( if (a > b) then {} else {} )
5. Accessing a single element of an array by index (int n = intArr[10];);
6. Calling a method (add(10,20);)
7. Returning from a method (return sum;)
• Count the number of primitive operations that are executed and use this as the
number t, instead of the time taken for each primitive operation execution
Operations as function of input size

• Associate each algorithm with a function f(x) that characterises


the number of primitive operations that are performed as a
function of n, where n is the input size.

• Example: f(n) = c, f(n) = n, etc.


Worst case scenario
• Algorithms may run faster on some
inputs than on others of the same
size. Hence, to characterise the
running time of an algorithm as a
function of input size n, one option is
to use the average over all possible
inputs of the same size.
• Average case analysis is usually
quite challenging, as we need a
distribution over inputs to arrive at it,
so we use worst case analysis
instead: it is easier to identify a
worst case input in any scenario.
Asymptotic Notations
• Mathematical notations used to describe the running time of an algorithm
when the input tends towards a particular value or a limiting value.
• Three asymptotic notations
• Big-O (O) – upper bound of the running time of an algorithm, or worst
case complexity
• Omega (Ω) – lower bound of the running time of an algorithm, or best
case complexity
• Theta (Θ) – encloses both the upper and lower bound (a tight bound),
commonly used for the average case complexity

Definition: A straight line which a curve approaches arbitrarily closely, as they go to infinity. The limit
of the curve, its tangent "at infinity".
Big O Functions
f(n)       Name          Explanation
1          Constant      f(n) = c
log n      Logarithmic   f(n) = log_b(n), where log_b(n) = x if and only if b^x = n and b > 1 (for us, b = 2)
n          Linear        f(n) = n
n log n    Log Linear    f(n) = n log n
n^2        Quadratic     f(n) = n^2
n^3        Cubic         f(n) = n^3
2^n        Exponential   f(n) = 2^n
n!         Factorial     f(n) = n!
Plot of Common Big O Functions
Constant Function
private static void simpleFunctionBasicOperations(int a,int b, int c) {
int sum = a + b;
int product = a * b * c;
int quotient = a * b / c;
System.out.println("\na + b = "+sum);
System.out.println("a * b * c = "+product);
System.out.println("a * b / c = "+quotient);
}
Also a Constant Function

private static void simpleFunctionWithArrayChanged(int[] nArr) {
    System.out.println("Simple Int Array To String is " + nArr.toString());
    for (int i = 0; i < 3; i++) { // fixed bound of 3 iterations, so constant time
        System.out.println("Class of array object is " + nArr[i] + " : " + nArr.getClass());
    }
}
Logarithmic Function
private static void doublingLoopVariable(int n) {
    int iteration = 1;
    for (int i = 1; i < n;) {
        System.out.println(" Iteration = " + iteration + " i = " + i);
        i = i * 2;   // loop variable doubles, so roughly log2(n) iterations
        iteration++;
    }
}
Another Logarithmic Function
private static void halvingLoopVariable(int n) {
    int iteration = 1;
    for (int i = n; i > 0;) {
        System.out.println(" Iteration = " + iteration + " i = " + i);
        i = i / 2;   // loop variable halves, so roughly log2(n) iterations
        iteration++;
    }
}
Linear Function
private static void simpleForLoop(int[] arr) {
System.out.println("For Loop");
for (int i =0 ; i< arr.length;i++) {
double sq = Math.pow(arr[i], 2);
System.out.println("Number : "+arr[i]+" Square : "+sq);
}
}
Another Linear Function
public static void twoForLoops(int[] arr) {
    System.out.println("For Loop");
    for (int i = 0; i < arr.length; i++) {
        double sq = Math.pow(arr[i], 2);
        System.out.println("Number : " + arr[i] + " Square : " + sq);
        for (int j = 0; j < 10000; j++) {
            // the inner loop runs a constant 10,000 times regardless of input
            // size, so the method is still linear: O(10000 * n) = O(n)
            sq = Math.pow(arr[i], 2);
        }
    }
}
Cubic Functions
Another Cubic Function
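The cubic slides are images in the original deck; a minimal sketch of what a cubic-time method looks like (class and method names are illustrative assumptions, not the slide's code):

```java
public class CubicDemo {
    // Three nested loops over the same input size: roughly n * n * n
    // iterations, so the running time grows as O(n^3).
    public static long countTriples(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                for (int k = 0; k < n; k++) {
                    count++;   // primitive operation executed n^3 times
                }
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println("Triples for n = 3 : " + countTriples(3)); // 27
    }
}
```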
Exponential Function

public static double pow(double c, double n) {
    if (n == 0) return 1;        // base case
    return c * pow(c, n - 1);    // recursive call (not Math.pow)
}
Factorial Function
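The factorial slide is also an image; a minimal sketch of a method whose work grows as O(n!), here by counting all permutations of n distinct items (names are illustrative assumptions):

```java
public class FactorialDemo {
    // Counts the permutations of n distinct items by building each one
    // recursively: n choices, then n-1, then n-2, ... so the work is O(n!).
    public static long countPermutations(boolean[] used, int remaining) {
        if (remaining == 0) return 1;   // one complete permutation built
        long total = 0;
        for (int i = 0; i < used.length; i++) {
            if (!used[i]) {
                used[i] = true;
                total += countPermutations(used, remaining - 1);
                used[i] = false;    // backtrack and try the next choice
            }
        }
        return total;
    }

    public static long countPermutations(int n) {
        return countPermutations(new boolean[n], n);
    }

    public static void main(String[] args) {
        System.out.println("Permutations of 4 items = " + countPermutations(4)); // 24
    }
}
```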
PROBLEMS
• Write a program to check whether the given number is Odd or Even.

• Write a program to print the sum of digits in the given number.

• Write a program to print the natural number series till the given number.

• Write a program to print the solid square pattern for the given number.

• Write a program to print the sum of digits of the numbers in the given list.
Day 2 - Recursion
Daily Challenge Questions
• Write a java program to multiply the digits of a given number

• Write a java program to print the first N prime numbers with a time complexity of O(N^2)

• Write a java program for finding the sum of squares of digits of a number with O(log N) time complexity

• Write a java program for finding Armstrong numbers with O(log N) time complexity
Day 1 - Refresher
• Time Complexity of Algorithms = number of primitive operations executed for a
given input size

• Asymptotic Notations: Big O (Worst case), Theta (Average case), Omega (Best case)

• 1 (Constant) < log N (Logarithmic) < N (Linear) < N log N (Log Linear) < N^2
(Quadratic) < N^3 (Cubic) < 2^N (Exponential) < N! (Factorial)
Adding a list of numbers
• Let us add the numbers 1, 3, 5, 7, 9

• We can also add them as ((((1+3)+5)+7)+9) or as (1+(3+(5+(7+9))))


• (1+(3+(5+(7+9)))) => (1+(3+(5+16))) => (1+(3+21)) => (1+24) => 25
• This approach of repeating the steps for a smaller problem is called
recursion.
What is Recursion?

• Recursion is a method of solving problems that involves breaking a


problem down into smaller and smaller subproblems until you get to a
small enough problem that it can be solved trivially.



Three Laws of Recursion
• It must have a base case (simplest instance of a problem,
consisting of a function that terminates the recursive function,
the base case evaluates the result when a given condition is
met).

• It must change its state and move towards the base case;
this is also called the recursive step.

• It must call itself, recursively (by passing in the input which


decreases in size or complexity).
Adding a list of numbers -
Recursively
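The recursive-sum slide is an image in the original deck; the grouping (1+(3+(5+(7+9)))) described above can be sketched as follows (names are illustrative assumptions):

```java
public class RecursiveSum {
    // The sum of the whole list is the first element plus the sum of the
    // rest of the list; an empty rest is the base case.
    public static int sum(int[] numbers, int index) {
        if (index == numbers.length) return 0;           // base case
        return numbers[index] + sum(numbers, index + 1); // recursive step
    }

    public static void main(String[] args) {
        int[] numbers = {1, 3, 5, 7, 9};
        System.out.println("Sum = " + sum(numbers, 0)); // 25
    }
}
```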
Pros and Cons
• Pros:
• Reduces the length of the code
• Makes it easier to read and write
• Problems can be solved quickly
• Decomposing the problem into smaller solvable problems
• Cons:
• Can be difficult to debug
• If not coded properly might lead to memory leaks or overflows
Types of Recursion
• Direct Recursion : When a function calls itself within the same function
repeatedly

• Indirect Recursion : When a function calls another function which calls the
first function

• Tail Recursion : If the function is calling itself and the recursive call is the last
statement executed by the function

• Non Tail/Head Recursion : If the function is calling itself and the recursive
call is the first statement of the function

• Linear Recursion : If the recursive function is called only once


• Tree Recursion : If the recursive function calls itself more than once.
Fibonacci Sequence

• f(n) = f(n - 1) + f(n - 2) for n >= 2

• f(0) = 0, f(1) = 1

• The sequence is 0, 1, 1, 2, 3, 5, 8, 13, ….


Fibonacci Sequence - Recursive
public static int fibonacciSequenceRecursive(int n) {
    if (n == 0) {
        return 0;
    }
    if (n == 1 || n == 2) {
        return 1;
    }
    return fibonacciSequenceRecursive(n - 1) + fibonacciSequenceRecursive(n - 2);
}
Staircase Problem

• The top of a staircase can be reached by climbing 1, 2 or 3 steps at a time.

• How many ways are there to reach the top of the staircase? i.e. find the
number of ways to climb to the top of the staircase.
Staircase Problem – Recursive
Solution
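The recursive-solution slide is an image in the original deck; one possible sketch, assuming the recurrence ways(n) = ways(n-1) + ways(n-2) + ways(n-3):

```java
public class Staircase {
    // Ways to climb n steps taking 1, 2 or 3 steps at a time.
    public static int ways(int n) {
        if (n < 0) return 0;   // overshot the top: not a valid way
        if (n == 0) return 1;  // base case: exactly reached the top
        return ways(n - 1) + ways(n - 2) + ways(n - 3);
    }

    public static void main(String[] args) {
        System.out.println("Ways to climb 4 steps = " + ways(4)); // 7
    }
}
```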
Robot Maneuvering

• A robot is located at the top-left corner of an m x n grid

• The robot can only move either down or right at any point in time. The
robot is trying to reach the bottom-right corner of the grid.

• How many possible unique paths are there?


Robot Maneuvering – Recursive
Solution
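The recursive-solution slide is an image in the original deck; a minimal sketch, using the observation that every path enters the last cell from above or from the left (names are illustrative assumptions):

```java
public class RobotPaths {
    // Unique paths from top-left to bottom-right of an m x n grid, moving
    // only down or right: paths(m, n) = paths(m-1, n) + paths(m, n-1).
    public static int uniquePaths(int m, int n) {
        if (m == 1 || n == 1) return 1;  // a single row or column: one path
        return uniquePaths(m - 1, n) + uniquePaths(m, n - 1);
    }

    public static void main(String[] args) {
        System.out.println("Paths in a 3 x 3 grid = " + uniquePaths(3, 3)); // 6
    }
}
```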
Searching Algorithms
Searching
• For a given set of elements we need to find the location of a
specific element.

• studentIDs = {061,028,087,001,024} , find if the student whose


ID is 084 is present or not.

• There are two types of searching algorithms

• Linear Search

• Binary Search
Linear Search

• Simplest of Searching Algorithms

• Takes in an ordered or unordered list of elements

• Searches for an element in a sequential manner from the first


element to the last (until it finds the element or it exhausts its
search)

• Order of Complexity : O(n)


Linear Search Algorithm

020 604 205 024 200 195 493 520 720 402

Let us search for


205
020 604 205 024 200 195 493 520 720 402

Is this 205?
Linear Search Algorithm
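The algorithm slide is an image in the original deck; the sequential scan walked through above can be sketched as (names are illustrative assumptions):

```java
public class LinearSearch {
    // Scans the array from the first element to the last; returns the index
    // of the key, or -1 if the key is not present. O(n) in the worst case.
    public static int linearSearch(int[] arr, int key) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == key) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] ids = {20, 604, 205, 24, 200, 195, 493, 520, 720, 402};
        System.out.println("205 found at index " + linearSearch(ids, 205)); // 2
    }
}
```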
Binary Search
• If the input list is already sorted, then linear search is not an
efficient way of searching.
• In this case it is possible to perform the search much faster, because
the order of the elements in the list guides the search.
• Since it is already sorted, we can select a central point and check if
the value at the central point is the same or higher or lower than the
item to be searched.
• If it is the same, the search ends there; if the midpoint value is higher
than the item, we search the elements from the start to the midpoint, and if
it is lower, we search the elements from the midpoint to the end of the array.
Binary Search Algorithm

020 024 195 200 205 402 493 520 604 720

Is this 520?
Binary Search Algorithm

020 024 195 200 205 402 493 520 604 720

Is this 520?
Binary Search Algorithm
Binary Search Algorithm
Recursive
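The recursive code slide is an image in the original deck; a minimal sketch of the recursive version (names are illustrative assumptions):

```java
public class BinarySearch {
    // Recursive binary search on a sorted array: compare the key with the
    // middle element, then recurse into the left or right half. O(log n).
    public static int binarySearch(int[] arr, int key, int low, int high) {
        if (low > high) return -1;         // exhausted: key not present
        int mid = low + (high - low) / 2;  // written this way to avoid overflow
        if (arr[mid] == key) return mid;
        if (arr[mid] > key) return binarySearch(arr, key, low, mid - 1);
        return binarySearch(arr, key, mid + 1, high);
    }

    public static void main(String[] args) {
        int[] sorted = {20, 24, 195, 200, 205, 402, 493, 520, 604, 720};
        System.out.println("520 found at index "
                + binarySearch(sorted, 520, 0, sorted.length - 1)); // 7
    }
}
```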
Binary Search – Time Complexity

• The time complexity of binary search is O(log n), for both successful and
unsuccessful searches.
Sorting Algorithms
Why Sorting?
• Sorting is the process of placing elements from a collection in
some kind of order

• Example: a list of words could be sorted alphabetically or by length, a
list of cities by population, area or pincode, a list of students by
student ID or name, etc.

• There are a lot of sorting algorithms which are developed and


analyzed.

• Sorting a huge number of elements takes a substantial amount


of time and resources.
Sorting Algorithms
• Bubble Sort

• Selection Sort

• Insertion Sort

• Merge Sort

• Quick Sort

• Heap Sort
Bubble Sort

• Bubble sort works by repeatedly going through the list to be


sorted comparing each pair of adjacent elements.

• If the elements are in the wrong order they are swapped, if not,
move on to the next pair.
Bubble Sort
Bubble Sort - Complexity

• Since it uses two nested loops, in the worst case scenario it will be O(N^2)
Selection Sort

• Improves on the bubble sort by making only one exchange for every
pass through the list.

• A selection sort looks for the largest value as it makes a pass and, after
completing the pass, places it in its proper location.

• Still has a complexity of O(N^2)
Selection Sort - Working
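The working slide is an image in the original deck; the one-exchange-per-pass idea above can be sketched as (names are illustrative assumptions):

```java
import java.util.Arrays;

public class SelectionSort {
    // On each pass, finds the largest value in the unsorted part and swaps
    // it into its final position at the end: one exchange per pass, O(n^2).
    public static int[] selectionSort(int[] arr) {
        for (int end = arr.length - 1; end > 0; end--) {
            int maxIndex = 0;
            for (int i = 1; i <= end; i++) {
                if (arr[i] > arr[maxIndex]) maxIndex = i;
            }
            int tmp = arr[end];        // the single exchange for this pass
            arr[end] = arr[maxIndex];
            arr[maxIndex] = tmp;
        }
        return arr;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(selectionSort(new int[]{54, 26, 93, 17, 77})));
    }
}
```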
Insertion Sort

• The insertion sort, although still O(N^2), works in a slightly different
way.

• It always maintains a sorted sublist in the lower positions of the list.

• Each new item is then “inserted” back into the previous sublist such
that the sorted sublist is one item larger.
Insertion Sort - Working
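The working slide is an image in the original deck; the growing-sorted-sublist idea above can be sketched as (names are illustrative assumptions):

```java
import java.util.Arrays;

public class InsertionSort {
    // Maintains a sorted sublist at the front; each new item is inserted
    // back into that sublist so the sorted part grows by one. O(n^2) worst case.
    public static int[] insertionSort(int[] arr) {
        for (int i = 1; i < arr.length; i++) {
            int current = arr[i];
            int j = i - 1;
            while (j >= 0 && arr[j] > current) {
                arr[j + 1] = arr[j];   // shift larger items one slot right
                j--;
            }
            arr[j + 1] = current;      // insert into place in the sublist
        }
        return arr;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(insertionSort(new int[]{54, 26, 93, 17, 77})));
    }
}
```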
Merge Sort
• Uses divide and conquer strategy

• A recursive algorithm that continually splits a list in half

• If the list is empty or has one item, it is sorted by definition (the base case)

• If the list has more than one item, we split the list and recursively invoke a merge
sort on both halves

• Once the two halves are sorted, the fundamental operation, called a merge, is
performed.

• Merging is the process of taking two smaller sorted lists and combining them
together into a single, sorted, new list

• Complexity is O(n log n)
Merge Sort - Working
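The working slide is an image in the original deck; the split-then-merge steps above can be sketched as (names are illustrative assumptions):

```java
import java.util.Arrays;

public class MergeSort {
    // Recursively splits the list in half, sorts both halves, then merges
    // the two sorted halves into one sorted list. O(n log n).
    public static int[] mergeSort(int[] arr) {
        if (arr.length <= 1) return arr;   // base case: already sorted
        int mid = arr.length / 2;
        int[] left = mergeSort(Arrays.copyOfRange(arr, 0, mid));
        int[] right = mergeSort(Arrays.copyOfRange(arr, mid, arr.length));
        return merge(left, right);
    }

    // Combines two sorted arrays into a single, sorted, new array.
    private static int[] merge(int[] left, int[] right) {
        int[] result = new int[left.length + right.length];
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length) {
            result[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        }
        while (i < left.length) result[k++] = left[i++];   // drain leftovers
        while (j < right.length) result[k++] = right[j++];
        return result;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(mergeSort(new int[]{54, 26, 93, 17, 77})));
    }
}
```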
Quick Sort
• Uses the same divide and conquer of merge sort with less
additional space.

• Selects a value called the pivot value, often simply the first item (for
simplicity's sake). This value is used to split the list.

• The position where the pivot value finally lands in the sorted list is the
split point.

• Average case complexity is O(N log N); the worst case is O(N^2)
Quick Sort - Working
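The working slide is an image in the original deck; the pivot-and-split-point idea above can be sketched as follows, taking the first element as the pivot (names are illustrative assumptions):

```java
import java.util.Arrays;

public class QuickSort {
    public static int[] quickSort(int[] arr) {
        quickSort(arr, 0, arr.length - 1);
        return arr;
    }

    // Partitions around the pivot, then recursively sorts both sides.
    private static void quickSort(int[] arr, int low, int high) {
        if (low >= high) return;
        int split = partition(arr, low, high);
        quickSort(arr, low, split - 1);
        quickSort(arr, split + 1, high);
    }

    // Moves everything <= pivot to its left and everything >= pivot to its
    // right; the pivot lands at its split point and is returned.
    private static int partition(int[] arr, int low, int high) {
        int pivot = arr[low];          // first element as the pivot
        int left = low + 1, right = high;
        while (true) {
            while (left <= right && arr[left] <= pivot) left++;
            while (left <= right && arr[right] >= pivot) right--;
            if (left > right) break;
            int tmp = arr[left]; arr[left] = arr[right]; arr[right] = tmp;
        }
        arr[low] = arr[right];         // place pivot at the split point
        arr[right] = pivot;
        return right;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(quickSort(new int[]{54, 26, 93, 17, 77})));
    }
}
```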
Heap Sort

• Based on heap data structure (a tree structure)

• Heap sort processes the elements by creating the min-heap or


max-heap using the elements of the given array.

• Uses a complete (balanced) binary tree

• Uses O(N log N)


Heap Sort – Max Heap
Heap Sort - Steps

• Build a max heap from the input

• The max element should be at the root, remove that and add it
to the final sorted list and then replace the root with the last item
in the heap.

• Repeat the above two steps until all elements are sorted.
Heap Sort - Working
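The working slide is an image in the original deck; the two steps above (build a max heap, then repeatedly move the root to the end) can be sketched as an in-place array heap (names are illustrative assumptions):

```java
import java.util.Arrays;

public class HeapSort {
    // Builds a max heap, then repeatedly swaps the root (the maximum) to the
    // end of the array and re-heapifies the remaining prefix. O(n log n).
    public static int[] heapSort(int[] arr) {
        int n = arr.length;
        for (int i = n / 2 - 1; i >= 0; i--) siftDown(arr, i, n); // build max heap
        for (int end = n - 1; end > 0; end--) {
            int tmp = arr[0]; arr[0] = arr[end]; arr[end] = tmp;  // max to the end
            siftDown(arr, 0, end);                                // restore the heap
        }
        return arr;
    }

    // Moves arr[root] down until the max-heap property holds within [0, size).
    private static void siftDown(int[] arr, int root, int size) {
        while (2 * root + 1 < size) {
            int child = 2 * root + 1;                            // left child
            if (child + 1 < size && arr[child + 1] > arr[child]) child++;
            if (arr[root] >= arr[child]) return;                 // property holds
            int tmp = arr[root]; arr[root] = arr[child]; arr[child] = tmp;
            root = child;
        }
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(heapSort(new int[]{54, 26, 93, 17, 77})));
    }
}
```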
Greedy Algorithms
What is Greedy Algorithm
• A greedy algorithm constructs a solution to the problem by
always making a choice that looks the best at the moment.

• A greedy algorithm never takes back its choices, but directly


constructs the final solution.

• For this reason, greedy algorithms are usually very efficient.

• Problem: finding a greedy strategy that always produces an optimal
solution is the hard part; the locally optimal choices must also be
globally optimal.
Activity Selection Problem

• A Combinatorial problem to select non-conflicting activities to


perform within a given time frame, given a set of activities with
start and finish times.

• We need to select the maximum number of activities that can be
performed by a single person or machine, assuming that only one
activity can be done at a time.

• Also known as scheduling problem.


Activity Selection - Example
Activity Name   Start Time   End Time
match 1         1            2
match 5         3            4
match 4         0            6
match 3         5            6
match 6         8            9
match 2         5            9
Activity Selection - Algorithm
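The algorithm slide is an image in the original deck; the standard greedy strategy for this problem, sorting by finish time and always picking the earliest-finishing non-conflicting activity, can be sketched as (names and the {start, end} encoding are illustrative assumptions):

```java
import java.util.Arrays;

public class ActivitySelection {
    // Greedy activity selection: sort by finish time, then repeatedly pick
    // the earliest-finishing activity that starts no earlier than the finish
    // time of the last activity chosen.
    public static int maxActivities(int[][] activities) { // each row: {start, end}
        Arrays.sort(activities, (a, b) -> a[1] - b[1]);   // by end time
        int count = 0, lastEnd = Integer.MIN_VALUE;
        for (int[] activity : activities) {
            if (activity[0] >= lastEnd) {  // no conflict with the last pick
                count++;
                lastEnd = activity[1];
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // The matches from the example slide, as {start, end} pairs.
        int[][] matches = {{1, 2}, {3, 4}, {0, 6}, {5, 6}, {8, 9}, {5, 9}};
        System.out.println("Max activities = " + maxActivities(matches)); // 4
    }
}
```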
Huffman Coding
• Used in Text Compression (used for storing huge
documents/photos etc. in compressed form, for efficient transfer
of data over networking, etc.)

• It’s a variable length encoding using frequencies of the


characters.

• It uses short code-word for frequently used characters and long


code-word for infrequently used characters which helps with the
optimization.

• Ensures that no code-word acts as a prefix of another code-word; this
property makes it a prefix code.
Huffman Coding - Example
• A–5

• B–9

• C – 12

• D – 13

• E – 16

• F - 45
Algorithm - Huffman
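The algorithm slide is an image in the original deck; a minimal sketch of Huffman tree construction with a priority queue, using the frequencies from the example slide (class and method names are illustrative assumptions):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

public class HuffmanDemo {
    // A node of the Huffman tree: a leaf holds a character; an internal node
    // holds the combined frequency of its two children.
    static class Node {
        char ch; int freq; Node left, right;
        Node(char ch, int freq) { this.ch = ch; this.freq = freq; }
        Node(Node left, Node right) {
            this.freq = left.freq + right.freq;
            this.left = left; this.right = right;
        }
    }

    // Repeatedly merges the two lowest-frequency nodes until one tree remains.
    public static Node buildTree(Map<Character, Integer> freqs) {
        PriorityQueue<Node> pq = new PriorityQueue<>((a, b) -> a.freq - b.freq);
        for (Map.Entry<Character, Integer> e : freqs.entrySet()) {
            pq.add(new Node(e.getKey(), e.getValue()));
        }
        while (pq.size() > 1) {
            pq.add(new Node(pq.poll(), pq.poll()));
        }
        return pq.poll();
    }

    // Walks the tree: left edge = '0', right edge = '1'.
    public static void collectCodes(Node node, String code, Map<Character, String> out) {
        if (node.left == null && node.right == null) { out.put(node.ch, code); return; }
        collectCodes(node.left, code + "0", out);
        collectCodes(node.right, code + "1", out);
    }

    public static void main(String[] args) {
        Map<Character, Integer> freqs = new HashMap<>();
        freqs.put('A', 5); freqs.put('B', 9); freqs.put('C', 12);
        freqs.put('D', 13); freqs.put('E', 16); freqs.put('F', 45);
        Map<Character, String> codes = new HashMap<>();
        collectCodes(buildTree(freqs), "", codes);
        // The most frequent character (F) gets the shortest code-word.
        System.out.println(codes);
    }
}
```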
Knapsack Problem
• Suppose a hitch-hiker has to fill up his knapsack by selecting
from among various possible objects those which will give him
maximum comfort

• This is called a Knapsack problem where we try to maximize the


objective function with the help of the values given to the
comfort of the object (also called as value or profit) and its size /
weight.

• Used in capital budgeting, cargo loading, cutting stock, etc.


Knapsack 0-1 Problem
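The 0-1 knapsack slide has no code in the deck; a minimal sketch of the standard dynamic-programming solution, maximising value under a weight capacity with each item used at most once (names and sample data are illustrative assumptions):

```java
public class Knapsack {
    // 0/1 knapsack by dynamic programming: best[w] is the best total value
    // achievable with capacity w using the items considered so far.
    public static int maxValue(int[] weights, int[] values, int capacity) {
        int[] best = new int[capacity + 1];
        for (int i = 0; i < weights.length; i++) {
            // iterate capacity downwards so each item is used at most once
            for (int w = capacity; w >= weights[i]; w--) {
                best[w] = Math.max(best[w], best[w - weights[i]] + values[i]);
            }
        }
        return best[capacity];
    }

    public static void main(String[] args) {
        int[] weights = {1, 3, 4, 5};
        int[] values  = {1, 4, 5, 7};
        System.out.println("Best value = " + maxValue(weights, values, 7)); // 9
    }
}
```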
