Quick Notes for DP Prep1- Algorithms


DS & Algorithms

Resource containing all upcoming DS topics by Google:


https://techdevguide.withgoogle.com/paths/data-structures-and-algorithms/#sequence-2

Linear Data Structures:

- Array: A collection of elements of the same type stored in contiguous memory locations.
- Linked List: A collection of elements linked together by pointers, allowing for dynamic insertion and deletion.
- Queue: A First-In-First-Out (FIFO) structure where elements are added at the end and removed from the beginning.
- Stack: A Last-In-First-Out (LIFO) structure where elements are added and removed from the top.

Non-Linear Data Structures:

- Tree: A hierarchical structure where each node can have multiple child nodes.
- Graph: A collection of nodes connected by edges, representing relationships between data elements.
- Hash Table: A data structure that uses a hash function to map keys to values, allowing for fast lookup and insertion.
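The LIFO/FIFO behaviour of stacks and queues can be sketched in Python (a plain list as the stack and `collections.deque` as the queue; the variable names here are just for illustration):

```python
from collections import deque

# Stack: LIFO. Push and pop both happen at the same end (the top).
stack = []
stack.append(1)
stack.append(2)
stack.append(3)
assert stack.pop() == 3  # last in, first out

# Queue: FIFO. Enqueue at the back, dequeue from the front.
# deque gives O(1) appends and pops at both ends.
queue = deque()
queue.append(1)
queue.append(2)
queue.append(3)
assert queue.popleft() == 1  # first in, first out
```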

Lists and trees and their applications. Stacks and queues.

https://leetcode.com/explore/learn/card/linked-list/
https://leetcode.com/explore/learn/card/queue-stack/
https://techdevguide.withgoogle.com/resources/topics/trees/?no-filter=true#!
https://www.javatpoint.com/tree

Lists and trees are fundamental data structures with numerous applications across various domains. Here are
some common applications for each:

### Applications of Lists:

1. **Data Storage**:

- Lists are used to store collections of items where the order of elements matters. They're used in databases,
spreadsheets, and file systems for organizing and managing data.

2. **Task Management**:

- To-do lists, priority queues, and task schedulers use lists to manage tasks in a specific order or priority.

3. **Text Processing**:

- Lists are used to represent strings, where each character is stored as an element in the list. They're utilized
in text editors, parsers, and compilers for manipulating and analyzing textual data.

4. **Undo Functionality**:

- Lists can be used to implement undo functionality in applications such as text editors, graphic design
software, and version control systems.
5. **Linear Data Structures**:

- Stacks and queues, which are specialized forms of lists, are used for managing data in specific orders like
Last-In-First-Out (LIFO) or First-In-First-Out (FIFO) respectively. They find applications in algorithms, operating
systems, and simulations.

6. **Dynamic Memory Allocation**:

- Linked lists are used to manage memory dynamically, allowing for efficient allocation and deallocation of
memory blocks as needed.

### Applications of Trees:

1. **Hierarchical Data Representation**:

- Trees are used to represent hierarchical relationships between data elements, such as file systems,
organization charts, and XML/HTML structures.

2. **Binary Search Trees (BST)**:

- BSTs are used for fast searching, insertion, and deletion operations. They're employed in databases, symbol
tables, and compilers for efficient data retrieval and manipulation.

3. **Balanced Trees**:

- Balanced trees like AVL trees and Red-Black trees are used in databases and filesystems to ensure efficient
and balanced access to data, preventing worst-case performance scenarios.

4. **Binary Heap**:

- Trees can be used to implement priority queues efficiently. Binary heaps, a specialized form of binary trees,
are often used to implement priority queues in algorithms such as Dijkstra's shortest path algorithm and heap
sort.

5. **Trie**:

- Tries are used for fast retrieval of associative data, particularly for storing and searching dictionaries,
autocomplete systems, and IP routing tables.

6. **Decision Trees**:

- Decision trees are used in machine learning and decision-making algorithms for classification and regression
tasks.

7. **Parsing and Expression Trees**:

- Trees are used to represent mathematical expressions and parse them efficiently. Expression trees are
utilized in compilers, calculators, and symbolic algebra systems.
These are just a few examples of the diverse applications of lists and trees in computer science and beyond.
Their flexibility and efficiency make them indispensable tools for organizing, managing, and processing data in
a wide range of applications.

What is Big-O Notation?

https://www.youtube.com/watch?v=BgLTDT03QtU

https://neetcode.io/courses/lessons/big-o-notation
Graphs and their search methods. Applications.

https://techdevguide.withgoogle.com/resources/topics/graphs/?no-filter=true#!

https://www.javatpoint.com/ds-graph - go through all graph topics from this link

Algorithm design methods (divide and conquer, dynamic programming, greedy algorithms).

What is an algorithm?

An Algorithm is a procedure to solve a particular problem in a finite number of steps for a finite-sized input.

Algorithms can be classified in various ways:

- Implementation Method
- Design Method – the one we're interested in.
- Design Approaches
- Other Classifications

Classification by Design Method:

The main categories into which an algorithm can be placed in this type of classification are:

Greedy Method:
In the greedy method, at each step, a decision is made to choose the local optimum, without thinking about future consequences. The choice is never revisited: the algorithm commits to it and moves on. This makes greedy algorithms simple and fast, but because they never look ahead, the sequence of locally optimal choices is not guaranteed to produce a globally optimal solution for every problem.

Applications of Greedy Algorithm

- It is used in finding the shortest path.
- It is used to find the minimum spanning tree using Prim's algorithm or Kruskal's algorithm.
- It is used in job sequencing with deadlines.
- This algorithm is also used to solve the fractional knapsack problem.

Example: Fractional Knapsack, Activity Selection.
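As a sketch of the greedy idea, here is a minimal fractional knapsack in Python (the function name and the `(value, weight)` item representation are assumptions for illustration):

```python
def fractional_knapsack(items, capacity):
    """Greedy: take items in decreasing value-per-weight order,
    splitting the last item if it does not fit entirely."""
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)      # take as much as fits
        total += value * (take / weight)  # proportional share of the value
        capacity -= take
    return total

# Items (value, weight) = (60,10), (100,20), (120,30), capacity 50:
# greedy takes the first two whole, then 20/30 of the third.
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))
```

Because items are divisible here, the local optimum (best value-per-weight) does lead to the global optimum; for the 0-1 knapsack it does not, which is why that variant needs dynamic programming instead.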

Disadvantages of using Greedy algorithm


A greedy algorithm makes decisions based only on the information available at each phase, without considering the broader problem, so there is a possibility that the greedy solution does not give the best solution for every problem. It follows the locally optimal choice at each stage with the intent of finding the global optimum.

Let's understand through an example.

Suppose there is a problem 'P': we want to travel from A to B, shown as below:

P: A → B

The problem is to travel from A to B. There are various ways to make the journey: on foot, by car, bike, train, aeroplane, etc. The constraint is that we must complete the journey within 12 hrs. Only the train and the aeroplane can cover the distance within 12 hrs, so although there are many solutions to the problem, only two satisfy the constraint; these are the feasible solutions.

Now suppose we also have to cover the journey at minimum cost. This makes it a minimization problem. Of the two feasible solutions, travelling by train costs less, so it is the optimal solution: a feasible solution that provides the best result, in this case the minimum cost. There is only one optimal solution.

A problem that requires either a minimum or a maximum result is known as an optimization problem. The greedy method is one of the strategies used for solving optimization problems.

Dynamic Programming:
The approach of dynamic programming is similar to divide and conquer. In its bottom-up form, we solve all possible small subproblems and then combine their solutions to obtain solutions for bigger problems. The difference is that whenever we have recursive function calls with the same result, instead of computing them again we store the result in a data structure in the form of a table and retrieve the results from the table. Thus, the overall time complexity is reduced. "Dynamic" refers to deciding at runtime whether to call a function or retrieve values from the table. Dynamic programming is frequently related to optimization problems.
Example: 0-1 Knapsack, subset-sum problem, Fibonacci sequence
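The Fibonacci example can be sketched both top-down (memoized recursion) and bottom-up (a table), mirroring the two usual presentations of dynamic programming:

```python
from functools import lru_cache

# Top-down: memoize the recursion so each subproblem is solved once.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up: fill a table from the smallest subproblems upward.
def fib_table(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

assert fib_memo(10) == fib_table(10) == 55
```

Either way, the O(2^n) naive recursion drops to O(n), because each subproblem's result is computed once and afterwards only looked up.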

Backtracking: Backtracking is a systematic approach to solving problems by recursively trying all possible solutions.
If a solution is found to be invalid, the algorithm backtracks and tries another option.

Examples include the N-Queens problem and Sudoku solving.


Divide and Conquer:
The divide and conquer strategy involves dividing the problem into sub-problems, recursively solving them, and then combining their solutions to form the final answer.

Example: Merge sort, Quicksort, binary search
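Merge sort makes the three phases of divide and conquer explicit; a compact Python sketch:

```python
def merge_sort(arr):
    # Divide: split the list in half.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    # Conquer: recursively sort each half.
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    # Combine: merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```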

Divide and Conquer Method vs Dynamic Programming

| Divide and Conquer Method | Dynamic Programming |
| --- | --- |
| 1. It involves three steps at each level of recursion: divide the problem into a number of subproblems; conquer the subproblems by solving them recursively; combine the solutions to the subproblems into the solution for the original problem. | 1. It involves a sequence of four steps: characterize the structure of optimal solutions; recursively define the values of optimal solutions; compute the values of optimal solutions in a bottom-up fashion; construct an optimal solution from computed information. |
| 2. It is recursive. | 2. It is typically non-recursive (iterative, bottom-up). |
| 3. It does more work on subproblems and hence consumes more time. | 3. It solves each subproblem only once and stores the result in a table. |
| 4. It is a top-down approach. | 4. It is a bottom-up approach. |
| 5. Subproblems are independent of each other. | 5. Subproblems are interdependent (they overlap). |
| 6. Examples: Merge Sort, Binary Search. | 6. Example: Matrix Chain Multiplication. |

Backtracking

In this topic, we will learn about backtracking, which is a very important technique for solving problems recursively. A recursive function is one that calls itself. Consider the example of checking whether a string is a palindrome:

The call isPalindrome(S, 0, 8) makes the recursive call isPalindrome(S, 1, 7), which in turn makes the recursive call isPalindrome(S, 2, 6), and so on, narrowing in from both ends of the string.
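A minimal Python version of that recursion (the signature `isPalindrome(S, lo, hi)` is taken from the call pattern above; the implementation details are a sketch):

```python
def is_palindrome(s, lo, hi):
    # Base case: the two pointers met or crossed; every pair matched.
    if lo >= hi:
        return True
    if s[lo] != s[hi]:
        return False
    # Recurse inward, e.g. (0, 8) -> (1, 7) -> (2, 6) -> ...
    return is_palindrome(s, lo + 1, hi - 1)

print(is_palindrome("racecar", 0, 6))  # True
```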

Backtracking is one of the techniques that can be used to solve the problem. We can write
the algorithm using this strategy. It uses the Brute force search to solve the problem, and
the brute force search says that for the given problem, we try to make all the possible
solutions and pick out the best solution from all the desired solutions. This rule is also
followed in dynamic programming, but dynamic programming is used for solving
optimization problems. In contrast, backtracking is not used in solving optimization
problems. Backtracking is used when we have multiple solutions, and we require all those
solutions.

The name backtracking itself suggests going back and coming forward: if the current partial solution satisfies the condition, we return success; otherwise we go back and try another option. It is used to solve problems in which a sequence of objects is chosen from a specified set so that the sequence satisfies some criteria.

Applications of Backtracking: N-Queens problem, sum of subsets problem, graph coloring, Hamiltonian cycle.
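The N-Queens problem shows the choose / explore / un-choose pattern of backtracking concretely; a short Python sketch (the helper names are illustrative):

```python
def solve_n_queens(n):
    """Place one queen per row; when a placement leads to a dead end,
    backtrack (undo it) and try the next column."""
    solutions, cols = [], []  # cols[r] = column of the queen in row r

    def place(row):
        if row == n:
            solutions.append(cols[:])  # all rows filled: record a solution
            return
        for col in range(n):
            # Safe if no placed queen shares this column or a diagonal.
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                cols.append(col)   # choose
                place(row + 1)     # explore
                cols.pop()         # un-choose: backtrack

    place(0)
    return solutions

print(len(solve_n_queens(4)))  # 2 distinct solutions for 4 queens
```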

Elementary and non-elementary sorting methods.


Elementary Sorting Methods:

Elementary sorting methods are simple, basic algorithms for sorting data that are easy to understand and
implement. They are often used for educational purposes and for sorting small datasets where efficiency is not a
critical concern. Some examples of elementary sorting methods include:

Comparison Based Sort

Comparison based sorts are sorting algorithms that require a direct method of comparison defined by the ordering
relation. In a sense, they are the most natural sorting algorithms since, intuitively, when we think about sorting
elements, we instinctively think about comparing elements to each other. In the following sections, we’ll introduce
some of the fundamental comparison-based sorting algorithms.

The fundamental problem of sorting is all about ordering a collection of items. How
you order these items is entirely based on the method of comparison. Suppose you
needed to sort a pile of books. If you are working on a home library, you might
organize it by the author’s last name. But if you need to quickly transport the books,
it might make sense to initially organize them based on the size of the book. Both of
these problems are sorting problems, but a key takeaway is that sorting problems
are necessarily tied to a method of comparison. Different methods of comparison
may lead to different results. At the most basic level, sorting algorithms are all
about rearranging elements in a collection based on a common characteristic of
those elements.

A sort is formally defined as a rearrangement of a sequence of elements that puts all elements into a non-decreasing order based on the ordering relation.

Selection sort
Suppose you had to sort a pile of books by their weight, with the heaviest
book on the bottom and the lightest book on the top. One reasonable
method of sorting is to go through your books, find the heaviest book, and
then place that at the bottom. After that, you can then find the next heaviest
book in the remaining pile of books and place that on top of the heaviest
book. You can continue this approach until you have a sorted pile of books.
This concept is exactly what the selection sort does.

Suppose we had a collection of elements where every element is an integer. Selection sort will build up the sorted list by repeatedly finding the minimum element in that list and moving it to the front of the list through a swap. It will proceed to swap elements appropriately until the entire list is sorted.

In terms of simplicity, it is a highly intuitive algorithm and not too difficult to write. Unfortunately, it is pretty slow, requiring O(n²) time to sort the list in the worst case. In the worst case, we have to search the entire array to find the minimum element, meaning we can have up to n + (n−1) + (n−2) + … + 1 total operations, which is O(n²). The space complexity of selection sort is O(1) since we do not use any additional space during the algorithm (all operations are in-place).

It also is not a stable sorting algorithm. For example, consider the collection [4, 2, 3, 4, 1]. After the first round of selection sort, the first 4 is swapped with the 1, giving [1, 2, 3, 4, 4]. This array is sorted, but the two equal elements (the 4s) have not preserved their original relative ordering.

Implementation

Selection sort is a simple comparison-based sorting algorithm. It works by dividing the input array into two subarrays: sorted and unsorted. The algorithm repeatedly selects the smallest (or largest) element from the unsorted subarray and swaps it with the first unsorted element, thereby expanding the sorted subarray. This process continues until the entire array is sorted.
Here's a step-by-step explanation of how selection sort works:

1. **Divide the Array**: Initially, the entire array is considered unsorted.

2. **Find the Smallest Element**: Iterate through the unsorted subarray to find the smallest element.

3. **Swap with First Unsorted Element**: Once the smallest element is found, swap it with the first unsorted element. This effectively expands the sorted subarray by one element.

4. **Repeat**: Repeat steps 2 and 3 for the remaining unsorted elements. Each iteration adds one element to the sorted subarray and reduces the size of the unsorted subarray.

5. **Termination**: The algorithm terminates when the entire array is sorted.
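Those five steps map directly onto a short Python sketch:

```python
def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # Step 2: find the smallest element in the unsorted part.
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Step 3: swap it with the first unsorted element,
        # expanding the sorted prefix by one.
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

print(selection_sort([4, 2, 3, 4, 1]))  # [1, 2, 3, 4, 4]
```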

While selection sort is straightforward to implement, it is generally less efficient than more advanced sorting algorithms such as merge sort or quick sort. However, it can be useful in situations where simplicity and ease of implementation are prioritized over performance.

Bubble Sort
Conceptually, bubble sort is an implementation of a rather simple idea. Suppose we
have a collection of integers that we want to sort in ascending order. Bubble sort
proceeds to consider two adjacent elements at a time. If these two adjacent
elements are out of order (in this case, the left element is strictly greater than the
right element), bubble sort will swap them. It then proceeds to the next pair of
adjacent elements. In the first pass of bubble sort, it will process every set of
adjacent elements in the collection once, making swaps as necessary. The core idea
of bubble sort is it will repeat this process until no more swaps are made in a single
pass, which means the list is sorted.
In terms of the running time of the algorithm, bubble sort's runtime is entirely based on the number of passes it must make in the array until it's sorted. If the array has n elements, each pass will consider (n−1) pairs. In the worst case, when the minimum element is at the end of the list, it will take (n−1) passes to get it to its proper place at the front of the list, and then one more additional pass to determine that no more swaps are needed. Bubble sort, as a result, has a worst-case runtime of O(n²). The space complexity of bubble sort is O(1): all sorting operations involve swapping adjacent elements in the original input array, so no additional space is required. Bubble sort is also a stable sorting algorithm, since equal elements will never swap places, so their relative ordering is preserved.
Overall, bubble sort is fairly simple to implement, and it’s stable, but outside of that,
this algorithm does not have many desirable features. It’s fairly slow for most inputs
and, as a result, it is rarely used in practice.
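A sketch of bubble sort with the "stop when a pass makes no swaps" behaviour described above:

```python
def bubble_sort(arr):
    n = len(arr)
    swapped = True
    while swapped:
        swapped = False
        for i in range(n - 1):
            # Swap adjacent elements that are out of order.
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                swapped = True
        # A full pass with no swaps means the list is sorted.
    return arr

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```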

Insertion Sort
Going back to our pile of books analogy, where we attempted to sort by weight, let's explore
another approach to sorting the pile of books. We'll start at the top of the pile and iterate over
the books one by one. Every time we encounter a book that is lighter than the book above it,
we'll move the book up until it is in its appropriate place. Repeating this for the entire pile of
books, we will get the books in sorted order.

This is the core intuition behind insertion sort. Given a collection of integers, you can sort the
list by proceeding from the start of the list, and every time you encounter an element that is out
of order, you can continuously swap places with previous elements until it is inserted in its
correct relative location based on what you’ve processed thus far.
This process is best understood with a visual example.
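In lieu of a visual, a minimal Python sketch of the same idea:

```python
def insertion_sort(arr):
    for i in range(1, len(arr)):
        j = i
        # Swap the new element backward until it sits in its correct
        # place relative to the already-processed prefix.
        while j > 0 and arr[j - 1] > arr[j]:
            arr[j - 1], arr[j] = arr[j], arr[j - 1]
            j -= 1
    return arr

print(insertion_sort([3, 1, 2]))  # [1, 2, 3]
```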

In terms of efficiency of this approach, the worst possible input is a reversed list, where every element has to be inserted at the very beginning of the list, which leads to a total of 1 + 2 + … + (n−1), or O(n²), swaps. The space complexity of insertion sort is O(1); all operations are performed in-place.

Despite the O(n²) time complexity, in practice there are a couple of advantages to insertion sort.
For one, it is a stable sort. By design of its implementation, we will never swap an
element later in the list with an equal element earlier in the list. But more
importantly, there are cases where insertion sort may actually be the best sort.
Generally, on almost sorted arrays where the number of inversions is relatively
small compared to the size of the array, insertion sort will be quite fast since the
number of swaps required will be low on almost sorted arrays.
Next, insertion sort can also be the best choice on small arrays. This is more of an
empirical observation based on experiments, but it is one that you should be aware
of. Many sorting functions have a quick check for the size of the collection and if
that value is below a threshold, the program will default to insertion sort. Java's
official implementation of Arrays.sort() performs such a check before performing
more theoretically optimal sorts.
In terms of disadvantages, on larger collections with many inversions, other sorts
will generally outperform insertion sort. However, of all the sorts we have covered
thus far, insertion sort is the first that is practically used, depending on the context.

Quiz!

Multiple Choice Question


In what cases is insertion sort preferable? (Select all that apply)
When the input is small (< 15 elements)
When the input contains a lot of equal elements
When the input is sorted in reverse order
When the input is very close to being sorted

Heap Sort
A priority queue is an abstract data type, while a Heap is a data structure.
Therefore, a Heap is not a Priority Queue, but a way to implement a Priority Queue.

A Heap has the following properties:

- Insertion of an element into the Heap has a time complexity of O(log N);
- Deletion of an element from the Heap has a time complexity of O(log N);
- The maximum/minimum value in the Heap can be obtained with O(1) time complexity.

When we discussed selection sort, the basic principle involved finding the minimum element and moving it to the front. We repeated this continuously until we sorted the entire list. But as we saw, selection sort has a running time of O(n²), since for every iteration, we need to find the minimum element in the list, which takes O(n) time. We can improve upon this by using a special data structure called a heap.

To review the basics of the heap data structure, you can visit the Heap
Explore Card. The core concept of the heap sort involves constructing a heap
from our input and repeatedly removing the minimum/maximum element to
sort the array. A naive approach to heapsort would start with creating a new
array and adding elements one by one into the new array. As with previous
sorting algorithms, this sorting algorithm can also be performed in place, so
no extra memory is used in terms of space complexity.

The key idea for in-place heapsort involves a balance of two central ideas:
(a) Building a heap from an unsorted array through a “bottom-up
heapification” process, and
(b) Using the heap to sort the input array.

Heapsort traditionally uses a max-heap to sort the array; a min-heap also works, but its implementation is a little less elegant.

Algorithm for "bottom-up heapification" of input into a max-heap: given an input array, we can represent it as a binary tree. If the parent node is stored at index i, the left child will be stored at index 2i + 1 and the right child at index 2i + 2 (assuming the indexing starts at 0).
To convert it to a max-heap, we proceed with the following steps:

1. Start from the end of the array (bottom of the binary tree).
2. There are two cases for a node
o It is greater than its left child and right child (if any).
 In this case, proceed to next node (one index before current
array index)
o There exists a child node that is greater than the current node
 In this case, swap the current node with the child node. This
fixes a violation of the max-heap property
 Repeat the process with the node until the max-heap property is
no longer violated
3. Repeat step 2 on every node in the binary tree from bottom-up.

A key property of this method is that by processing the nodes from the
bottom-up, once we are at a specific node in our heap, it is guaranteed that
all child nodes are also heaps. Once we have “heapified” the input, we can
begin using the max-heap to sort the list. To do so, we will:

1. Take the maximum element at index 0 (we know this is the maximum element because of the max-heap property) and swap it with the last element in the array (the maximum element's proper place).
2. We now have sorted an element (the last element). We can now ignore this
element and decrease heap size by 1, thereby omitting the max element
from the heap while keeping it in the array.
3. Treat the remaining elements as a new heap. There are two cases:
o The root element violates the max-heap property
 Sink this node into the heap until it no longer violates the max-
heap property. Here the concept of "sinking" a node refers to
swapping the node with one of its children until the heap
property is no longer violated.
o The root element does not violate the max-heap property
 Proceed to step (4)
4. Repeat step 1 on the remaining unsorted elements. Continue until all
elements are sorted.

The key aspect that makes heapsort better than selection sort is that the running time of the algorithm is now O(N log N). This is a result of the fact that removing the max element from the heap, which is the central operation in the sort, is an O(log N) operation that has to be performed in the worst case N−1 times. Note that in-place heapification is an O(N) operation, so it has no impact on the worst-case time complexity of heapsort.

In terms of space complexity, since we are treating the input array as a heap and creating no extra space (all operations are in-place), heapsort is O(1).

The best way to understand heapsort is to trace it in action on a small example.

Implementation of heapsort
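Combining the two phases described above (bottom-up heapification via a sift-down helper, then repeatedly extracting the maximum), a sketch in Python:

```python
def heapsort(arr):
    n = len(arr)

    def sift_down(root, size):
        # Sink arr[root] until the max-heap property holds again.
        while True:
            child = 2 * root + 1              # left child index
            if child >= size:
                return
            if child + 1 < size and arr[child + 1] > arr[child]:
                child += 1                    # pick the larger child
            if arr[root] >= arr[child]:
                return
            arr[root], arr[child] = arr[child], arr[root]
            root = child

    # (a) Bottom-up heapification: sift down every internal node.
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i, n)

    # (b) Swap the max to the end, shrink the heap, restore the heap.
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]
        sift_down(0, end)
    return arr

print(heapsort([4, 10, 3, 5, 1]))  # [1, 3, 4, 5, 10]
```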

The main advantage of heapsort is that it's generally much faster than the other comparison-based sorts on sufficiently large inputs, as a consequence of the running time. However, there are a few undesirable qualities in the algorithm. For one, it is not a stable sort. It also turns out that in practice, this algorithm performs worse than other O(N log N) sorts as a result of bad cache locality properties: heapsort swaps elements based on locations in the heap, which can cause many read operations to access indices in a seemingly random order, causing many cache misses, which results in practical performance hits.

Quiz!

Multiple Choice Question


Which sort is heapsort a direct optimization of?
A. Insertion Sort
B. Bubble Sort
C. Selection Sort
Elementary search methods. Hash-based search.
Algorithm questions
Sorting algorithms are a common topic in algorithmic interviews, and interviewers
often ask various questions to assess candidates' understanding of different
sorting techniques. Here are some common sorting interview questions:

1. **What is a sorting algorithm?**: This is a fundamental question that assesses


the candidate's basic understanding of sorting.
2. **What is the time complexity of the bubble sort algorithm?**: Understanding
the time complexity of sorting algorithms is crucial. For bubble sort, it's O(n^2) in
the worst case.
3. **Explain how the merge sort algorithm works. What is its time complexity?**:
Candidates may be asked to describe the merge sort algorithm and analyze its
time complexity, which is O(n log n) in all cases.
4. **What is the difference between stable and unstable sorting algorithms? Can
you provide examples of each?**: Candidates should be able to differentiate
between stable and unstable sorting algorithms and provide examples of each.
For example, merge sort is stable, while quick sort is unstable.
5. **Describe the quick sort algorithm and analyze its time complexity.**: Quick
sort is a popular sorting algorithm, and candidates may be asked to explain its
workings and analyze its time complexity, which is O(n log n) on average and
O(n^2) in the worst case.
6. **What is an in-place sorting algorithm? Can you provide examples?**: In-place sorting algorithms sort the elements within the array itself without requiring additional space. Candidates should be able to identify in-place sorting algorithms like quick sort and selection sort.
7. **Compare bubble sort and insertion sort.**: Candidates may be asked to
compare and contrast different sorting algorithms, such as bubble sort and
insertion sort, in terms of time complexity, space complexity, and performance.
8. **Explain the concept of stability in sorting algorithms. Why is it important?**:
Candidates should understand the importance of stability in sorting algorithms
and how it affects the relative order of equal elements.
9. **What is the best sorting algorithm to use in different scenarios?**:
Candidates may be asked to recommend the best sorting algorithm for various
scenarios based on factors like input size, data distribution, and memory
constraints.
10. **Implement a sorting algorithm of your choice**: Some interviews may
require candidates to implement a sorting algorithm from scratch and analyze its
performance.

Sorting based questions


1. What is the time complexity of bubble sort?
2. Explain how the merge sort algorithm works.
3. What is the worst-case time complexity of merge sort?
4. Describe the quick sort algorithm and analyze its time complexity.
5. What is a stable sorting algorithm? Provide an example.
6. Compare and contrast quick sort and merge sort.
7. Explain the concept of in-place sorting algorithms.
8. What is the best sorting algorithm for large datasets?
9. Implement a sorting algorithm of your choice.
