What is Logarithmic Time Complexity? A Complete Tutorial
Last Updated: 23 Jul, 2025
Logarithmic time complexity is denoted as O(log n). It is a measure of how the runtime of an algorithm scales as the input size increases. In this article, we will look in depth at logarithmic complexity, compare different logarithmic complexities, see when and where they are used, and work through several examples. So let's get started.
What is Complexity Analysis?
The primary motive for using DSA is to solve a problem effectively and efficiently. How do you decide whether a program you have written is efficient? This is measured by its complexity, which is of two types:
- Space Complexity: The space taken by an algorithm to run for a given input size. A program has certain space requirements for its execution, including auxiliary space and input space. Since the space taken for a given input size is an important standard for comparing algorithms, it needs to be optimized.
- Time Complexity: In computer science, a given problem can usually be solved by several different algorithms. These algorithms may take varied approaches: some might be too complex to implement, while others solve the problem in a much simpler way. It is hard to select a suitable and efficient algorithm from all that are available, so to make this selection easier, we estimate the time an algorithm consumes. This is why time complexity analysis is important, and it is done through asymptotic analysis of the algorithm.
There are three cases, denoted by three different asymptotic notations:
- Best case, denoted by Omega (Ω) notation.
- Average case, denoted by Theta (Θ) notation.
- Worst case, denoted by Big-O (O) notation.
How to measure complexities?
Below are some of the major orders of complexity:
- Constant: If the algorithm runs in the same amount of time every time, irrespective of the input size, it is said to exhibit constant time complexity.
- Linear: If the algorithm's runtime is linearly proportional to the input size, it is said to exhibit linear time complexity.
- Exponential: If the algorithm's runtime grows as a constant raised to the power of the input size (for example 2^N), it is said to exhibit exponential time complexity.
- Logarithmic: When the algorithm's runtime increases very slowly compared to the increase in input size, i.e. as the logarithm of the input size, the algorithm is said to exhibit logarithmic time complexity.
Notation | Complexity
---|---
O(1) | Constant
O(log N) | Logarithmic
O(N) | Linear time
O(N * log N) | Log linear
O(N^2) | Quadratic
O(N^3) | Cubic
O(2^N) | Exponential
O(N!) | Factorial
What is a Logarithm?
The logarithm of a number, for a given base, is the power to which the base must be raised to obtain that number.
To find a logarithm, two things must be known: the base and the number.

Examples:
logarithm of 8 for base 2 = log2(8) = 3
Explanation: 2^3 = 8. Since 2 must be raised to the power 3 to give 8, the logarithm of 8 to base 2 is 3.
logarithm of 81 for base 9 = log9(81) = 2
Explanation: 9^2 = 81. Since 9 must be raised to the power 2 to give 81, the logarithm of 81 to base 9 is 2.
Note: An exponential function is the exact opposite of a logarithmic function. When a value is repeatedly multiplied, it grows exponentially, whereas the number of steps needed to shrink a value by repeated division grows only logarithmically.
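To make the definition concrete, here is a minimal sketch (not from the original article) of an integer logarithm computed by repeated division; the helper name intLog is purely illustrative:
C++
// Minimal sketch: integer logarithm by repeated division.
// Counts how many times `number` can be divided by `base`
// before it drops below `base`, i.e. floor(log_base(number)).
#include <iostream>

int intLog(int base, int number)
{
    int power = 0;
    while (number >= base) {
        number /= base;
        power++;
    }
    return power;
}

int main()
{
    std::cout << intLog(2, 8) << "\n";  // 3, since 2^3 = 8
    std::cout << intLog(9, 81) << "\n"; // 2, since 9^2 = 81
    return 0;
}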
Different Types of Logarithmic Complexities
Now that we know what a logarithm is, let's dive deeper into the different types of logarithmic complexities that exist, such as:
Simple Log Complexity (logₐ b)
Simple logarithmic complexity refers to the log of b to the base a, i.e. the time complexity expressed in terms of base a. In the design and analysis of algorithms, we generally use 2 as the base for logarithmic time complexities. The graph below shows how simple log complexity behaves.
There are several standard algorithms that have logarithmic time complexity, such as binary search on a sorted array, searching in a balanced binary search tree, and computing x^n by repeated squaring.
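As one concrete illustration, here is a small sketch (assuming the standard repeated-squaring idea; the function name fastPow is not from the original article) that computes x^n using only about log2(n) multiplications:
C++
// Sketch: computing x^n in O(log n) time by repeated squaring.
// Each step halves the exponent, so the loop runs about log2(n) times.
#include <iostream>

long long fastPow(long long x, long long n)
{
    long long result = 1;
    while (n > 0) {
        if (n & 1)       // if the lowest bit of n is set
            result *= x; // include the current power of x
        x *= x;          // square the base
        n >>= 1;         // halve the exponent
    }
    return result;
}

int main()
{
    std::cout << fastPow(2, 10) << "\n"; // 1024, computed in ~4 squaring steps
    return 0;
}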
Double Logarithm (log log N)
A double logarithm is the logarithm applied twice, i.e. log(log N): it is the power to which the base must be raised to reach a value x, where x itself is the power to which the base must be raised to reach the given number.
Example:
logarithm (logarithm (256)) for base 2 = log2(log2(256)) = log2(8) = 3.
Explanation: 2^8 = 256. Since 2 must be raised to the power 8 to give 256, the logarithm of 256 to base 2 is 8. Then 2 must be raised to the power 3 to give 8, so log2(8) = 3.
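A loop that repeatedly takes the square root of a value runs roughly log2(log2 N) times, because each square root halves the exponent of N. Here is a small illustrative sketch (my own, assuming we stop once the value drops to 2):
C++
// Sketch: repeatedly taking the square root of N.
// Each sqrt halves the exponent of N, so the loop runs about log2(log2(N)) times.
#include <cmath>
#include <iostream>

int main()
{
    double N = 256;
    int steps = 0;
    while (N > 2) {        // stop once the value is down to the base case
        N = std::sqrt(N);
        steps++;
    }
    std::cout << "Steps: " << steps << "\n"; // 256 -> 16 -> 4 -> 2, i.e. 3 steps = log2(log2(256))
    return 0;
}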
N logarithm N (N * log N)
N * log N complexity refers to the product of N and the log of N to the base 2. This time complexity is generally seen in sorting algorithms such as Quick Sort, Merge Sort and Heap Sort. Here N is the size of the data structure (array) to be sorted and log N is the average number of comparisons needed to place a value at its correct position in the sorted array.
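For instance, sorting an array with a standard comparison sort such as std::sort takes O(N log N) time; a minimal sketch (not from the original article):
C++
// Sketch: std::sort is a comparison sort, so sorting N elements
// takes about N * log N comparisons.
#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> a = {10, 20, 5, 2, 7, 1};
    std::sort(a.begin(), a.end()); // O(N log N)
    for (int x : a)
        std::cout << x << " ";     // 1 2 5 7 10 20
    std::cout << "\n";
    return 0;
}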
Squared Logarithm (log² N)
log² N complexity refers to the square of the log of N to the base 2.
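One way log² N arises is from two nested loops that each halve their counter; here is a purely illustrative counting sketch (my own example, not from the original article):
C++
// Sketch: two nested halving loops.
// The outer loop runs ~log2(N) times and the inner loop ~log2(N) times,
// so the total work is about log2(N) * log2(N) = log^2(N).
#include <iostream>

int main()
{
    int N = 16, operations = 0;
    for (int i = N; i > 1; i /= 2)
        for (int j = N; j > 1; j /= 2)
            operations++;
    std::cout << operations << "\n"; // 16 for N = 16, since log2(16) = 4
    return 0;
}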
N² Logarithm N (N² * log N)
N² * log N complexity refers to the product of the square of N and the log of N to the base 2. This order of time complexity can be seen where an N * N matrix needs to be sorted along its rows. The complexity of sorting each row is N log N, and for N rows it becomes N * (N * log N). Thus the complexity is N² log N.
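A sketch of the row-sorting case described above (sorting every row of an N x N matrix, which costs N * (N log N) = N² log N overall); the example values are arbitrary:
C++
// Sketch: sorting each of the N rows of an N x N matrix.
// Each row sort is O(N log N); doing it for N rows gives O(N^2 * log N).
#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    int N = 3;
    std::vector<std::vector<int>> mat = {{3, 1, 2}, {9, 7, 8}, {6, 4, 5}};
    for (int i = 0; i < N; i++)                  // N rows
        std::sort(mat[i].begin(), mat[i].end()); // N log N per row
    for (const auto& row : mat) {
        for (int x : row)
            std::cout << x << " ";
        std::cout << "\n";
    }
    return 0;
}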
N³ Logarithm N (N³ * log N)
N³ * log N complexity refers to the product of the cube of N and the log of N to the base 2. This order of time complexity can be seen where an N * N * N 3D matrix needs to be sorted along its rows. The complexity of sorting each row is N log N; there are N rows per plane and N planes, so the total work is N * N * (N log N). Thus the complexity is N³ log N.
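Similarly, a brief sketch of the 3D case (my own illustration): the cube has N * N rows, and each row is sorted in N log N time, giving N³ log N in total.
C++
// Sketch: sorting each row of an N x N x N cube.
// There are N*N rows, each sorted in O(N log N), giving O(N^3 * log N).
#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    int N = 4;
    // cube[i][j] is one row of length N, filled with a decreasing pattern
    std::vector<std::vector<std::vector<int>>> cube(
        N, std::vector<std::vector<int>>(N, std::vector<int>(N)));
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            for (int k = 0; k < N; k++)
                cube[i][j][k] = N - k;                           // 4 3 2 1

    for (int i = 0; i < N; i++)                                  // N planes
        for (int j = 0; j < N; j++)                              // N rows per plane
            std::sort(cube[i][j].begin(), cube[i][j].end());     // N log N per row

    for (int x : cube[0][0])
        std::cout << x << " ";                                   // 1 2 3 4
    std::cout << "\n";
    return 0;
}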
Logarithm √N (log √N)
log √N complexity refers to log of square root of N to the base 2.
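A tiny sketch of a loop whose runtime is log(√N): start from √N and repeatedly halve, which takes log2(√N) = (1/2) * log2(N) steps (my own illustration):
C++
// Sketch: halving a counter that starts at sqrt(N).
// The loop runs about log2(sqrt(N)) = 0.5 * log2(N) times.
#include <cmath>
#include <iostream>

int main()
{
    int N = 256, operations = 0;
    for (int i = static_cast<int>(std::sqrt(N)); i > 1; i /= 2)
        operations++;
    std::cout << operations << "\n"; // sqrt(256) = 16 -> 8 -> 4 -> 2, i.e. 4 steps
    return 0;
}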
Examples To Demonstrate Logarithmic Time Complexity
Example 1: logₐ b
Task: We have a number N with an initial value of 16, and the task is to reduce it to 1 by repeatedly dividing it by 2.
Approach:
- Initialize a variable number_of_operations with the value 0.
- Run a loop from N down to 1.
- In each iteration, reduce the value of N to half.
- Increment the number_of_operations variable by one.
- Return the number_of_operations variable.
Implementation:
C++
// C++ code for reducing a number to its logarithm
#include <bits/stdc++.h>
using namespace std;

int main()
{
    int N = 16;
    int number_of_operations = 0;

    cout << "Logarithmic reduction of N: ";
    for (int i = N; i > 1; i = i / 2) {
        cout << i << " ";
        number_of_operations++;
    }
    cout << '\n'
         << "Algorithm Runtime for reducing N to 1: "
         << number_of_operations;
    return 0;
}
Java
// Java code for reducing a number to its logarithm
import java.io.*;

class GFG {
    public static void main(String[] args)
    {
        int N = 16;
        int number_of_operations = 0;

        System.out.print("Logarithmic reduction of N: ");
        for (int i = N; i > 1; i = i / 2) {
            System.out.print(i + " ");
            number_of_operations++;
        }
        System.out.println();
        System.out.print("Algorithm Runtime for reducing N to 1: " + number_of_operations);
    }
}
Python
# Python3 code for the above approach

# Driver Code
if __name__ == "__main__":
    N = 16
    number_of_operations = 0

    print("Logarithmic reduction of N: ", end="")
    i = N
    while i > 1:
        print(i, end=" ")
        number_of_operations += 1
        i = i // 2
    print()
    print("Algorithm Runtime for reducing N to 1:", number_of_operations)

# This code is contributed by sanjoy_62.
C#
// C# implementation of above approach
using System;
using System.Collections.Generic;

class GFG {
    // Driver Code
    public static void Main()
    {
        int N = 16;
        int number_of_operations = 0;

        Console.Write("Logarithmic reduction of N: ");
        for (int i = N; i > 1; i = i / 2) {
            Console.Write(i + " ");
            number_of_operations++;
        }
        Console.WriteLine();
        Console.WriteLine("Algorithm Runtime for reducing N to 1: " + number_of_operations);
    }
}
JavaScript
// JavaScript code for reducing a number to its logarithm
let number_of_operations = 0;
let N = 16;

let reduction = "";
for (let i = N; i > 1; i = i / 2) {
    reduction += i + " ";
    number_of_operations++;
}
console.log("Logarithmic reduction of N: " + reduction);
console.log("Algorithm Runtime for reducing N to 1: " + number_of_operations);
Output:
Logarithmic reduction of N: 16 8 4 2
Algorithm Runtime for reducing N to 1: 4
Explanation:
It is clear from the above algorithm that in each iteration the value is divided by 2, starting from 16 until it reaches 1, which takes 4 operations.
Since the input value is reduced by a factor of 2 in every step, the number of operations required is log2(N), i.e. log2(16) = 4. So, in terms of time complexity, the above algorithm runs in logarithmic time, i.e. O(log2 N).
Example 2: Binary search algorithm (log N)
Linearly searching for a value in an array of size N can be costly, even when the array is sorted. Binary search does this much more efficiently: it halves the search space in each step, giving a complexity of log2(N). The base is 2 because the process repeatedly reduces the problem to half its size.
Consider an array Arr[] = {2, 4, 6, 8, 10, 12, 14, 16}. If we need to find the index of 8, the algorithm works as follows:
C++
// C++ program for finding the index of 8
#include <iostream>
using namespace std;

int find_position(int val, int Arr[], int n, int& steps)
{
    int l = 0, r = n - 1;
    while (l <= r) {
        steps++;
        int m = l + (r - l) / 2;
        if (Arr[m] == val)
            return m;
        else if (Arr[m] < val)
            l = m + 1;
        else
            r = m - 1;
    }
    return -1;
}

// Driver Code
int main()
{
    int Arr[8] = { 2, 4, 6, 8, 10, 12, 14, 16 };
    int steps = 0;

    // Function Call
    int idx = find_position(8, Arr, 8, steps);
    cout << "8 was present on index: " << idx << endl;

    // Since the worst case runtime of Binary search is
    // log(N), the count of steps must not exceed log(N)
    cout << "Algorithm Runtime: " << steps << endl;
    return 0;
}
Java
// Java program for finding the index of 8
import java.io.*;
class GFG {
static int steps = 0;
static int find_position(int val, int Arr[], int n)
{
int l = 0, r = n - 1;
while (l <= r) {
steps++;
int m = l + (r - l) / 2;
if (Arr[m] == val)
return m;
else if (Arr[m] < val)
l = m + 1;
else
r = m - 1;
}
return -1;
}
// Driver Code
public static void main (String[] args)
{
int Arr[] = { 2, 4, 6, 8, 10, 12, 14, 16 };
steps = 0;
// Function Call
int idx = find_position(8, Arr, 8);
System.out.println("8 was present on index: "+idx);
// Since the worst case runtime of Binary search is
// log(N) so the count of steps must be less than log(N)
System.out.println("Algorithm Runtime: " + steps);
}
}
// This code is contributed by Aman Kumar
Python
# Python program for finding the index of 8
def find_position(val, Arr, n):
    global steps
    l = 0
    r = n - 1
    while l <= r:
        steps += 1
        m = l + (r - l) // 2
        if Arr[m] == val:
            return m
        elif Arr[m] < val:
            l = m + 1
        else:
            r = m - 1
    return -1

# Driver code
Arr = [2, 4, 6, 8, 10, 12, 14, 16]
steps = 0

# Function Call
idx = find_position(8, Arr, 8)
print("8 was present on index: {0}".format(idx))

# Since the worst case runtime of Binary search is
# log(N), the count of steps must not exceed log(N)
print("Algorithm Runtime: {0}".format(steps))

# This code is contributed by Pushpesh Raj.
C#
using System;
namespace GFG {
class Program {
static int steps = 0;
static int FindPosition(int val, int[] arr, int n) {
int l = 0, r = n - 1;
while (l <= r) {
steps++;
int m = l + (r - l) / 2;
if (arr[m] == val) {
return m;
}
else if (arr[m] < val) {
l = m + 1;
}
else {
r = m - 1;
}
}
return -1;
}
static void Main(string[] args) {
int[] arr = { 2, 4, 6, 8, 10, 12, 14, 16 };
steps = 0;
int idx = FindPosition(8, arr, 8);
Console.WriteLine("8 was present on index: " + idx);
Console.WriteLine("Algorithm runtime: " + steps);
}
}
}
//This code is contributed by Edula Vinay Kumar Reddy
JavaScript
// JavaScript program for finding the index of 8
let steps = 0;
function find_position(val, Arr, n) {
let l = 0;
let r = n - 1;
while (l <= r) {
steps += 1;
let m = Math.floor(l + (r - l) / 2);
if (Arr[m] === val) {
return m;
} else if (Arr[m] < val) {
l = m + 1;
} else {
r = m - 1;
}
}
return -1;
}
// Driver code
let Arr = [2, 4, 6, 8, 10, 12, 14, 16];
// Function Call
let idx = find_position(8, Arr, 8);
console.log(`8 was present on index: ${idx}`);
// Since the worst case runtime of Binary search is
// log(N) so the count of steps must be less than log(N)
console.log(`Algorithm Runtime: ${steps}`);
Output:
8 was present on index: 3
Algorithm Runtime: 1
Explanation:
Binary search works on the divide and conquer approach. In the above example, finding any value in the array needs only about log2(N) comparisons, and log2(N) for the input size N = 8 is 3. Hence the algorithm can be said to exhibit logarithmic time complexity.
Example 3: Finding all prime numbers up to N (N * log log N)
An example where the time complexity involves a double logarithm, together with a linear factor N, is finding all prime numbers from 1 to N: the classic Sieve of Eratosthenes runs in O(N * log(log N)) time. The implementation below uses a manipulated sieve that additionally stores the smallest prime factor (SPF) of every number.
C++
#include <bits/stdc++.h>
using namespace std;
const long long MAX_SIZE = 1000001;
// isPrime[] : isPrime[i] is true if number is prime
// prime[] : stores all prime number less than N
// SPF[] that store smallest prime factor of number
// [for Exp : smallest prime factor of '8' and '16'
// is '2' so we put SPF[8] = 2 , SPF[16] = 2 ]
vector<long long> isprime(MAX_SIZE, true);
vector<long long> prime;
vector<long long> SPF(MAX_SIZE);
// Function generate all prime number less than N in O(n)
void manipulated_seive(int N)
{
// 0 and 1 are not prime
isprime[0] = isprime[1] = false;
// Fill rest of the entries
for (long long int i = 2; i < N; i++) {
// If isPrime[i] == True then i is
// prime number
if (isprime[i]) {
// put i into prime[] vector
prime.push_back(i);
// A prime number is its own smallest
// prime factor
SPF[i] = i;
}
// Remove all multiples of i*prime[j] which are
// not prime by making isPrime[i*prime[j]] = false
// and put smallest prime factor of i*Prime[j] as
// prime[j] [ for exp :let i = 5 , j = 0 , prime[j]
// = 2 [ i*prime[j] = 10 ] so smallest prime factor
// of '10' is '2' that is prime[j] ] this loop run
// only one time for number which are not prime
for (long long int j = 0;
j < (int)prime.size() && i * prime[j] < N
&& prime[j] <= SPF[i];
j++) {
isprime[i * prime[j]] = false;
// put smallest prime factor of i*prime[j]
SPF[i * prime[j]] = prime[j];
}
}
}
// Driver program to test above function
int main()
{
int N = 13; // Must be less than MAX_SIZE
manipulated_seive(N);
// Print all prime number less than N
for (int i = 0; i < prime.size() && prime[i] <= N; i++)
cout << prime[i] << " ";
return 0;
}
Java
import java.util.*;
public class Main {
static final int MAX_SIZE = 1000001;
// isprime[] : isprime[i] is true if number is prime
// prime[] : stores all prime numbers less than N
// SPF[] that store smallest prime factor of number
// [for Exp : smallest prime factor of '8' and '16'
// is '2' so we put SPF[8] = 2 , SPF[16] = 2 ]
static boolean[] isprime = new boolean[MAX_SIZE];
static List<Integer> prime = new ArrayList<Integer>();
static int[] SPF = new int[MAX_SIZE];
// Function generate all prime numbers less than N in
// O(n)
static void manipulated_seive(int N)
{
Arrays.fill(isprime, true);
// 0 and 1 are not prime
isprime[0] = isprime[1] = false;
// Fill rest of the entries
for (int i = 2; i < N; i++) {
// If isprime[i] is true then i is prime number
if (isprime[i]) {
// put i into prime[] list
prime.add(i);
// A prime number is its own smallest prime
// factor
SPF[i] = i;
}
// Remove all multiples of i*prime[j] which are
// not prime by making isprime[i*prime[j]] =
// false and put the smallest prime factor of
// i*Prime[j] as prime[j] [for example: let i =
// 5, j = 0, prime[j] = 2 [i*prime[j] = 10] so
// the smallest prime factor of '10' is '2' that
// is prime[j]] this loop runs only one time for
// numbers which are not prime
for (int j = 0;
j < prime.size() && i * prime.get(j) < N
&& prime.get(j) <= SPF[i];
j++) {
isprime[i * prime.get(j)] = false;
// put the smallest prime factor of
// i*prime[j]
SPF[i * prime.get(j)] = prime.get(j);
}
}
}
// Driver program to test above function
public static void main(String[] args)
{
int N = 13; // Must be less than MAX_SIZE
manipulated_seive(N);
// Print all prime numbers less than N
for (int i = 0;
i < prime.size() && prime.get(i) <= N; i++) {
System.out.print(prime.get(i) + " ");
}
}
}
// This code is contributed by divyansh2212
Python
# Python3 program to generate all primes less than N

MAX_SIZE = 1000001

# isprime[]: isprime[i] is True if number is prime
# prime[]: stores all prime numbers less than N
# SPF[] stores the smallest prime factor of each number
# [for example: the smallest prime factor of '8' and '16'
# is '2', so we put SPF[8] = 2, SPF[16] = 2]
isprime = [True] * MAX_SIZE
prime = []
SPF = [0] * MAX_SIZE

# Function to generate all prime numbers less than N in O(N)
def manipulated_seive(N):
    global isprime, prime, SPF

    # 0 and 1 are not prime
    isprime[0] = isprime[1] = False

    # Fill rest of the entries
    for i in range(2, N):
        # If isprime[i] is True then i is a prime number
        if isprime[i]:
            # put i into prime[] list
            prime.append(i)
            # A prime number is its own smallest prime factor
            SPF[i] = i

        # Remove all multiples of i*prime[j] which are
        # not prime by making isprime[i*prime[j]] = False
        # and put the smallest prime factor of i*prime[j] as
        # prime[j] [for example: let i = 5, j = 0, prime[j]
        # = 2 [i*prime[j] = 10] so the smallest prime factor
        # of '10' is '2' that is prime[j]] this loop runs
        # only one time for numbers which are not prime
        j = 0
        while j < len(prime) and i * prime[j] < N and prime[j] <= SPF[i]:
            isprime[i * prime[j]] = False
            # put the smallest prime factor of i*prime[j]
            SPF[i * prime[j]] = prime[j]
            j += 1

# Driver program to test above function
if __name__ == "__main__":
    N = 13  # Must be less than MAX_SIZE
    manipulated_seive(N)

    # Print all prime numbers less than N
    for i in range(len(prime)):
        if prime[i] <= N:
            print(prime[i], end=" ")
        else:
            break
C#
using System;
using System.Collections.Generic;
class MainClass
{
static readonly int MAX_SIZE = 1000001;
// isprime[] : isprime[i] is true if number is prime
// prime[] : stores all prime numbers less than N
// SPF[] that store smallest prime factor of number
// [for Exp : smallest prime factor of '8' and '16'
// is '2' so we put SPF[8] = 2 , SPF[16] = 2 ]
static bool[] isprime = new bool[MAX_SIZE];
static List<int> prime = new List<int>();
static int[] SPF = new int[MAX_SIZE];
// Function generate all prime numbers less than N in
// O(n)
static void manipulated_seive(int N)
{
Array.Fill(isprime, true);
// 0 and 1 are not prime
isprime[0] = isprime[1] = false;
// Fill rest of the entries
for (int i = 2; i < N; i++)
{
// If isprime[i] is true then i is prime number
if (isprime[i])
{
// put i into prime[] list
prime.Add(i);
// A prime number is its own smallest prime
// factor
SPF[i] = i;
}
// Remove all multiples of i*prime[j] which are
// not prime by making isprime[i*prime[j]] =
// false and put the smallest prime factor of
// i*Prime[j] as prime[j] [for example: let i =
// 5, j = 0, prime[j] = 2 [i*prime[j] = 10] so
// the smallest prime factor of '10' is '2' that
// is prime[j]] this loop runs only one time for
// numbers which are not prime
for (int j = 0;
j < prime.Count && i * prime[j] < N
&& prime[j] <= SPF[i];
j++)
{
isprime[i * prime[j]] = false;
// put the smallest prime factor of
// i*prime[j]
SPF[i * prime[j]] = prime[j];
}
}
}
// Driver program to test above function
public static void Main(string[] args)
{
int N = 13; // Must be less than MAX_SIZE
manipulated_seive(N);
// Print all prime numbers less than N
for (int i = 0;
i < prime.Count && prime[i] <= N; i++)
{
Console.Write(prime[i] + " ");
}
}
}
JavaScript
const MAX_SIZE = 1000001;
// isprime[]: isprime[i] is true if number is prime
// prime[]: stores all prime numbers less than N
// SPF[] that store smallest prime factor of number
// [for Exp: smallest prime factor of '8' and '16'
// is '2' so we put SPF[8] = 2, SPF[16] = 2]
let isprime = Array(MAX_SIZE).fill(true);
let prime = [];
let SPF = Array(MAX_SIZE).fill(0);
// Function generate all prime numbers less than N in O(n)
function manipulated_seive(N) {
// 0 and 1 are not prime
isprime[0] = isprime[1] = false;
// Fill rest of the entries
for (let i = 2; i < N; i++) {
// If isprime[i] is true then i is prime number
if (isprime[i]) {
// put i into prime[] list
prime.push(i);
// A prime number is its own smallest prime factor
SPF[i] = i;
}
// Remove all multiples of i*prime[j] which are
// not prime by making isprime[i*prime[j]] = false
// and put the smallest prime factor of i*Prime[j] as
// prime[j] [for example: let i = 5, j = 0, prime[j]
// = 2 [i*prime[j] = 10] so the smallest prime factor
// of '10' is '2' that is prime[j]] this loop runs
// only one time for numbers which are not prime
let j = 0;
while (j < prime.length && i * prime[j] < N && prime[j] <= SPF[i]) {
isprime[i * prime[j]] = false;
// put the smallest prime factor of i*prime[j]
SPF[i * prime[j]] = prime[j];
j++;
}
}
}
// Driver program to test above function
const N = 13; // Must be less than MAX_SIZE
manipulated_seive(N);
// Print all prime numbers less than N
console.log(prime.join(' '));
// Contributed by adityasha4x71
In the above example, the complexity of finding all prime numbers in the range 1 to N with the classic sieve is O(N * log(log N)); the smallest-prime-factor variant shown above even achieves O(N), as noted in its comments.
Practice Problems for Logarithmic Time Complexity
Comparison of various Logarithmic Time Complexities
Below is a graph to show the comparison between different logarithmic time complexities that have been discussed above:
Conclusion
From the above discussion, we conclude that analyzing an algorithm is very important for choosing an appropriate one, and that the logarithmic orders of complexity are among the most efficient time complexities.