Difference between Recursion and Dynamic Programming
Last Updated: 17 Jan, 2024
Recursion and dynamic programming are two effective techniques for solving large problems by breaking them into smaller, more manageable subproblems. Despite their similarities, they differ in some significant ways.
Below is the difference between recursion and dynamic programming in tabular format:
| Recursion | Dynamic Programming |
| --- | --- |
| A function calls itself to solve a problem, breaking it down into smaller instances of the same problem until a base condition is met. | A technique that breaks a problem into smaller subproblems and stores the results of those subproblems to avoid repeated calculations. |
| Recursion frequently employs a top-down approach, in which the primary problem is broken down into more manageable subproblems. | Dynamic programming typically uses a bottom-up approach, solving the smallest subproblems first and building up to the primary problem. |
| To avoid infinite loops, it needs a base case (termination condition) that stops the recursion when a certain condition is satisfied. | Dynamic programming also needs base cases, but it focuses mostly on solving subproblems iteratively. |
| Recursion can be slower due to the overhead of function calls and redundant calculations. | Dynamic programming is often faster because each subproblem is solved only once, via memoization (sketched just below this table) or tabulation. |
| It does not require extra memory beyond the call stack. | Dynamic programming requires additional memory to store intermediate results. |
| Examples include computing factorials and the naive Fibonacci sequence. | Examples include computing the nth term of a series bottom-up and the knapsack problem. |
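To make the memoization idea concrete before the full examples below, here is a minimal sketch of top-down dynamic programming for Fibonacci. C++ is used only for illustration, and the function name fib_memo and the vector-based cache are assumptions of this sketch, not part of the examples that follow: the recursive structure is kept, but each subproblem's result is cached so it is computed only once.
C++
#include <iostream>
#include <vector>
// Top-down DP (memoization): keep the recursive structure, but cache results.
// memo[i] == -1 means "F(i) has not been computed yet".
int fib_memo(int n, std::vector<int>& memo) {
    if (n <= 1) {
        return n;                // base cases: F(0) = 0, F(1) = 1
    }
    if (memo[n] != -1) {
        return memo[n];          // reuse a result computed earlier
    }
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo);
    return memo[n];
}
int main() {
    int n = 5;
    std::vector<int> memo(n + 1, -1);
    std::cout << fib_memo(n, memo) << std::endl;  // prints 5
    return 0;
}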
Let's take an example: find the Nth Fibonacci number.
1) Using Recursion:
Below is the code for computing the Nth Fibonacci number using recursion:
C++
#include <iostream>
int fibonacci_recursive(int n) {
if (n <= 1) {
return n;
} else {
return fibonacci_recursive(n - 1) + fibonacci_recursive(n - 2);
}
}
int main() {
int n = 5;
int result = fibonacci_recursive(n);
std::cout << result << std::endl;
return 0;
}
Java
public class GFG {
// Recursive function to calculate the nth Fibonacci number
public static int fibonacciRecursive(int n) {
if (n <= 1) {
// Base case: Fibonacci of 0 is 0, and Fibonacci of 1 is 1
return n;
} else {
// Recursively calculate Fibonacci for n-1 and n-2
return fibonacciRecursive(n - 1) + fibonacciRecursive(n - 2);
}
}
public static void main(String[] args) {
int n = 5;
int result = fibonacciRecursive(n);
System.out.println(result);
}
}
Python
def fibonacci_recursive(n):
if n <= 1:
return n
else:
return fibonacci_recursive(n-1) + fibonacci_recursive(n-2)
print(fibonacci_recursive(5))
C#
using System;
public class Solution
{
public int FibonacciRecursive(int n)
{
if (n <= 1)
{
return n;
}
else
{
return FibonacciRecursive(n - 1) + FibonacciRecursive(n - 2);
}
}
static void Main()
{
Solution solution = new Solution();
Console.WriteLine(solution.FibonacciRecursive(5));
}
}
JavaScript
function fibonacciRecursive(n) {
if (n <= 1) {
return n;
} else {
return fibonacciRecursive(n - 1) + fibonacciRecursive(n - 2);
}
}
const n = 5;
const result = fibonacciRecursive(n);
console.log(result);
Time Complexity: O(2^n), which is highly inefficient: the same subproblems are recomputed many times, so the number of calls grows exponentially with n (see the call-counting sketch below).
Auxiliary Space: O(n). Recursion consumes memory on the call stack for each active call, and the deepest chain of calls here is n levels; very deep or inefficient recursion can lead to stack overflow errors.
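To see the exponential blow-up directly, a small experiment helps: count how many times the naive recursive function is invoked. This is an illustrative sketch (the global call counter and the specific values of n are assumptions made for the demonstration):
C++
#include <iostream>
// Counts how many times the naive recursive Fibonacci is invoked.
long long callCount = 0;
long long fib(int n) {
    ++callCount;                 // one more node in the recursion tree
    if (n <= 1) {
        return n;
    }
    return fib(n - 1) + fib(n - 2);
}
int main() {
    fib(20);
    std::cout << "fib(20): " << callCount << " calls\n";   // 21891 calls
    callCount = 0;
    fib(30);
    std::cout << "fib(30): " << callCount << " calls\n";   // 2692537 calls
    return 0;
}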
2) Using Dynamic Programming:
Below is the code for computing the Nth Fibonacci number using dynamic programming (bottom-up tabulation):
C++
#include <iostream>
#include <vector>
int fibonacci_dp(int n) {
    // Handle n = 0 and n = 1 directly so the table writes below stay in bounds
    if (n <= 1) {
        return n;
    }
    std::vector<int> fib(n + 1, 0);
    fib[1] = 1;
for (int i = 2; i <= n; ++i) {
fib[i] = fib[i - 1] + fib[i - 2];
}
return fib[n];
}
int main() {
std::cout << fibonacci_dp(5) << std::endl;
return 0;
}
Java
import java.util.Arrays;
public class FibonacciDP {
static int fibonacciDP(int n) {
    // Handle n = 0 and n = 1 directly so the array writes below stay in bounds
    if (n <= 1) {
        return n;
    }
    int[] fib = new int[n + 1];
    fib[1] = 1;
for (int i = 2; i <= n; ++i) {
fib[i] = fib[i - 1] + fib[i - 2];
}
return fib[n];
}
public static void main(String[] args) {
System.out.println(fibonacciDP(5));
}
}
Python
def fibonacci_dp(n):
    # Handle n = 0 and n = 1 directly so the list indexing below stays in bounds
    if n <= 1:
        return n
    fib = [0] * (n + 1)
    fib[1] = 1
    for i in range(2, n + 1):
        fib[i] = fib[i - 1] + fib[i - 2]
    return fib[n]
print(fibonacci_dp(5))
C#
using System;
class Program
{
// Function to calculate the nth Fibonacci number using dynamic programming
static int FibonacciDP(int n)
{
    // Handle n = 0 and n = 1 directly so the array writes below stay in bounds
    if (n <= 1)
    {
        return n;
    }
    // Create an array to store Fibonacci numbers
    int[] fib = new int[n + 1];
    // Initialize the second Fibonacci number (fib[0] is already 0)
    fib[1] = 1;
// Calculate Fibonacci numbers from the bottom up
for (int i = 2; i <= n; ++i)
{
fib[i] = fib[i - 1] + fib[i - 2];
}
// Return the nth Fibonacci number
return fib[n];
}
static void Main()
{
// Test the FibonacciDP function with n = 5
Console.WriteLine(FibonacciDP(5));
}
}
JavaScript
function fibonacci_dp(n) {
const fib = new Array(n + 1).fill(0);
fib[1] = 1;
for (let i = 2; i <= n; ++i) {
fib[i] = fib[i - 1] + fib[i - 2];
}
return fib[n];
}
console.log(fibonacci_dp(5));
Time Complexity: O(n), a vast improvement over the exponential time of the naive recursion.
Auxiliary Space: Dynamic programming may have higher space complexity because it stores results in a table; in this Fibonacci example it is O(n) for the fib array. For Fibonacci this can even be reduced to O(1), as sketched below.
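For Fibonacci specifically, the full table is not strictly necessary: each term depends only on the previous two, so the O(n) array can be replaced by two variables. A minimal sketch of this space-optimised variant (the function name fibonacci_optimized is an illustrative choice, not used elsewhere in this article):
C++
#include <iostream>
// Bottom-up Fibonacci that keeps only the last two values,
// reducing auxiliary space from O(n) to O(1).
int fibonacci_optimized(int n) {
    if (n <= 1) {
        return n;
    }
    int prev = 0, curr = 1;      // F(0) and F(1)
    for (int i = 2; i <= n; ++i) {
        int next = prev + curr;  // F(i) = F(i-1) + F(i-2)
        prev = curr;
        curr = next;
    }
    return curr;
}
int main() {
    std::cout << fibonacci_optimized(5) << std::endl;  // prints 5
    return 0;
}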
Application of Recursion:
- Finding the Fibonacci sequence
- Finding the factorial of a number
- Binary tree traversals such as in-order, pre-order, and post-order traversals (a short in-order sketch follows this list).
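As an example of the traversal case, an in-order traversal falls out of recursion almost directly. A minimal sketch (the Node struct and the three-node sample tree are illustrative assumptions):
C++
#include <iostream>
// Minimal binary tree node used only for this illustration.
struct Node {
    int value;
    Node* left;
    Node* right;
    Node(int v) : value(v), left(nullptr), right(nullptr) {}
};
// In-order traversal: left subtree, current node, right subtree.
void inorder(const Node* root) {
    if (root == nullptr) {
        return;                  // base case: empty subtree
    }
    inorder(root->left);
    std::cout << root->value << ' ';
    inorder(root->right);
}
int main() {
    // Build the tree:   2
    //                  / \
    //                 1   3
    Node* root = new Node(2);
    root->left = new Node(1);
    root->right = new Node(3);
    inorder(root);               // prints: 1 2 3
    std::cout << std::endl;
    delete root->left;
    delete root->right;
    delete root;
    return 0;
}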
Application of Dynamic Programming:
- Calculation of Fibonacci numbers and storing the results
- Finding longest common subsequences (see the LCS sketch after this list)
- Finding shortest paths in graphs with dynamic programming based algorithms such as Bellman-Ford and Floyd-Warshall
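To illustrate the longest-subsequence entry, here is a minimal bottom-up sketch of the classic longest common subsequence (LCS) length. The function name lcs_length and the two sample strings are illustrative assumptions:
C++
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>
// dp[i][j] = length of the LCS of the first i characters of a
//            and the first j characters of b.
int lcs_length(const std::string& a, const std::string& b) {
    int n = a.size(), m = b.size();
    std::vector<std::vector<int>> dp(n + 1, std::vector<int>(m + 1, 0));
    for (int i = 1; i <= n; ++i) {
        for (int j = 1; j <= m; ++j) {
            if (a[i - 1] == b[j - 1]) {
                dp[i][j] = dp[i - 1][j - 1] + 1;                  // characters match: extend
            } else {
                dp[i][j] = std::max(dp[i - 1][j], dp[i][j - 1]);  // skip one character
            }
        }
    }
    return dp[n][m];
}
int main() {
    std::cout << lcs_length("ABCBDAB", "BDCABA") << std::endl;  // prints 4
    return 0;
}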
Conclusion:
Recursion and dynamic programming build on the same idea of breaking a problem into subproblems, but they differ in how they handle optimisation and memory usage. The nature of the problem and the requirements of the solution determine which option is best.