
Name: Aman Pandey

Enrollment number: 07419011922
Batch: B1 DS

Design and Analysis of Algorithms Lab


Lab Sheet - 3
Topic: Divide and Conquer

Q1. Implement Strassen's multiplication method (using the Divide and Conquer strategy) and the naive multiplication method. Compare these methods in terms of time taken on nXn matrices where n = 3, 4, 5, 6, 7 and 8 (compare in a bar graph).

Analysis

The code compares the performance of two matrix multiplication methods: naive multiplication and Strassen's algorithm. Naive multiplication uses standard nested loops, while Strassen's algorithm employs a divide-and-conquer approach. The code generates random matrices of the required sizes, measures the time taken by each method to multiply them, and plots the comparison as a bar graph. Although Strassen's algorithm has a better asymptotic complexity, in practice its recursion and padding overhead usually make it slower on matrices as small as these.

Algorithm
1. Naive Multiplication (naive_multiply):
• Calculate each element of the output matrix by iterating through rows and columns, computing the dot product.
2. Strassen's Multiplication (strassen):
• Recursively divide the matrices into submatrices.
• Compute the seven products (M1 to M7) recursively.
• Combine the products into the submatrices of the result.
3. Matrix Generation:
• Generate random matrices of the required sizes (done inline with np.random.randint).
4. Comparison:
• Measure the time taken by both methods for each matrix size.
5. Plotting (matplotlib):
• Use matplotlib to visualize the comparison as a bar graph.
Code
import numpy as np
import time
import matplotlib.pyplot as plt

def strassen(A, B):
    """
    Implement Strassen's algorithm for matrix multiplication.
    """
    original_n = A.shape[0]
    n = original_n

    # If the matrix size is not a power of 2, pad with zeros
    if (n & (n - 1)) != 0:
        new_size = 2 ** (int(np.log2(n)) + 1)
        A = np.pad(A, ((0, new_size - n), (0, new_size - n)), mode='constant')
        B = np.pad(B, ((0, new_size - n), (0, new_size - n)), mode='constant')
        n = new_size

    # Base case: multiply small matrices directly
    if n <= 2:
        return np.dot(A, B)[:original_n, :original_n]

    # Divide matrices into sub-matrices
    A11, A12, A21, A22 = A[:n//2, :n//2], A[:n//2, n//2:], A[n//2:, :n//2], A[n//2:, n//2:]
    B11, B12, B21, B22 = B[:n//2, :n//2], B[:n//2, n//2:], B[n//2:, :n//2], B[n//2:, n//2:]

    # Compute the seven intermediate products
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)

    # Combine intermediate values into the quadrants of the result
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6

    # Combine the four sub-matrices into the final result matrix
    C = np.block([[C11, C12],
                  [C21, C22]])

    # Remove the padding if it was added
    return C[:original_n, :original_n]

def naive_multiply(A, B):
    """
    Implement the naive matrix multiplication algorithm.
    """
    n = A.shape[0]
    C = np.zeros((n, n))

    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i, j] += A[i, k] * B[k, j]

    return C

# Compare performance of Strassen's and naive multiplication methods
matrix_sizes = [3, 4, 5, 6, 7, 8]
strassen_times = []
naive_times = []

for n in matrix_sizes:
    A = np.random.randint(1, 11, (n, n))
    B = np.random.randint(1, 11, (n, n))

    start_time = time.time()
    C_strassen = strassen(A, B)
    strassen_time = time.time() - start_time
    strassen_times.append(strassen_time)

    start_time = time.time()
    C_naive = naive_multiply(A, B)
    naive_time = time.time() - start_time
    naive_times.append(naive_time)

# Plot the results
fig, ax = plt.subplots(figsize=(12, 6))
x = np.arange(len(matrix_sizes))
width = 0.35

ax.bar(x - width/2, strassen_times, width, color='blue', label='Strassen')
ax.bar(x + width/2, naive_times, width, color='red', label='Naive')

ax.set_xlabel('Matrix Size (n x n)')
ax.set_ylabel('Time (seconds)')
ax.set_title('Comparison of Strassen\'s and Naive Multiplication Methods')
ax.set_xticks(x)
ax.set_xticklabels(matrix_sizes)
ax.legend()

plt.show()
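As a quick sanity check (not part of the original timing script), the output of strassen can be compared against NumPy's built-in product before timing, for example:

A = np.random.randint(1, 11, (5, 5))
B = np.random.randint(1, 11, (5, 5))
assert np.array_equal(strassen(A, B), np.dot(A, B))  # Strassen must agree with np.dot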
Output
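(The script displays a bar chart comparing the measured times of Strassen's and naive multiplication for n = 3 to 8; absolute timings vary from machine to machine.)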

Q2. Implement the multiplication of two N-bit numbers (using the Divide and Conquer strategy) and the naive multiplication method. Compare these methods in terms of time taken using N-bit numbers where N = 4, 8, 16, 32 and 64.

Analysis
1. Methods:
• Implement divide-and-conquer and naive multiplication for N-bit numbers.
2. Comparison:
• Compare the time taken to multiply N-bit numbers (N = 4, 8, 16, 32, 64).
3. Conclusion:
• Evaluate which method is more efficient in practice for these sizes.
Divide And Conquer Algorithm

1. Function DivideAndConquerMultiplication(x, y):
2.     if either x or y is a single bit:
3.         return the product of x and y
4.     else:
5.         Split x and y into two halves, x1, x0 and y1, y0
6.         Recursively compute:
7.             a = DivideAndConquerMultiplication(x1, y1)
8.             b = DivideAndConquerMultiplication(x1, y0)
9.             c = DivideAndConquerMultiplication(x0, y1)
10.            d = DivideAndConquerMultiplication(x0, y0)
11.        Calculate the product as follows:
12.            return (a << n) + ((b + c) << (n/2)) + d
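A minimal Python sketch of this four-product recursion (splitting on bits with bit_length; the name dc_multiply_bits is illustrative, and the shift 2*half is used so that odd bit-lengths are handled correctly):

def dc_multiply_bits(x, y):
    # Base case: a single-bit operand can be multiplied directly
    if x < 2 or y < 2:
        return x * y
    n = max(x.bit_length(), y.bit_length())
    half = n // 2
    x1, x0 = x >> half, x & ((1 << half) - 1)   # high and low halves of x
    y1, y0 = y >> half, y & ((1 << half) - 1)   # high and low halves of y
    a = dc_multiply_bits(x1, y1)
    b = dc_multiply_bits(x1, y0)
    c = dc_multiply_bits(x0, y1)
    d = dc_multiply_bits(x0, y0)
    # x*y = a*2^(2*half) + (b + c)*2^half + d
    return (a << (2 * half)) + ((b + c) << half) + d

Note that the Code section below instead uses the Karatsuba variant, which replaces the four recursive products with three.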

Naive Multiplication Algorithm:

1. Function NaiveMultiplication(x, y):
2.     Initialize result = 0
3.     Iterate i over the range of bits in x:
4.         If the ith bit of x is 1:
5.             Add y shifted left by i bits to result
6.     Return result
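A minimal Python sketch of this shift-and-add procedure (the name naive_multiply_bits is illustrative; the Code section below simply uses Python's built-in * operator as its naive baseline):

def naive_multiply_bits(x, y):
    result = 0
    for i in range(x.bit_length()):
        if (x >> i) & 1:          # if the i-th bit of x is set
            result += y << i      # add y shifted left by i bits
    return result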

Code
import time

def divide_and_conquer_multiply(x, y):
    # Karatsuba-style divide and conquer: three recursive multiplications
    # instead of four, splitting the numbers by decimal digits.
    if x < 10 or y < 10:
        return x * y

    n = max(len(str(x)), len(str(y)))
    half = n // 2

    a = x // 10 ** half        # high half of x
    b = x % (10 ** half)       # low half of x
    c = y // 10 ** half        # high half of y
    d = y % (10 ** half)       # low half of y

    ac = divide_and_conquer_multiply(a, c)
    bd = divide_and_conquer_multiply(b, d)
    ad_bc = divide_and_conquer_multiply(a + b, c + d) - ac - bd

    return ac * 10 ** (2 * half) + ad_bc * 10 ** half + bd

def naive_multiply(x, y):
    # Python's built-in multiplication serves as the naive baseline.
    return x * y

if __name__ == "__main__":
    bit_sizes = [4, 8, 16, 32, 64]
    print("Bit Size\tDivide & Conquer (s)\tNaive Multiplication (s)")
    for n in bit_sizes:
        x = 2 ** n - 1  # Generate an N-bit number (all bits set)
        y = 2 ** n - 1  # Generate an N-bit number (all bits set)

        start_time = time.time()
        divide_and_conquer_multiply(x, y)
        dnc_time = time.time() - start_time

        start_time = time.time()
        naive_multiply(x, y)
        naive_time = time.time() - start_time

        print(f"{n}\t\t{dnc_time:.6f}\t\t\t{naive_time:.6f}")

Output

Bit Size    Divide & Conquer (s)    Naive Multiplication (s)
4           0.000011                0.000001
8           0.000016                0.000001
16          0.000048                0.000001
32          0.000140                0.000001
64          0.000367                0.000001
Q3. Maximum Value Contiguous Subsequence: Given a sequence of n
numbers A(1) ...A(n), give an algorithm for finding a contiguous
subsequence A(i) ...A(j) for which the sum of elements in the
subsequence is maximum. Example : {-2, 11, -4, 13, -5, 2} → 20 and {1,
-3, 4, -2, -1, 6 } → 7.
The solution below uses Kadane's algorithm: scan the array once, keeping the maximum sum of a subarray ending at the current position, which runs in O(n) time and O(1) extra space.

def max_subarray(arr):
    # Kadane's algorithm: track the best sum ending here and the best seen so far
    max_ending_here = max_so_far = arr[0]

    for num in arr[1:]:
        max_ending_here = max(num, max_ending_here + num)
        max_so_far = max(max_so_far, max_ending_here)

    return max_so_far

# Example usage:
arr1 = [-2, 11, -4, 13, -5, 2]
arr2 = [1, -3, 4, -2, -1, 6]

print("Maximum sum of contiguous subsequence in arr1:", max_subarray(arr1))  # Output: 20
print("Maximum sum of contiguous subsequence in arr2:", max_subarray(arr2))  # Output: 7

Maximum sum of contiguous subsequence in arr1: 20
Maximum sum of contiguous subsequence in arr2: 7

Q4. Implement the algorithm (Algo_1) presented below and discuss which
task this algorithm performs. Also, analyse the time complexity and space
complexity of the given algorithm. Further, implement the algorithm with
following modification: replace m = ⌈2n/3⌉ with m = ⌊2n/3⌋, and compare
the tasks performed by the given algorithm and modified algorithm.

Algo_1(A[0 ... n-1])
{
    if n = 2 and A[0] > A[1]
        swap A[0] ↔ A[1]
    else if n > 2
        m = ⌈2n/3⌉
        Algo_1(A[0 .. m − 1])
        Algo_1(A[n − m .. n − 1])
        Algo_1(A[0 .. m − 1])
}

The given algorithm, Algo_1, is a recursive algorithm that operates on an array A of size n. Let us discuss the task it performs and analyse its time and space complexity.
Task Performed by Algo_1:

Algo_1 sorts the array A in non-decreasing order. It does so by recursively sorting the first m = ⌈2n/3⌉ elements, then the last m elements, then the first m elements again; the overlap between the two parts is what moves elements across the array (this is the recursion pattern of stooge sort).
Time Complexity Analysis:

Let T(n) be the running time of Algo_1 on an array of size n. Each call with n > 2 makes three recursive calls on subarrays of size m = ⌈2n/3⌉ and does only a constant amount of extra work, giving the recurrence T(n) = 3T(⌈2n/3⌉) + O(1).

This recurrence solves to a polynomial of degree roughly 2.71 (see the derivation below), which is worse than the O(n²) of simple sorts such as insertion sort.
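A short derivation in LaTeX notation, using the master theorem with a = 3 subproblems of size n/b, b = 3/2, and constant work per call:

T(n) = 3\,T\!\left(\lceil 2n/3 \rceil\right) + \Theta(1)
\;\Rightarrow\;
T(n) = \Theta\!\left(n^{\log_{3/2} 3}\right) \approx \Theta\!\left(n^{2.7095}\right)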
Space Complexity Analysis:

The algorithm uses no auxiliary data structures, so its space usage comes from the recursion stack. The subarray size shrinks by a constant factor (2/3) at each level, so the recursion depth, and hence the extra space, is O(log n), where n is the size of the input array.
Modified Algorithm:

Now consider the modified version of Algo_1 in which m = ⌊2n/3⌋ is used instead of ⌈2n/3⌉. The only change is that m is rounded down instead of up.
Comparison:

The rounding matters. With m = ⌈2n/3⌉ the first and last parts always overlap in at least n/3 elements, and this overlap is what allows elements to move between the two halves of the array. With m = ⌊2n/3⌋ the overlap can shrink or disappear: for n = 4, m = ⌊8/3⌋ = 2, so the calls merely sort A[0..1] and A[2..3] independently and never exchange elements between them; the input [3, 4, 1, 2] is left unsorted. The modified algorithm therefore does not sort all inputs.

Summary:

The given algorithm, Algo_1, sorts the array A in non-decreasing order using a stooge-sort-style recursion, with running time Θ(n^(log_{3/2} 3)) ≈ Θ(n^2.71) and O(log n) stack space.

The modified algorithm, with m = ⌊2n/3⌋, does not perform the same task: on some inputs it only partially orders the array, as illustrated by the n = 4 example above and the test that follows the code below.

def modified_algo_1(A, start, end):
    # Modified Algo_1 on the inclusive range A[start .. end], with m = floor(2n/3)
    n = end - start + 1
    if n == 2 and A[start] > A[end]:
        A[start], A[end] = A[end], A[start]
    elif n > 2:
        m = (2 * n) // 3                            # m = floor(2n/3)
        modified_algo_1(A, start, start + m - 1)    # first m elements
        modified_algo_1(A, end - m + 1, end)        # last m elements
        modified_algo_1(A, start, start + m - 1)    # first m elements again
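For comparison, a sketch of the original Algo_1 with m = ⌈2n/3⌉, using the same 0-indexed inclusive bounds as modified_algo_1 above, followed by a quick check on the n = 4 example from the comparison:

def algo_1(A, start, end):
    # Original Algo_1 on the inclusive range A[start .. end], with m = ceil(2n/3)
    n = end - start + 1
    if n == 2 and A[start] > A[end]:
        A[start], A[end] = A[end], A[start]
    elif n > 2:
        m = (2 * n + 2) // 3                # m = ceil(2n/3) in integer arithmetic
        algo_1(A, start, start + m - 1)     # first m elements
        algo_1(A, end - m + 1, end)         # last m elements
        algo_1(A, start, start + m - 1)     # first m elements again

a = [3, 4, 1, 2]
b = [3, 4, 1, 2]
algo_1(a, 0, len(a) - 1)
modified_algo_1(b, 0, len(b) - 1)
print(a)  # [1, 2, 3, 4] -- the ceiling version sorts correctly
print(b)  # [3, 4, 1, 2] -- the floor version leaves this input unsorted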
