
ADAMA SCIENCE AND TECHNOLOGY UNIVERSITY

Computer Science and Engineering

Design and Analysis of Algorithms [CSEg2208]: Assignment 2


Individual Assignment 2

Name: Guda Tiruneh
ID No: ugr/30603/15

Submitted to: Instructor

Submission date: Tuesday, Sept 19

Individual Assignment 2: Questions and Answers
1. Consider the numbers (A_n)_{n>0} = (1, 1, 3, 4, 8, 11, 21, 29, 55, ...) defined as follows:

A_1 = A_2 = 1
A_n = B_{n-1} + A_{n-2} for n > 2

B_1 = B_2 = 2
B_n = A_{n-1} + B_{n-2} for n > 2

A_n can be computed using the following recursive procedures:

ComputeA(n)
    if n < 3 then
        return 1
    else
        return ComputeB(n-1) + ComputeA(n-2)
    fi
end

ComputeB(n)
    if n < 3 then
        return 2
    else
        return ComputeA(n-1) + ComputeB(n-2)
    fi
end

(a) Show that the running time T_A(n) of ComputeA(n) is exponential in n. (Hint: Show, for example, that T_A(n) = Ω(2^(n/2)).)

(b) Describe and analyze a more efficient algorithm for computing A_n.

ANSWER:
(a) Let us analyze the recursive procedures ComputeA(n) and ComputeB(n). For n > 2, the functions are defined as:

- ComputeA(n) = ComputeB(n-1) + ComputeA(n-2)
- ComputeB(n) = ComputeA(n-1) + ComputeB(n-2)

Each call to ComputeA(n) makes two recursive calls, one to ComputeB(n-1) and one to ComputeA(n-2), and each call to ComputeB(n) likewise makes two recursive calls, one to ComputeA(n-1) and one to ComputeB(n-2).

This pattern of recursion is highly inefficient because the same subproblems are recomputed multiple times. If we let T_A(n) and T_B(n) denote the running times of ComputeA(n) and ComputeB(n), respectively, we obtain:

- T_A(n) = T_B(n-1) + T_A(n-2) + O(1)
- T_B(n) = T_A(n-1) + T_B(n-2) + O(1)

Every call with argument n >= 3 spawns two further calls, and the argument shrinks by at most 2 per call, so the recursion tree is a full binary tree of depth at least n/2 before the base cases (n < 3) are reached. More precisely, let T(n) = min(T_A(n), T_B(n)); since running time is nondecreasing in n, both recurrences give T(n) >= 2·T(n-2), and unrolling this n/2 times yields T(n) >= 2^(n/2)·T(1). As in the naive Fibonacci recursion, each level of the tree doubles the number of calls, so T_A(n) = Ω(2^(n/2)), which is exponential.

(b) A More Efficient Algorithm for Computing A_n


The inefficiency in the original recursive algorithm arises because it recomputes the same values
multiple times. To improve this, we can use dynamic programming to store already computed
values and avoid redundant calculations.

Here is a dynamic programming approach:

1. Initialize two arrays A and B of size n + 1 (so they can be indexed from 1 to n).

2. Set the base cases:
   - A[1] = 1, A[2] = 1
   - B[1] = 2, B[2] = 2

3. For each i from 3 to n, compute:
   - A[i] = B[i-1] + A[i-2]
   - B[i] = A[i-1] + B[i-2]

4. Return A[n]

Here is Python code for this dynamic programming approach:

def ComputeA(n):
    if n < 3:
        return 1  # Base cases A[1] = A[2] = 1 (also guards small n)
    A = [0] * (n + 1)
    B = [0] * (n + 1)

    # Base cases
    A[1] = A[2] = 1
    B[1] = B[2] = 2

    # Dynamic programming for n >= 3
    for i in range(3, n + 1):
        A[i] = B[i-1] + A[i-2]
        B[i] = A[i-1] + B[i-2]

    return A[n]

Time Complexity:

- This algorithm runs in O(n) time since each value of A[i] and B[i] is computed exactly once.

- The space complexity is O(n), but it can be reduced to O(1) by keeping only the last two values of A and B, as sketched below.
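A minimal sketch of the O(1)-space variant (the function name is ours), keeping only the two most recent values of each sequence:

def ComputeA_constant_space(n):
    # Base cases: A[1] = A[2] = 1
    if n < 3:
        return 1
    a2, a1 = 1, 1  # A[i-2], A[i-1]
    b2, b1 = 2, 2  # B[i-2], B[i-1]
    for i in range(3, n + 1):
        a = b1 + a2  # A[i] = B[i-1] + A[i-2]
        b = a1 + b2  # B[i] = A[i-1] + B[i-2]
        a2, a1 = a1, a
        b2, b1 = b1, b
    return a1

# ComputeA_constant_space(9) == 55, matching the sequence in the problem statement.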

2. Given a set {x1 <= x2 <= … <= xn} of points on the real line, determine the smallest set of unit-length closed intervals (e.g. the interval [1.25, 2.25] includes all xi such that 1.25 <= xi <= 2.25) that contains all of the points. Give the most efficient algorithm you can to solve this problem, prove it is correct, and analyze the time complexity.

ANSWER:
Greedy Algorithm:

We can use a greedy algorithm to solve this problem optimally. The key observation is that to
minimize the number of intervals, we should cover as many points as possible with each interval.

Greedy Strategy:

1. Start by selecting the leftmost uncovered point.

2. Place a unit-length interval starting at that point (i.e., covering the range from x_i to x_i + 1).

3. Skip all points that are covered by this interval.

4. Repeat the process until all points are covered.

pseudocode:

def smallest_unit_intervals(points):
    points.sort()  # Ensure the points are sorted
    intervals = []
    i = 0
    n = len(points)
    while i < n:
        # Place an interval starting at the current point
        interval_start = points[i]
        interval_end = interval_start + 1
        intervals.append((interval_start, interval_end))
        # Move to the first point that is not covered by this interval
        while i < n and points[i] <= interval_end:
            i += 1
    return intervals

Example:
Given points {1.2, 2.5, 2.8, 3.0, 4.6, 5.7}:

First interval: [1.2,2.2], which covers point 1.2.

Second interval: [2.5,3.5], which covers points 2.5, 2.8, 3.0.

Third interval: [4.6,5.6], which covers point 4.6.

Fourth interval: [5.7,6.7], which covers point 5.7.

Thus, the set of intervals is {[1.2,2.2],[2.5,3.5],[4.6,5.6],[5.7,6.7]}.
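As a quick check, calling the function above on these points reproduces the same intervals:

points = [1.2, 2.5, 2.8, 3.0, 4.6, 5.7]
print(smallest_unit_intervals(points))
# Output: [(1.2, 2.2), (2.5, 3.5), (4.6, 5.6), (5.7, 6.7)]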

Correctness:

We use an exchange argument. The leftmost uncovered point x must be covered by some interval in any solution, and any unit-length interval covering x has its right endpoint at most x + 1. Therefore the greedy interval [x, x + 1] covers every remaining point that such an alternative interval could cover, so it can be exchanged into any optimal solution without increasing the number of intervals. Applying this argument inductively to the points left uncovered shows that the greedy algorithm uses the minimum possible number of intervals.
Time Complexity:

- Sorting the points takes O(n log n).

- The while loop that selects intervals runs in O(n), as each point is processed exactly once.

Thus, the overall time complexity is O(n log n), which is the most efficient possible for this problem due to the sorting step.

3. Suppose you were to drive from Adama to Nekemte. Your gas tank, when full, holds enough gas to travel m kilometers, and you have a map that gives distances between gas stations along the route. Let d1 < d2 < … < dn be the locations of all the gas stations along the route, where di is the distance from Adama to the gas station. You can assume that the distance between neighboring gas stations is at most m kilometers. Your goal is to make as few gas stops as possible along the way. Give the most efficient algorithm you can find to determine at which gas stations you should stop and prove that your strategy yields an optimal solution. Be sure to give the time complexity of your algorithm as a function of n.

ANSWER:

Greedy Strategy:

A greedy approach can be used to solve this problem optimally. The basic idea is to always drive
as far as possible without running out of gas. At each gas station, you should stop only if you
cannot reach the next gas station without refueling.

Greedy Algorithm:

1. Start from Adama, which can be considered at distance 0, with a full tank.

2. At each gas station d_i, check whether you can reach the next gas station d_{i+1} on the fuel remaining since your last fill-up:
   - If you cannot reach d_{i+1} (i.e., d_{i+1} minus the position of the last fill-up exceeds m), stop and refuel at gas station d_i.
   - If you can reach d_{i+1}, continue driving without stopping.

3. Repeat this process until you reach Nekemte.

pseudocode:

def min_gas_stops(distances, m):
    n = len(distances)
    stops = []  # List of gas stations where we stop

    # Start from the beginning (Adama)
    current_position = 0
    i = 0
    while i < n:
        # Find the farthest gas station we can reach without stopping
        last_stop = current_position
        while i < n and distances[i] - current_position <= m:
            last_stop = distances[i]
            i += 1
        # If we cannot move further, we are stuck
        if last_stop == current_position:
            return None  # Impossible to reach Nekemte
        # Stop at the farthest reachable station (unless we are past the last one)
        if i < n:
            stops.append(last_stop)
        # Update the current position to the last stop
        current_position = last_stop
    return stops
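For instance, with hypothetical station locations [40, 80, 150, 200] and m = 100, the function skips the station at 40, refuels at 80 and at 150, and reaches the end of the route:

print(min_gas_stops([40, 80, 150, 200], 100))
# Output: [80, 150]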

Proof of Optimality:
The greedy algorithm is optimal by a "greedy stays ahead" argument:

1. At each step, the greedy strategy drives to the farthest station reachable on the current tank, making the farthest possible progress without refueling.

2. Let g1 < g2 < … < gk be the greedy stops and o1 < o2 < … < ol be the stops of any optimal solution. By induction, gj >= oj for all j: g1 is the farthest station reachable from Adama, so g1 >= o1; and if gj >= oj, then any station reachable from oj (in particular o_{j+1}) is also reachable from gj, so the farthest station reachable from gj satisfies g_{j+1} >= o_{j+1}.

3. Since after the same number of stops the greedy route is always at least as far along, it can never require more stops than the optimal solution; hence k <= l.

Thus, the algorithm guarantees that the number of stops is minimized, and no better solution exists.

Time Complexity:
- The while loop iterates over all gas stations once.

- For each gas station, we check if we can reach the next one, which is an O(1) operation.

Therefore, the overall time complexity is O(n), where n is the number of gas stations along the
route.

4. Consider the following puzzle: There is a row of n chairs and two types of people: M for
mathematicians and P for poets. You want to assign one person to each seat, but you can

never seat two mathematicians together, or they will start talking about mathematics and

everyone else in the room will get bored. For example, if n = 3, the following are some

valid seatings: PPP, MPM, and PPM. However, the following is an invalid seating: MMP.

In this problem, your goal is as follows: Let f(n) be the number of valid seatings when there

are n chairs in a row. Write and solve a recurrence relation for f(n). Please show your work.

ANSWER:
To solve the problem of seating mathematicians (M) and poets (P) in a row of n chairs under the
constraint that no two mathematicians can sit next to each other, we can define a recurrence
relation for f(n), which represents the number of valid seatings for n chairs.

Step 1: Establish the Base Cases


a. For n = 1:

- The valid seatings are: M, P.

- Thus, f(1) = 2.

b. For n = 2:

- The valid seatings are: PP, PM, MP.

- Thus, f(2) = 3.

Step 2: Formulate the Recurrence Relation


To derive the recurrence relation, we consider the last chair in the row and how it can be filled:

a. If the last chair is occupied by a poet (P):

- The preceding n-1 chairs can be filled in any valid configuration of n-1 chairs. Therefore, there
are f(n-1) ways to fill the first n-1 chairs.

b. If the last chair is occupied by a mathematician (M):


- The chair before the last one must be occupied by a poet (P) (to avoid seating two
mathematicians together). Thus, the last two chairs will be PM.

- The first n-2 chairs can then be filled in any valid configuration of n-2 chairs. Therefore, there
are f(n-2) ways to fill the first n-2 chairs.

Combining these two cases gives us the recurrence relation:


f(n) = f(n-1) + f(n-2)

Step 3: Solve the Recurrence Relation


We already established the base cases:

- f(1) = 2

- f(2) = 3

Now we can compute further values using our recurrence relation:

- For n = 3:

f(3) = f(2) + f(1) = 3 + 2 = 5

- For n = 4:

f(4) = f(3) + f(2) = 5 + 3 = 8

- For n = 5:

f(5) = f(4) + f(3) = 8 + 5 = 13

Continuing this pattern, we can compute more values if needed.

Step 4: Recognizing the Pattern

The recurrence relation we derived resembles the Fibonacci sequence but starts with different
initial conditions. Specifically, it can be noted that:

f(n) = f(n-1) + f(n-2)

with initial conditions f(1) = 2, f(2) = 3.

This means that the sequence follows a pattern similar to Fibonacci numbers but shifted. To
express this formally, we can relate it to Fibonacci numbers as follows:

Let F_k denote the k-th Fibonacci number, where F_1 = 1, F_2 = 1, F_3 = 2, F_4 = 3, F_5 = 5, F_6 = 8, ….

We can observe that:

f(n) = F_{n+2}

Conclusion
Thus, the number of valid seatings of mathematicians and poets in n chairs is given by:

f(n) = F_{n+2}

where F_k is the k-th Fibonacci number. This provides both a recurrence relation and a closed-
form solution for counting valid arrangements of M and P in a row of chairs.
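As a short sanity check of this identity (the helper names are ours, not part of the assignment), the recurrence for f and the Fibonacci recurrence can be compared directly:

def f(n):
    # f(1) = 2, f(2) = 3, f(n) = f(n-1) + f(n-2)
    if n == 1:
        return 2
    if n == 2:
        return 3
    prev2, prev1 = 2, 3
    for _ in range(3, n + 1):
        prev2, prev1 = prev1, prev1 + prev2
    return prev1

def fib(k):
    # F_1 = F_2 = 1
    a, b = 1, 1
    for _ in range(3, k + 1):
        a, b = b, a + b
    return b

assert all(f(n) == fib(n + 2) for n in range(1, 20))  # f(n) = F_{n+2}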

5. In the knapsack problem, we have a knapsack of volume V and a collection of n objects


whose volumes are v1, …, vn and whose costs are c1, …, cn. Your task is to select items to

pack in your knapsack so that the total cost of those items is maximized, subject to the

constraint that the total volume of the selected items does not exceed V.

a. It seems reasonable in selecting items to base the selection upon the ratio ci/vi of cost to

volume. Specify a greedy algorithm based on this principle.

b. Show by giving an example with 3 items that your greedy algorithm does not always

provide an optimal solution to the Knapsack problem.

c. Present the definition of an approximation ratio to measure the approximation quality of

the greedy algorithm.

d. Explain what the difference is between the approximation ratio rA of an approximation

algorithm A for P and the approximation threshold r of P.

e. What is a tight example of an approximation algorithm with a given approximation ratio,

rA?

f. Discuss the gap by introducing the reduction technique, which shows that a given problem

cannot be approximated within a certain ratio r (<10 lines).

ANSWER:
a. Greedy Algorithm Based on Cost-to-Volume Ratio

To design a greedy algorithm for the knapsack problem based on the cost-to-volume ratio, follow
these steps:

1. Sort all items in decreasing order of their cost-to-volume ratio c_i/v_i, where c_i is the cost and v_i is the volume of item i.

2. Initialize the knapsack with no items and a remaining volume V.

3. Iterate through the sorted items:
   - For each item i, if its volume v_i fits in the remaining volume of the knapsack, select the item (add it to the knapsack).
   - Subtract the volume v_i of the selected item from the remaining volume.

4. Terminate when no more items can be selected due to the volume constraint.

This greedy algorithm attempts to pack items with the highest cost-to-volume ratio first to
maximize the value within the knapsack's volume capacity.
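A minimal Python sketch of this greedy procedure (the function and variable names are ours):

def greedy_knapsack(items, V):
    # items: list of (cost, volume) pairs; V: knapsack capacity
    # Sort by cost-to-volume ratio, highest ratio first.
    items = sorted(items, key=lambda item: item[0] / item[1], reverse=True)
    selected, total_cost, remaining = [], 0, V
    for cost, volume in items:
        if volume <= remaining:  # The item fits: take it
            selected.append((cost, volume))
            total_cost += cost
            remaining -= volume
    return total_cost, selected

On the instance in part (b) below, greedy_knapsack([(60, 10), (100, 20), (120, 30)], 50) returns (160, [(60, 10), (100, 20)]).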

b. Example Where Greedy Algorithm Fails to Provide an Optimal Solution

Consider the following three items and a knapsack of volume V = 50:

| Item | Volume v_i | Cost c_i | Ratio c_i/v_i |
|------|------------|----------|---------------|
| 1    | 10         | 60       | 6.0           |
| 2    | 20         | 100      | 5.0           |
| 3    | 30         | 120      | 4.0           |

Using the greedy approach, the items are sorted by their cost-to-volume ratio:

- Select item 1: remaining volume = 50 − 10 = 40

- Select item 2: remaining volume = 40 − 20 = 20

At this point, the knapsack contains items 1 and 2 with a total cost of 60 + 100 = 160. There is no remaining volume for item 3.

Optimal solution: Choose items 2 and 3, which fit exactly in the knapsack (20 + 30 = 50) and give a total cost of 100 + 120 = 220. The greedy algorithm therefore produces a suboptimal solution on this instance.

c. Approximation Ratio Definition

The approximation ratio r_A of an algorithm A measures how close the solution produced by A is to the optimal solution. For a maximization problem it is defined as:

r_A = max over all instances I of OPT(I) / A(I)

where A(I) is the value of the solution found by algorithm A on instance I, and OPT(I) is the value of the optimal solution for the same instance. Thus r_A >= 1, and a ratio closer to 1 indicates a better approximation. For example, on the instance from part (b) the greedy algorithm achieves OPT(I)/A(I) = 220/160 = 1.375.

d. Approximation Ratio r_A vs. Approximation Threshold r

- The approximation ratio r_A of an algorithm A refers to the worst-case ratio of the optimal solution to the solution produced by the algorithm, taken over all problem instances.

- The approximation threshold r of a problem P refers to the best possible approximation ratio that can be achieved by any polynomial-time algorithm for P. It represents the theoretical limit on how well the problem can be approximated.

e. Tight Example of an Approximation Algorithm

A tight example for an approximation algorithm with approximation ratio r_A is an instance of the problem on which the algorithm's performance exactly matches the worst-case ratio. For example, in the greedy knapsack algorithm, an instance for which OPT(I)/A(I) = r_A is a tight example demonstrating the limits of the algorithm's performance.

f. Gap and Reduction Technique

A gap-preserving reduction is used to show that a problem cannot be approximated within a certain ratio r. The general approach is to reduce a known hard problem to the problem under consideration so that "yes" instances map to solutions with value above some threshold and "no" instances map to solutions with value below r times that threshold. If the reduction runs in polynomial time and preserves this gap, then any polynomial-time algorithm approximating the target problem within ratio r could decide the hard problem, which is impossible unless P = NP. Such techniques are widely used in proving hardness-of-approximation results in complexity theory.

6. Discuss in detail brute-force-based searching, string matching, and the closest pair problem.
ANSWER:
I. Brute Force-Based Searching
Definition: Brute force searching refers to a straightforward approach to solving a problem by
systematically checking all possible candidates. It guarantees finding the solution but can be
inefficient for large datasets.

Key Characteristics:

- Exhaustive Search: The algorithm checks every possible solution or candidate to find the
optimal one.

- Simplicity: The implementation is often straightforward and easy to understand.

- Inefficiency: The time complexity can be very high, especially for large input sizes, making it
impractical for real-world applications.

Example: Searching for an element in an unsorted array.

- Algorithm:

1. Iterate through each element in the array.


2. Check if the current element matches the target.

3. If found, return the index; if not, continue until the end of the array.

- Time Complexity: O(n), where n is the number of elements in the array.
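A minimal sketch of this linear search in Python:

def linear_search(arr, target):
    # Check every element in turn; return the index of the first match.
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1  # Target not found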

II. String Matching


Definition: String matching is the process of finding occurrences of a substring (or pattern) within
a larger string (or text). Brute force is one of the simplest methods for string matching.

Brute Force String Matching Algorithm:

a. For each position i in the text T:

- Compare the substring of T starting at i with the pattern P.

- If all characters match, record the starting index i.

Example:

- Text: ABABDABACDABABCABAB

- Pattern: ABABCABAB

- Algorithm Steps:

1. Start at index 0 and compare characters.

2. If a mismatch occurs, move to the next index and repeat.

- Time Complexity: In the worst case, O(n·m), where n is the length of the text and m is the length of the pattern.
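A minimal Python sketch of the brute-force matcher described above (the function name is ours):

def brute_force_match(text, pattern):
    # Slide the pattern over the text one position at a time.
    n, m = len(text), len(pattern)
    matches = []
    for i in range(n - m + 1):
        if text[i:i + m] == pattern:  # Compare the aligned characters
            matches.append(i)
    return matches

# On the example above:
# brute_force_match("ABABDABACDABABCABAB", "ABABCABAB") -> [10]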

More Efficient Algorithms:

- Knuth-Morris-Pratt (KMP): Achieves O(n + m) time complexity by preprocessing the pattern to avoid unnecessary comparisons.

- Boyer-Moore Algorithm: Also achieves efficient performance by skipping sections of text based
on mismatches.

III. Closest Pair Problem


Definition: The closest pair problem involves finding the two points in a set that are closest
together in Euclidean space. This problem has practical applications in fields such as computer
graphics, clustering, and geographical information systems.

Brute Force Approach:

1. For each pair of points, calculate the distance between them.

2. Keep track of the minimum distance found and the corresponding pair of points.
Algorithm Steps:

1. Initialize a variable to store the minimum distance (set it to infinity).

2. Loop through each point P_i:
   - Loop through each point P_j where j > i:
     - Calculate the distance d(P_i, P_j).
     - If d is less than the minimum distance found so far, update the minimum distance and store the pair.

Time Complexity: O(n^2), where n is the number of points.
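A minimal Python sketch of this brute-force approach (names are ours; math.dist requires Python 3.8+):

import math

def closest_pair_brute_force(points):
    # points: list of (x, y) tuples; examine every pair of points.
    best_distance = math.inf
    best_pair = None
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(points[i], points[j])  # Euclidean distance
            if d < best_distance:
                best_distance, best_pair = d, (points[i], points[j])
    return best_distance, best_pair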

More Efficient Algorithms:

- Divide and Conquer Approach:

- Sort points based on their x-coordinates.

- Recursively find the closest pairs in left and right halves.

- Check for pairs that cross the dividing line within a certain distance.

Time Complexity: This approach can achieve O(n log n).

Summary
- Brute Force Searching is simple but inefficient for large datasets, offering a guaranteed solution
by exploring all possibilities.

- String Matching can also be approached with brute force but benefits from more efficient
algorithms like KMP and Boyer-Moore for better performance.

- The Closest Pair Problem showcases how brute force can be improved upon with divide-and-
conquer techniques, significantly reducing time complexity for larger datasets.

7. Compare and contrast the divide-and-conquer, decrease-and-conquer, and transform-and-conquer approaches in algorithm analysis.
ANSWER:
The three approaches, Divide and Conquer, Decrease and Conquer, and Transform and Conquer, are fundamental strategies in algorithm design and analysis. While they share some similarities, they also have distinct characteristics and applications. Here is a detailed comparison:

1. Divide and Conquer


Definition: This approach involves breaking a problem into smaller subproblems of the same
type, solving each subproblem independently, and then combining their solutions to solve the
original problem.
Key Characteristics:

- Subproblem Division: The problem is divided into multiple smaller subproblems (usually two or
more).

- Independent Solutions: Each subproblem is solved independently, often recursively.

- Combining Solutions: The results of the subproblems are combined to form the final solution.

Examples:

- Merge Sort: Divides the array into halves, sorts each half, and merges them.

- Quick Sort: Divides the array based on a pivot, sorts the partitions recursively.

- Binary Search: Divides the search space in half to find an element.

Time Complexity: Often expressed using recurrence relations, e.g., T(n) = aT(n/b) + f(n), where a
is the number of subproblems, n/b is the size of each subproblem, and f(n) is the cost of
combining solutions.
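As an illustrative sketch, here is merge sort in Python; it instantiates the recurrence above with a = 2, b = 2, and f(n) = O(n) for the merge step, giving T(n) = O(n log n):

def merge_sort(arr):
    # Divide: split the array into two halves.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Combine: merge the two sorted halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged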

2. Decrease and Conquer


Definition: This approach involves reducing the problem size by a constant amount (usually one)
and solving the smaller problem to find the solution to the original problem.

Key Characteristics:

- Single Subproblem Reduction: The problem is reduced to a single smaller instance, typically by
decreasing its size by a constant factor.

- Direct Solution Building: The solution to the original problem can often be constructed directly
from the solution to the smaller problem.

Examples:

- Insertion Sort: Sorts an array by taking one element at a time and inserting it into the already
sorted part.

- Binary Search: Reduces the search space by half with each iteration.

- Finding the Maximum Element: Reduces the problem by comparing elements one at a time.

Time Complexity: Usually simpler than divide-and-conquer; often expressed as T(n) = T(n - 1) +
O(1) or similar forms.

3. Transform and Conquer


Definition: This approach involves transforming the problem into a different representation or
form that is easier to solve. The focus is on changing the problem structure rather than dividing
or reducing it.

Key Characteristics:
- Problem Transformation: The original problem is transformed into another problem that may
be easier to solve.

- Use of Data Structures: Often involves choosing appropriate data structures or representations
(e.g., converting an unsorted array to a sorted one).

- Algorithmic Techniques: Can include techniques such as dynamic programming, greedy


algorithms, or graph algorithms.

Examples:

- Sorting Algorithms: Transforming an unsorted list into a sorted list (e.g., using heaps or trees).

- Dynamic Programming: Breaking problems into overlapping subproblems and storing solutions
(e.g., Fibonacci sequence).

- Graph Algorithms: Transforming graph problems into matrix representations for easier
processing.

Time Complexity: Varies widely depending on the transformation; can lead to significant
efficiency improvements.

Comparison Summary
| Aspect | Divide and Conquer | Decrease and Conquer | Transform and Conquer |
|--------|--------------------|----------------------|-----------------------|
| Problem division | Multiple subproblems | Single smaller instance | Transformation of problem structure |
| Independence of subproblems | Yes | No (solves one at a time) | Not applicable |
| Combining solutions | Required | Directly builds from smaller solution | Not necessarily required |
| Examples | Merge Sort, Quick Sort, Binary Search | Insertion Sort, Binary Search | Dynamic Programming, Graph Algorithms |
| Complexity analysis | Often uses recurrence relations | Simpler forms | Varies widely based on transformation |

Conclusion
Each approach has its own strengths and weaknesses. Divide and Conquer is powerful for
problems that can be broken down into independent subproblems. Decrease and Conquer is
effective for problems that can be solved incrementally. Transform and Conquer emphasizes
changing the problem representation to facilitate easier solutions. Understanding these
strategies helps in selecting appropriate algorithms based on specific problem requirements.
