Week 1 - Sessions 1, 2 - Chapter 1 - Mathematical Review, Recursion Review

The document discusses methods for finding the k-th largest value among N values, presenting two solutions involving sorting and iterative processes. It emphasizes the importance of algorithm efficiency, especially for large datasets, and suggests that both proposed algorithms are not optimal. Additionally, it covers basic mathematical concepts such as exponents, logarithms, and geometric series, providing formulas and properties relevant to these topics.


Mathematics Review

Many Solutions for the Same Problem


Finding the k-th largest value (e.g., the 3rd largest, k = 3) among N values.

Solution 1
1. Read the N numbers (e.g. 10 values) into an array
2. Sort the array in decreasing order (meaning the largest values will
be positioned at the beginning of the array) by some simple
algorithm such as bubble sort (compares adjacent elements, and
swaps them if they are in the wrong order)
3. Return the element in position k.
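
A minimal C++ sketch of Solution 1 (the function name and the use of std::vector are illustrative choices, not from the slides); it assumes the N values have already been read into a vector and that positions are 1-based, as in the slides:

#include <algorithm>
#include <functional>
#include <vector>

// Solution 1 sketch: sort the whole array in decreasing order, then return the
// element in position k (1-based), which is index k - 1.
int kthLargestBySorting(std::vector<int> values, int k)
{
    std::sort(values.begin(), values.end(), std::greater<int>());
    return values[k - 1];
}

With the array {25, 12, 40, 15, 8, 30, 35, 10, 5, 20} and k = 3, this returns 30, matching the worked example on the next slide.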
Many Solutions for the Same Problem
Example for Solution 1
Suppose we have the following array of 10 numbers: {25, 12, 40, 15, 8, 30, 35, 10, 5, 20}

Read the N numbers into an array:
Array = {25, 12, 40, 15, 8, 30, 35, 10, 5, 20}

Sort the array in decreasing order using a simple algorithm (e.g., bubble sort):
Sorted Array = {40, 35, 30, 25, 20, 15, 12, 10, 8, 5}

Return the element in position k (in this case, k = 3):
The 3rd largest value is 30, at index 2, since it is in the third position of the sorted array.
Many Solutions for the Same Problem
Finding the k-th largest value among N values
Solution 2
1. Read the first k elements into an array
2. Sort them (in decreasing order)
3. Next: (iterative process)
For each remaining element after the initial k elements:
• Read one element at a time.
• If the new element is smaller than the kth element in the array, ignore it.
• If the new element is larger than the kth element in the array:
o Place the new element in its correct position in the array.
o This involves shifting elements to make room for the new
element while maintaining the sorted order.
4. When the algorithm ends, the element in the kth position is returned.
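
A minimal C++ sketch of Solution 2 (illustrative names; it assumes at least k input values). For simplicity it keeps only the k largest elements seen so far; the worked example on the next slide lets the array grow instead, but the value in position k comes out the same either way:

#include <algorithm>
#include <functional>
#include <vector>

int kthLargestByTopK(const std::vector<int>& values, int k)
{
    // Steps 1-2: read the first k elements and sort them in decreasing order.
    std::vector<int> top(values.begin(), values.begin() + k);
    std::sort(top.begin(), top.end(), std::greater<int>());

    // Step 3: process each remaining element, one at a time.
    for (int i = k; i < static_cast<int>(values.size()); ++i) {
        int x = values[i];
        if (x <= top[k - 1])                 // not larger than the kth element: ignore it
            continue;
        int pos = k - 1;                     // otherwise shift smaller elements to the right...
        while (pos > 0 && top[pos - 1] < x) {
            top[pos] = top[pos - 1];
            --pos;
        }
        top[pos] = x;                        // ...and place x in its correct position
    }
    return top[k - 1];                       // Step 4: the element in the kth position
}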
Many Solutions for the Same Problem
Example for Solution 2
Let's say our array is [14, 8, 21, 17, 25, 12, 19, 10, 16, 22], and we want to find the 3rd largest value (k = 3).
1. Read the first 3 elements into an array: [14, 8, 21]
2. Sort them in decreasing order: [21, 14, 8]
Now, start the iterative process with the remaining elements:
3. Read the next element (17). Since 17 is larger than the kth element in the array (8), place it in its correct position by shifting elements: [21, 17, 14, 8].
4. Read the next element (25). Again, it is larger than the kth element (14), so insert it in its correct position: [25, 21, 17, 14, 8].
5. Continue this process for the rest of the elements, maintaining the array in sorted order:
• Read 12: ignore it, because 12 < 17, the kth element.
• Read 19: insert it: [25, 21, 19, 17, 14, 8].
• Read 10: ignore it, because 10 < 19, the kth element.
• Read 16: ignore it.
• Read 22: insert it: [25, 22, 21, 19, 17, 14, 8].
6. When you reach the end of the elements, the array is [25, 22, 21, 19, 17, 14, 8], and the 3rd largest element is 21, which is in the kth position.
Best Solutions for the Same Problem
Finding the k-th largest value among N values
• Both algorithms are simple to code
• Two natural questions:
1. Which algorithm is better?
2. And, more important, is either algorithm good enough?

• A simulation using 30 million random elements and k = 15,000,000 shows that neither algorithm finishes in a reasonable amount of time: each would require several days of computer processing to terminate.
• An alternative method, discussed later, gives a solution in about a second.
• Thus, although both proposed algorithms work, they cannot be considered good algorithms.
Best Solutions for the Same Problem
• Writing a working program is not good enough.

• If the program is to be run on a large data set, then the running time becomes an issue.

• It is important to know how to estimate the running time of a program for large inputs.

• It is even more important to know how to compare the running times of two programs without actually coding them. This means assessing the theoretical efficiency of algorithms before implementing them in actual code. It involves a detailed analysis of algorithms based on their time and space complexities, and considering worst-case scenarios.
Best Solutions for the Same Problem
• Some techniques can significantly improve the speed of a program and determine program bottlenecks.

• These techniques help to find the section of the code on which to concentrate the optimization efforts.

• For instance, changing a quadratic-time sorting algorithm (e.g., bubble sort) to a more efficient linearithmic-time algorithm (e.g., merge sort or quicksort) can drastically improve performance, especially with large datasets.
Best Solutions for the Same Problem
• A quadratic-time sorting algorithm has a time complexity that grows quadratically with the size of the input.

• In Big-O notation, this is expressed as O(n^2), where n is the size of the input.

• The time complexity notation for a linearithmic-time algorithm is O(n log n). Merge sort divides the input into halves, recursively sorts each half, and then merges them back together.
Mathematics review
Lists some of the basic formulas you need to memorize,
or be able to derive, and reviews basic proof techniques:
• Exponents
• Logarithms
• Series
• Modular Arithmetic
• The P Word
Exponents
The product (commutative) property of exponents is a fundamental rule in algebra. It states that when you multiply two exponential expressions with the same base, the order in which you multiply them does not affect the result.

If the same base is raised to two different exponents and you multiply the results, the answer is the same as if you added the exponents and then raised the base to the sum of those exponents. It is expressed as follows:

X^A * X^B = X^(A+B)
Exponents
The quotient property of exponents deals with the division of expressions with the same base.

It states that when you divide two exponential expressions with the same base, you can simplify the expression by subtracting the exponent in the denominator from the exponent in the numerator. It is expressed as follows:

X^A / X^B = X^(A-B)
Exponents
This is the power-of-a-power property, which states that when you raise a power to another power, you multiply the exponents. It is expressed as follows:

(X^A)^B = X^(A*B)
Exponents
Adding two powers with the same base doubles the term rather than multiplying the exponents. This is expressed as follows:

X^N + X^N = 2X^N

Example: a^m + a^m = 2a^m, which is not equal to a^(2m). The expression a^(2m) has a different result: it represents raising a to the power of 2m, i.e., multiplying the base a by itself 2m times.
Exponents
The equation below holds true only for base 2, because adding 2^N to itself doubles the quantity 2^N, and doubling 2^N is mathematically the same as raising 2 to the combined exponent N + 1:

2^N + 2^N = 2^(N+1)

The two formulas X^N + X^N = 2X^N and 2^N + 2^N = 2^(N+1) share a similar structure. However, the key difference between the two lies in the specific base used in each expression (base 2 in the second): only for base 2 does doubling collapse into a single power with exponent N + 1.
Logarithms
When analyzing algorithms, especially in the context of time complexity, it's
common to use logarithms to express how the running time of an algorithm grows
as the input size increases.

The choice of the logarithm base is somewhat arbitrary, but in many cases, base-2
logarithms are used.

So, when no base is explicitly mentioned, it is often assumed to be a base-2 logarithm in the context of computer science.
Logarithms
For instance, binary search is a common algorithm that operates on a sorted array by repeatedly dividing the search interval in half.

If we have an array of size N and we perform binary search, the number of comparisons required to find an element can be expressed as the logarithm base 2 of N. The time complexity is therefore O(log2 N).

Here, the base-2 logarithm is used because binary search continuously divides the search space in half.
If we don't specify the base, it's generally assumed to be base 2 in the context of computer science.
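
A minimal C++ sketch of binary search on a sorted vector (illustrative, not taken from the slides); each iteration halves the interval that can still contain the target, which is why the number of comparisons is about log2 N:

#include <vector>

// Returns the index of target in the sorted vector a, or -1 if it is not present.
// Each iteration halves the interval [low, high], so the loop runs at most about
// log2(N) + 1 times for an array of size N.
int binarySearch(const std::vector<int>& a, int target)
{
    int low = 0;
    int high = static_cast<int>(a.size()) - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   // midpoint of the current interval
        if (a[mid] == target)
            return mid;
        else if (a[mid] < target)
            low = mid + 1;                   // discard the lower half
        else
            high = mid - 1;                  // discard the upper half
    }
    return -1;                               // not found
}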
Logarithms
Definition 1.1
• X^A = B if and only if log_X B = A (the logarithm, base X, of B equals A).
• "If and only if" means that the exponential form and the logarithmic form are equivalent statements:
X is the base.
A is the exponent.
B is the result of raising X to the power of A.
For example, 2^3 = 8 is an exponential expression where X = 2, A = 3, and B = 8; equivalently, log_2 8 = 3.
Logarithms - Theorem 1.1 (The Change-of-Base Formula)
log_A B = log_C B / log_C A: the logarithm of B to base A is equal to the logarithm of B to base C divided by the logarithm of A to base C.

Proof
• Let X = log_C B, Y = log_C A, and Z = log_A B.
• Then, by the definition of logarithms, C^X = B, C^Y = A, and A^Z = B. Now substitute C^Y for A (from C^Y = A) in A^Z = B, so B = (C^Y)^Z.
• Combining these three equalities yields B = C^X = (C^Y)^Z.
• Since the bases are the same (C) in C^X = (C^Y)^Z = C^(YZ), we can equate the exponents: X = YZ, which implies Z = X / Y, proving the theorem. See the next example.
Example for X = YZ, which implies Z = X/Y (proving the theorem):

As an analogy, think of distance = speed × time: say the speed (Y) is 60 miles per hour and you traveled for 2 hours (Z). Using the equation:
X = 60 × 2 = 120 miles
So, Z = 120 / 60 = 2 hours.
Logarithms - Theorem 1.2
log AB = log A + log B

Proof
• Let X = log A, Y = log B, and Z = log AB.
• Then, assuming the default base of 2, 2^X = A, 2^Y = B, and 2^Z = AB.
• Break AB down into 2^X * 2^Y.
• By the property of exponents, when you multiply two numbers with the same base you add the exponents. Therefore, 2^(X+Y) = 2^X * 2^Y.
• Combining the last equalities yields 2^(X+Y) = 2^X * 2^Y = AB = 2^Z.
• Since 2^(X+Y) = 2^Z and both sides have the same base, Z = X + Y, which proves the theorem.
Logarithms
Some other useful formulas, which can all be derived in a similar manner, follow:

log X < X for all positive X (in particular for any X greater than 1), because the logarithmic function grows more slowly than a linear function.

log 1 = 0: the logarithm of 1 to any base is always 0.
What is a geometric series?
The geometric series formulas are used to find the sum of a finite or infinite geometric series.
A geometric series is a series in which each term is equal to the previous term times (*) a constant ratio.

For instance, 1, 2, 4, 8 is geometric with constant ratio A = 2. If A = 2 and N = 3, then the sum of the series is 2^0 + 2^1 + 2^2 + 2^3 = 1 + 2 + 4 + 8 = 15.

Each term is the previous term times the ratio: 1, (1*2 = 2), (2*2 = 4), (4*2 = 8).

If A = 3 and N = 1, then the sum of the series is 3^0 + 3^1 = 1 + 3 = 4.
Geometric Series – Sum of a geometric series

The easiest formulas to remember are

Σ (i=0 to N) 2^i = 2^(N+1) - 1

and the companion,

Σ (i=0 to N) A^i = (A^(N+1) - 1) / (A - 1)

In the latter formula, if 0 < A < 1, then

Σ (i=0 to N) A^i ≤ 1 / (1 - A)

and as N tends to ∞, the sum approaches 1 / (1 - A).
Geometric Series – First formula: Sum of a geometric series
The easiest formula to remember is

Σ (i=0 to N) 2^i = 2^(N+1) - 1

The sigma symbol (Σ) means sum: the sum of 2 raised to the power of i, from the starting index i = 0 to the ending value N, is equal to 2 raised to the power of N + 1, minus 1. So, the right side of the equation is equal to the expression on the left side.

2^(N+1): this term represents the next power of 2 after the last term in the series (2^N). For example, if N = 3, then 2^(N+1) is 2^4, which is 16.

-1: the sum of all powers of 2 from 2^0 up to 2^N is exactly one less than the next power of 2, so subtracting 1 gives the value of the series. For example, 1 + 2 + 4 + 8 = 15 = 2^4 - 1.
Geometric Series – Second formula
And the companion, the sum of a geometric series with constant ratio A: the sum of A raised to all the powers from 0 to N. For example, if N = 2 and A = 3, then the expression inside the sigma symbol is 3^0 + 3^1 + 3^2 = 1 + 3 + 9 = 13.

Σ (i=0 to N) A^i = (A^(N+1) - 1) / (A - 1)

For example: Σ (i=0 to 4) 3^i = (3^(4+1) - 1) / (3 - 1) = 242 / 2 = 121.

Difference between the first and second formula
The first formula, Σ (i=0 to N) 2^i = 2^(N+1) - 1, is specific to geometric series with a common ratio of 2, while the second formula, Σ (i=0 to N) A^i = (A^(N+1) - 1) / (A - 1), is more general and can be used for any geometric series.

Both formulas give the same result if A = 2 for the sum of the series, but they represent different approaches to calculating it.
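
A short C++ check of both formulas (illustrative; not part of the slides), comparing a direct term-by-term sum against the closed forms for A = 2 and A = 3 with N = 4:

#include <cmath>
#include <iostream>

// Compares the direct sum A^0 + A^1 + ... + A^N with the closed form
// (A^(N+1) - 1) / (A - 1); for A = 2 this also equals 2^(N+1) - 1.
int main()
{
    const int N = 4;
    for (double A : {2.0, 3.0}) {
        double direct = 0.0;
        for (int i = 0; i <= N; ++i)
            direct += std::pow(A, i);
        double closedForm = (std::pow(A, N + 1) - 1.0) / (A - 1.0);
        std::cout << "A = " << A << ": direct sum = " << direct
                  << ", closed form = " << closedForm << '\n';
    }
    return 0;   // both columns agree: 31 for A = 2, 121 for A = 3
}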
Geometric Series – Third formula
If 0 < A < 1, then the sum of A raised to the power of i, from i = 0 to N, is bounded:

Σ (i=0 to N) A^i ≤ 1 / (1 - A)

and as N tends to ∞, the sum approaches 1 / (1 - A).

For example, with A = 1/2 and N = 3 the partial sum is 1 + 1/2 + 1/4 + 1/8 = 1.875, which is less than or equal to 1 / (1 - A) = 2.
Geometric Series
We can derive the last formula in the following manner (Step 1: start with the series S; Step 2: multiply S by A; Step 3: subtract AS from S).
Let S be the sum. Then

Step 1: S = 1 + A + A^2 + A^3 + A^4 + A^5 + ···

Step 2: take the product of A and the series S. Each term in AS is obtained by taking the corresponding term in S and multiplying it by A:

AS = A + A^2 + A^3 + A^4 + A^5 + ···
Geometric Series
S = 1 + A + A^2 + A^3 + A^4 + A^5 + ···
AS = A + A^2 + A^3 + A^4 + A^5 + ···

Step 3: if we subtract these two equations, virtually all the terms on the right side cancel:

S - AS = 1 + (A - A) + (A^2 - A^2) + (A^3 - A^3) + ···

So, after cancellation, you're left with: S - AS = 1.
Now, factor out S from both terms on the left side (S - AS = S*1 - S*A) to get S(1 - A) = 1.
Here, 1 is the first term of the series and A is the common ratio.
Then, divide both sides by (1 - A) to isolate S. So, the sum of the geometric series is given by the formula:

S = 1 / (1 - A)
Geometric Series
The steps (multiply the series by the common ratio A and then subtract the result from the original series) simplify the series and produce the formula S = 1 / (1 - A), a closed-form expression (meaning a simplified form).

This gives an expression (formula) that directly yields the sum of the series.

This makes the expression easier to calculate and work with.
Geometric Series
We can use the same technique (multiply, then subtract) to compute a sum that occurs frequently in algorithm analysis:

S = Σ (i=1 to ∞) i/2^i

Step 1: write out the sum.
S = 1/2 + 2/4 + 3/8 + 4/16 + 5/32 + ···

Step 2: multiply the series by 2.
2S = 1 + 2/2 + 3/4 + 4/8 + 5/16 + ···

Step 3: subtract the two equations (2S - S). Corresponding terms have the same denominator, so you can subtract the numerators directly to get

2S - S = S = 1 + 1/2 + 1/4 + 1/8 + 1/16 + ··· = 2

so S = 2.
Arithmetic series vs Geometric series
In arithmetic series, the terms have a constant difference, while in
geometric series, the terms have a constant ratio (e.g. A)

For instance:
Consider the arithmetic series 2+5+8+11+14. Here, the common
difference between consecutive terms is d=3.

Consider the geometric series 2 + 6 + 18 + 54 + 162. The common ratio between consecutive terms is r = 3 (first term a = 2, common ratio r = 3).
Arithmetic Series

Another type of series that is common in analysis is the arithmetic series. Any such series can be evaluated from the basic formula:

Σ (i=1 to N) i = N(N + 1)/2 ≈ N^2/2   (approximately equal)
Arithmetic Series
The numbers in the series below follow an arithmetic progression with a common difference of 3.
To find the sum 2 + 5 + 8 + … + (3k - 1), rewrite it as 3(1 + 2 + 3 + … + k) - (1 + 1 + 1 + … + 1), which is clearly 3k(k + 1)/2 - k.
The "- 1" subtracts 1 from 3k to form the actual term.
For example:
• When k = 1, the term is 3k - 1 = (3 * 1) - 1 = 2 (the first term).
• When k = 2, the term is 3k - 1 = (3 * 2) - 1 = 5 (the second term).
• When k = 3, the term is 3k - 1 = (3 * 3) - 1 = 8 (the third term).
So, the given series can be rewritten as (3*1 - 1) + (3*2 - 1) + (3*3 - 1) + … + (3*k - 1).
Notice that the expression 3*1 + 3*2 + 3*3 + … + 3*k is the sum of the first k natural numbers multiplied by 3, i.e., 3(1 + 2 + 3 + … + k); and notice the second part of the expression, -(1 + 1 + 1 + … + 1).
Combining both parts, you get 3(1 + 2 + 3 + … + k) - (1 + 1 + 1 + … + 1), which simplifies back to the given sum 2 + 5 + 8 + … + (3k - 1).
Arithmetic Series
Σ (i=1 to N) i = N(N + 1)/2 ≈ N^2/2   (approximately equal)

1. Recall that we rewrote the original series as 3(1 + 2 + 3 + … + k) - (1 + 1 + 1 + … + 1).

2. Applying the formula to the first series: the first series (1 + 2 + 3 + … + k) is an arithmetic series with k terms. Using the formula for the sum of an arithmetic series, N(N + 1)/2, its sum is k(k + 1)/2.

3. Simplifying the second series: the second series (1 + 1 + 1 + … + 1) consists of k terms, each being 1. Its sum is simply k.

4. Combining the simplified first and second series: substituting the sums of the two series back into the rewritten expression, we have 3(k(k + 1)/2) - k (3 is the common difference).

5. Distributing and rearranging: distribute the 3 to get 3k(k + 1)/2, and then combine the terms to get the final simplified expression: 3k(k + 1)/2 - k.
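
A quick C++ check of the closed form derived above (illustrative; not part of the slides):

#include <iostream>

// Compares the direct sum 2 + 5 + 8 + ... + (3k - 1) with the closed form 3k(k + 1)/2 - k.
int main()
{
    for (int k = 1; k <= 5; ++k) {
        int direct = 0;
        for (int i = 1; i <= k; ++i)
            direct += 3 * i - 1;             // the i-th term of the series
        int closedForm = 3 * k * (k + 1) / 2 - k;
        std::cout << "k = " << k << ": direct = " << direct
                  << ", closed form = " << closedForm << '\n';
    }
    return 0;   // e.g., k = 3 gives 2 + 5 + 8 = 15 = 3*3*4/2 - 3
}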
Harmonic numbers
Harmonic numbers are used in the analysis of algorithms,
particularly in the study of time complexity.

In certain algorithms, the sum of harmonic numbers may represent the time required for a certain operation, helping to analyze the efficiency of an algorithm.
Series
H_N denotes the N-th harmonic number, and the sum is the harmonic sum:

H_N = Σ (i=1 to N) 1/i ≈ ln N

Harmonic numbers are a sequence of numbers defined by the sum of the reciprocals of the positive integers.
Reciprocals are the multiplicative inverses of numbers (for example, the reciprocal of 2 is 1/2).

The error in the approximation tends to Euler's constant, γ ≈ 0.57721566 (denoted by γ, not to be confused with e).

Σ (i=1 to N) denotes the summation from i = 1 to N, and ln N is the natural logarithm (base e) of N.
Series
Σ (i=1 to N) denotes the summation from i = 1 to N, and log_e N is the natural logarithm of N.

Let's illustrate the relationship between harmonic numbers and the natural logarithm. Suppose we want to approximate H_10, the 10th harmonic number:

H_10 = 1 + 1/2 + 1/3 + … + 1/10 ≈ 2.928968

Now, let's compare this to the natural logarithm of 10, denoted as ln(10). Using a calculator: ln(10) ≈ 2.302585.

The difference, about 0.626, is already close to Euler's constant γ ≈ 0.577, and the gap shrinks toward γ as N grows. This illustrates the approximate relationship between harmonic numbers and the natural logarithm.
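
A short C++ sketch (illustrative) that computes H_N, ln N, and their difference for a few values of N; the gap shrinks toward Euler's constant γ ≈ 0.577:

#include <cmath>
#include <iostream>

// Computes the harmonic number H_N = 1 + 1/2 + ... + 1/N and compares it with ln N.
int main()
{
    for (int N : {10, 100, 1000, 1000000}) {
        double h = 0.0;
        for (int i = 1; i <= N; ++i)
            h += 1.0 / i;                        // sum of reciprocals
        double lnN = std::log(static_cast<double>(N));
        std::cout << "N = " << N << ": H_N = " << h
                  << ", ln N = " << lnN
                  << ", difference = " << h - lnN << '\n';
    }
    return 0;
}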
Modular Arithmetic
• Modular arithmetic deals with integers and their remainders when divided by a fixed positive integer, called the modulus.

• The basic idea is to consider only the remainder when dividing integers.

• The operation involved is usually denoted by the symbol "≡" and is read as "is congruent to."

• For instance, a ≡ b (mod m) is read "a is congruent to b mod m" and means that a and b have the same remainder when divided by m.

Mathematically, this is expressed as:

a ≡ b (mod m) ⟺ (a − b) mod m = 0
Modular Arithmetic
Consider the integers a = 17 and b = 8, and let the modulus be m = 3. We want to determine whether a is congruent to b modulo m, i.e., whether a ≡ b (mod 3).

We calculate the remainders when a and b are divided by m:

1. For a: a mod 3 = 17 mod 3 = 2 (remainder)
2. For b: b mod 3 = 8 mod 3 = 2 (remainder)

Since both a and b have the same remainder (2) when divided by 3, we can say that a is congruent to b modulo 3. Mathematically, this is expressed as: 17 ≡ 8 (mod 3).

The congruence holds true because the difference (a − b) = 17 − 8 = 9 is divisible by the modulus m = 3: 9 mod 3 = 0.
Modular Arithmetic
Properties of Congruence:
If A ≡ B (mod N), then adding or subtracting the same value from both A
and B doesn't change the congruence.
A+C ≡ B+C (mod N)
A−C ≡ B−C (mod N)
Similarly, if A ≡ B (mod N), then multiplying both A and B by the same
integer doesn't change the congruence.
AD ≡ BD (mod N) for any integer D.
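
A small C++ sketch (illustrative) that checks the congruence 17 ≡ 8 (mod 3) and the addition, subtraction, and multiplication properties listed above:

#include <iostream>

// Returns true if a is congruent to b modulo n, i.e., (a - b) is divisible by n.
bool congruent(int a, int b, int n)
{
    return (a - b) % n == 0;
}

int main()
{
    const int A = 17, B = 8, N = 3, C = 5, D = 4;
    std::cout << std::boolalpha;
    std::cout << "17 congruent to 8 (mod 3): " << congruent(A, B, N) << '\n';            // true
    std::cout << "A+C congruent to B+C (mod N): " << congruent(A + C, B + C, N) << '\n'; // true
    std::cout << "A-C congruent to B-C (mod N): " << congruent(A - C, B - C, N) << '\n'; // true
    std::cout << "AD congruent to BD (mod N): " << congruent(A * D, B * D, N) << '\n';   // true
    return 0;
}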
Modular Arithmetic in the context of prime numbers
If N is a prime number, then ab ≡ 0 (mod N) is true if and only if a ≡ 0 (mod N) or b ≡ 0 (mod N).

This theorem states that if N is a prime number (e.g., 5 or 3) and the product ab is congruent to 0 modulo N, then either a or b (or both) must be congruent to 0 modulo N.
In other words, if a prime number N divides the product of two numbers, it must also divide at least one of the two numbers.
Proof methods
What are Proof methods?
• In algorithm analysis, various proof methods are employed to
analyze the correctness and efficiency of algorithms.

• These methods help researchers and computer scientists establish the properties and performance characteristics of algorithms.
What are Proof methods?
• Proof methods are often used in combination, and the choice of
method depends on the specific properties or characteristics of the
algorithm being analyzed.

• The following are some common proof methods used in algorithm analysis:
1. Direct Proof:
• In algorithm analysis, a direct proof shows, step-by-step, why an
algorithm is guaranteed to achieve its desired outcome.

• Direct proofs are the opposite of indirect proofs (such as proof by contradiction), where the algorithm's correctness or efficiency is established by assuming the opposite and demonstrating that this assumption leads to a contradiction.
Direct Proof:
• Here are some examples of situations where direct proofs
might be used in algorithm analysis:
• Example 1: Showing that a sorting algorithm always orders its
input correctly: The proof would analyze each pass of the algorithm
and demonstrate how it progressively refines the order until the final
sorted state is achieved.

• Example 2: Proving that a search algorithm finds the desired element in a data structure: The proof would trace the execution of the algorithm based on the input and show how it narrows down the search space until the target element is identified.
Direct Proof:
• Suppose we have an algorithm that finds the maximum element in an array of integers.

• The algorithm works by iterating through the array and keeping track of the current maximum element.

• Now, let's provide a direct proof of the correctness of this algorithm.
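
A minimal C++ sketch of such an algorithm (illustrative; the variable name max_element is chosen to match the proof on the next slide):

#include <vector>

// Finds the maximum element of a non-empty array of integers.
// Initialization: max_element starts as the first element.
// Maintenance: each iteration keeps max_element equal to the largest value seen so far.
// Termination: after the loop, max_element holds the maximum of the whole array.
int findMax(const std::vector<int>& a)
{
    int max_element = a[0];                      // assumes a is non-empty
    for (int i = 1; i < static_cast<int>(a.size()); ++i)
        if (a[i] > max_element)
            max_element = a[i];
    return max_element;
}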
Direct Proof:
• Proof:
• Initialization: The Max algorithm initializes max_element to the first element of
the array. This means that it correctly handles the case when the array has only one
element.
• Maintenance: During each iteration of the loop, the algorithm compares the current
element with max_element and updates max_element if the current element is greater.
This ensures that max_element always holds the maximum element seen so far.
• Termination: After the loop completes, max_element contains the maximum element
in the array. Therefore, the algorithm correctly finds the maximum element.

• This is a direct proof because we've directly shown that the algorithm satisfies the
three conditions (initialization, maintenance, termination) required for correctness. No
assumptions were made, and no contradictions were derived.
Direct Proof:
• Direct proofs in algorithm analysis can be achieved through both manual
analysis and automation, though the approach depends on the complexity of
the algorithm.
• Manual analysis:
• Pros:
 Provides deeper understanding of the algorithm's logic and behavior.
 Easier to explain and communicate the proof reasoning.
• Cons:
 Can be time-consuming and error-prone for complex algorithms.
 Difficult to scale for large or repetitive computations.
 May not be able to exhaustively explore all possible input cases.
• Automation:
• Pros:
 Faster and more efficient for repetitive and well-defined calculations.
 Can deeply explore all possible input cases for algorithms.
 Reduces the risk of human error in calculations.
• Cons:
 Requires significant development effort to create and maintain automated proof tools.
2. Proof by Contrapositive:
• Proof by contrapositive is used in algorithm analysis to prove the correctness of algorithms.
• For instance, if "if P then Q" is true, then "if not Q then not P" is also true.

• It proves the contrapositive of the statement instead of the original statement.

• Once you successfully prove the contrapositive, you can conclude that the original statement "if P, then Q" is also true. This is because the two statements are logically equivalent.
Proof by Contrapositive:
• You might choose to prove a statement by contrapositive instead of
directly, because, sometimes, the contrapositive might be easier to
prove than the original statement because it might involve simpler
concepts or lead to a more straightforward reasoning.
Proof by Contrapositive:
• Example of Proof by Contrapositive in Algorithm Analysis

• Original statement: "If an algorithm has a worst-case time complexity of n^2, then it doesn't use a divide-and-conquer strategy."
• Proof by contrapositive: instead of proving this directly, we prove the contrapositive, obtained by negating and swapping the two parts: "If an algorithm uses a divide-and-conquer strategy, then it doesn't have a worst-case time complexity of n^2."
Proof by Contrapositive:
• Reasoning: Divide-and-conquer algorithms typically follow a three-step
pattern:
1. Divide the problem into smaller subproblems.
2. Conquer (solve) the subproblems recursively.
3. Combine the solutions to the subproblems to solve the original problem.
• In the worst case, each subproblem will be of size roughly n/2.
• Repeatedly dividing a problem of size n into halves reaches subproblems of constant size after about log n levels of division.
• Combining these observations, it's impossible for a divide-and-conquer
algorithm to have a worst-case time complexity of n^2.
• Conclusion: Since we successfully proved the contrapositive, we can
deduce that the original statement is true. Therefore, if a sorting algorithm has a
worst-case time complexity of n^2, it cannot use a divide-and-conquer strategy.
3. Proof by Mathematical Induction
The proof by induction technique is used in the analysis of algorithms and data structures, where mathematical properties need to be established for infinite sets of input values.

Mathematical properties refer to characteristics or behaviors of an algorithm that can be described using mathematical concepts. This could be done to show that the algorithm's behavior remains consistent as the input size increases.

Mathematical properties might include time complexity, space complexity, correctness, optimality, etc.
Proof by Mathematical Induction
The proof by induction involves two main steps:
1. Base case step:
The first step is to prove that the statement holds for the smallest or simplest value in the set. The base case step aims to show that the proposition is true when the parameter (usually a natural number) takes its smallest value.

2. Inductive step:
The statement is assumed to be true for all cases up to some limit k (the inductive hypothesis). The inductive step aims to show that it must also be true for the next value (k + 1).

Once both the base case and the inductive step are proven, the conclusion is that the proposition is true for all values in the set (i = 1, 2, …).
Proof by Mathematical Induction
Example 1: Proving that the Fibonacci numbers satisfy F_i < (5/3)^i, for i ≥ 1.

Let's prove the above theorem by induction:

1. Base case: We first verify that the theorem is true for the base case: F_1 = 1 < 5/3; this proves the basis.

2. Inductive step: The Fibonacci numbers F_0 = 1, F_1 = 1, F_2 = 2, F_3 = 3, F_4 = 5, …, F_i = F_{i-1} + F_{i-2} are claimed to satisfy F_i < (5/3)^i for i ≥ 1. We assume that the theorem is true for i = 1, 2, …, k; this is the inductive hypothesis. To prove the theorem, we need to show that F_{k+1} < (5/3)^{k+1}.
Proof by Mathematical Induction
Recall that the Fibonacci recurrence relation defines how each term in the Fibonacci sequence is obtained by adding the two preceding terms. The relation is given by:

F_i = F_{i-1} + F_{i-2}

Apply the Fibonacci recurrence relation for i = k + 1:

F_{k+1} = F_k + F_{k-1}

We are using the fact that each Fibonacci number is the sum of the two preceding Fibonacci numbers.
Proof by Mathematical Induction
We have, by the definition,
F_{k+1} = F_k + F_{k-1}
Using the inductive hypothesis on the right-hand side, we obtain
F_{k+1} < (5/3)^k + (5/3)^{k-1}
Now rewrite each term as a multiple of (5/3)^{k+1}: since (5/3)^k = (3/5)(5/3)^{k+1} and (5/3)^{k-1} = (3/5)^2 (5/3)^{k+1}, we get
F_{k+1} < (3/5)(5/3)^{k+1} + (3/5)^2 (5/3)^{k+1}
        < (3/5)(5/3)^{k+1} + (9/25)(5/3)^{k+1}
Factor out the common term (5/3)^{k+1}:
F_{k+1} < (3/5 + 9/25)(5/3)^{k+1}
To add 3/5 + 9/25, first find a common denominator. The LCM of 5 and 25 is 25, so express both fractions with denominator 25: 3/5 = 15/25, and 15/25 + 9/25 = 24/25. Therefore
F_{k+1} < (24/25)(5/3)^{k+1}
        < (5/3)^{k+1}
This proves the theorem.
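
A quick C++ check of the bound for the first several Fibonacci numbers (illustrative; it uses the slide's indexing F_0 = 1, F_1 = 1):

#include <cmath>
#include <iostream>

// Verifies F_i < (5/3)^i for small i, using the indexing F_0 = 1, F_1 = 1.
int main()
{
    long long prev = 1, curr = 1;               // F_0 and F_1
    for (int i = 1; i <= 15; ++i) {
        double bound = std::pow(5.0 / 3.0, i);
        std::cout << "F_" << i << " = " << curr
                  << ", (5/3)^" << i << " = " << bound
                  << (curr < bound ? "  (bound holds)" : "  (bound fails)") << '\n';
        long long next = prev + curr;           // Fibonacci recurrence
        prev = curr;
        curr = next;
    }
    return 0;
}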
4. Proof by Counterexample
Proof by counterexample is a method used in mathematics to
demonstrate that a statement is false.

Instead of proving that a statement is true, proof by counterexample aims to show that there exists at least one instance where the statement is false.

Example:
Consider the statement F_k ≤ k^2. Prove that this statement is false.

The easiest way to prove this is to compute the counterexample F_11 = 144 > 11^2 = 121.


5. Proof by Contradiction
Proof by contradiction is a method of mathematical or logical proof in which one assumes the negation of what one seeks to prove and then shows that this assumption leads to a contradiction.

Suppose you want to prove statement P.

So, start by assuming the opposite (not P), and see what consequences follow from this assumption.

If the assumption of not P leads to a contradiction, you can then conclude that the original statement P must be true.
Proof by Contradiction
• Theorem: there is an infinite number of primes. (This is true.)
• Let P_1, P_2, …, P_k be all the prime numbers arranged in ascending order, where k is the total number of primes.
• Construct a new number N: N = P_1 * P_2 * P_3 ··· P_k + 1.
• It is clear that N is larger than P_k (the last prime number). This is evident because N is formed by multiplying the primes together and then adding 1; therefore, N is at least 1 greater than the product of all the primes.

• Assumption: assume that the above theorem is false. So suppose that P_k is the largest prime and there are only finitely many prime numbers. Then, by assumption, the original list contains all the primes up to P_k, and N is not in the list; so if N were a prime number, it would contradict the assumption.
Proof by Contradiction
• So, by assumption, N is not prime.
• However, none of P_1, P_2, …, P_k divides N exactly, because dividing N by any of them always leaves a remainder of 1. Therefore, N is prime. This is a contradiction, because every number is either prime or a product of primes.

• Hence, the original assumption, that P_k is the largest prime, is false, which implies that the theorem is true. In other words, the assumption that there are only finitely many prime numbers must be false.
Contrapositive vs Contradiction
1. Contrapositive proves the contrapositive of the statement instead
of the original statement.

if "if P then Q" is true, then "if not Q then not P" is also true.

2. Contradiction is a proof method where you assume the opposite of what you want to prove and then derive a logical contradiction, thereby establishing the truth of the original statement. For example, to prove that the original statement is true, you temporarily assume that it is false (the opposite) and show that this assumption cannot hold.
Review of Recursion
• Most mathematical functions that we are familiar with are
described by a simple formula.
Example: we can convert temperatures from Fahrenheit to Celsius
by applying the formula
C = 5(F − 32)/9
• Mathematical functions are sometimes defined in a less standard form.
Example: we can define a function f, valid on non-negative integers, that satisfies the base case f(0) = 0 and the recursive rule f(x) = 2f(x - 1) + x^2.
From this definition we see that f(1) = 1, f(2) = 6, f(3) = 21, and f(4) = 58 (see the calculation on the next slide).
• A function that is defined in terms of itself is called
recursive.
Review of Recursion
• How the function is evaluated for some specific values:
f(0) = 0
f(x) = 2f(x - 1) + x^2
f(1) = 2f(0) + 1^2 = 0 + 1 = 1
f(2) = 2f(1) + 2^2 = 2 + 4 = 6
f(3) = 2f(2) + 3^2 = 12 + 9 = 21
f(4) = 2f(3) + 4^2 = 42 + 16 = 58
Review of Recursion
• Not all mathematically recursive functions are efficiently (or correctly) implemented by C++'s simulation of recursion.

• The idea is that the recursive function f should be expressible in only a few lines, just like a non-recursive function.

int f( int x )
{
    if( x == 0 )                        // base case: solved without recursion
        return 0;
    else                                // the recursive case makes no sense without a base case
        return 2 * f( x - 1 ) + x * x;  // recursive call makes progress toward the base case
}
Review of Recursion
1. Trace the recursive function step by step with x = 4: f(4) calls f(3), which calls f(2), then f(1), then f(0), which hits the base case and returns 0.
2. After reaching the base case, stop making further recursive calls and start returning values; this is the backtracking step: f(1) = 1, f(2) = 6, f(3) = 21, and finally f(4) = 58.

If the input values don't progress toward the base case, the recursion becomes infinite.
Review of Recursion
• Identify the error in the following code:

int strangeSum(int n) {
if (n == 0)
return 0;
else
return n + strangeSum(n * 2);
}
Review of Recursion
This recursive function is intended to calculate the sum of a series of numbers, but it contains a subtle error (difficult to identify) that leads to unexpected behavior.

In the else branch, the function makes a recursive call with the argument n * 2, and the result of this recursive call is added to n. The recursive call strangeSum(n * 2) does not necessarily bring n closer to the base case (n == 0).

Depending on the initial value of n, the expression n * 2 leads to an increase in n in each recursive call.

This can result in infinite recursion, leading to a stack overflow (the call stack becomes full and cannot accommodate additional function calls).
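
A corrected sketch, assuming the intent was to sum n + (n - 1) + ... + 1 (this fixed version is illustrative, not from the slides); the recursive argument now makes progress toward the base case:

// Corrected sketch (illustrative): each call uses n - 1, so every positive n
// moves toward the base case. The check n <= 0 also guards against negative input.
int sumDown(int n) {
    if (n <= 0)
        return 0;
    else
        return n + sumDown(n - 1);
}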
Review of Recursion
• Two fundamental rules of recursion:

1. Base cases: You must always have some base


cases, which can be solved without recursion.

2. Making progress: For the cases that are to be solved


recursively, the recursive call must always be to a case
that makes progress toward a base case.
Review of Recursion
• It is essential for developers to design recursive functions carefully to avoid accidental infinite recursion and to ensure that the base case is reachable.

• So, a well-designed recursive function is not circular (it is not an infinite loop).

• The term "recursion" refers to the process in which a function calls itself, but it is designed to converge toward a base case.

• A well-designed recursive function ensures that the sequence of calls eventually reaches a termination point, preventing an infinite loop.
Non-Mathematical example: Dictionary
Illustrate the concept of circular definitions
• Imagine you have a large dictionary where words are
defined in terms of other words.

• When you encounter a word you don't understand, you look


it up in the dictionary.

• However, the definitions themselves might contain words


you don't know, leading you to look up those words as well.

• This process can continue, forming a chain of definitions


and the cycle repeats.
Non-Mathematical example: Dictionary
This describes a recursive strategy for understanding words using a dictionary. The recursive strategy to understand words is as follows:

1.Initial Condition:
If we already know the meaning of a word, then we're done.

2. First Iteration:
If we don't know the meaning of a word, the next step is to consult the dictionary to
find its definition. The dictionary serves as a source of definitions for words.

3. Recursive Exploration:
Upon looking up a word's definition, if we understand all the words within that
definition, then our understanding is complete, and we are done. However, if there
are words in the definition that we don't understand, the process becomes
recursive.
Non-Mathematical example: Dictionary
This describes a recursive strategy for understanding words using a dictionary. The recursive strategy continues as follows:

4. Recursive Step:
For any unfamiliar words within a definition, we recursively apply the same strategy: we
look up those words in the dictionary. This recursive exploration continues until we either
understand all the words in a particular definition or encounter a word we don't know.

5. Termination Conditions:
The overall procedure either terminates or fails in one of two ways:
1. If we successfully understand the meanings of all the words in a definition, we can retrace our steps and comprehend the original word.

2. If we encounter circular definitions, or reach a point where a word is not defined in the dictionary, the process may loop indefinitely and never resolve the original word.
Recursion and Induction: Printing Out Numbers
• Suppose we wish to print out a positive integer, n. Our routine will have the heading printOut(n).

// Recursive function to print the digits of a number in order (most significant first)
void printOut(int n) {
    // Recursive case: if n has more than one digit, first print everything but the last digit
    if (n >= 10) {
        printOut(n / 10);    // dividing an integer by 10 removes the rightmost digit
    }
    // After the recursive call returns, print the last digit of the current number
    printDigit(n % 10);
}

If we run the code with n = 12345, the recursive calls descend all the way to 1 before anything is printed, and the digits are then printed on the way back: the output is 1 2 3 4 5, i.e., the digits of 12345 in their original order.
Recursion and Induction: Printing Out Numbers
• Assume that the only I/O routine available takes a single-digit number and outputs it: for example, printDigit(4) will output 4.
• For n = 4, the test (4 >= 10) is false, so no recursive call is made.
• Since 4 is less than 10 (a single-digit number, the base case), the function directly calls printDigit(4) and simply prints the last digit of n, which is 4.

// Recursive function to print the digits of a number in order
void printOut(int n) {
    // Recursive case: if n has more than one digit
    if (n >= 10) {
        printOut(n / 10);    // dividing an integer by 10 removes the rightmost digit
    }
    // At each step, the last digit of the current number is printed after the recursive call
    printDigit(n % 10);
}
Recursion and Induction
The recursive number-printing algorithm is correct for n ≥ 0.
Proof (by induction on the number of digits in n):
1. Base case: If n has one digit (e.g., 4), then the program is trivially correct: it simply makes a call to printDigit.

2. Inductive step: Assume that printOut works correctly for all numbers with k or fewer digits (k is a positive integer for which we assume the function printOut works correctly).

A number with k + 1 digits is handled by the recursive call printOut(n / 10), which removes the rightmost digit and, by the inductive hypothesis, prints the remaining k digits correctly; then printDigit(n % 10) prints the last digit. Therefore, if the function works for k-digit numbers, it also works for (k + 1)-digit numbers.

Thus, by induction, all numbers are correctly printed.

If n = 0, the printOut function will directly call printDigit(n % 10), where n % 10 is 0, so the digit 0 will be printed.
The four basic rules of recursion
• When writing recursive routines, it is crucial to keep in
mind the four basic rules of recursion:

1. Base cases: You must always have some base cases, which can be solved without
recursion. These are the termination conditions for the recursive algorithm.

2. Making progress: For the cases that are to be solved recursively, the recursive call
must always be to a case that makes progress toward a base case.
The four basic rules of recursion
When writing recursive routines, it is crucial to keep in
mind the four basic rules of recursion:

3. Design rule: The "Design Rule" suggests that when designing recursive algorithms, you don't
necessarily need to understand the complex details or trace through the unknown number of
times the function will be called.

This rule implies that when you make a recursive call, you should assume that the call will
correctly solve the sub problem. This means you trust that the recursive call will correctly
solve the sub problem it addresses.
The four basic rules of recursion
• When writing recursive routines, it is cru cial to keep in
mind the four basic rules of recursion:

4. Compound interest rule: Never duplicate work by solving the same instance of a problem in separate recursive calls. If a subproblem has already been solved, its result should be stored and reused instead of recalculated. Here you can memoize the result using the memoization technique.

So, before making a recursive call, check whether the result for the current input is already stored in a data structure (e.g., an array).
If the result is present in the array, return it immediately without recomputing. If not, proceed with the recursive call, calculate the result, and store it for future use.

Recursion without optimization techniques like memoization can lead to inefficiency and performance issues. The sketch below illustrates the idea.
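
A minimal C++ sketch of memoization (illustrative; the slides do not prescribe this particular example), applied to the Fibonacci numbers with the indexing F_0 = 1, F_1 = 1 used earlier in the chapter:

#include <vector>

// Memoized Fibonacci: each value is computed once and stored in memo, so later
// calls with the same argument reuse the stored result instead of recomputing it,
// following the compound interest rule.
long long fib(int i, std::vector<long long>& memo)
{
    if (i <= 1)
        return 1;                                    // base cases: F_0 = F_1 = 1
    if (memo[i] != 0)
        return memo[i];                              // already solved: reuse the result
    memo[i] = fib(i - 1, memo) + fib(i - 2, memo);   // solve once, then store
    return memo[i];
}

long long fibonacci(int i)
{
    std::vector<long long> memo(i + 1, 0);           // 0 marks "not computed yet"
    return fib(i, memo);
}

For example, fibonacci(11) returns 144, the value used in the proof-by-counterexample slide; without memoization, the plain recursive version recomputes the same subproblems an exponential number of times.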
