Algorithms: A Brief Introduction
Slides by Christopher M. Bourke
Instructor: Berthe Y. Choueiry
Spring 2006
Computer Science & Engineering 235
Introduction to Discrete Mathematics
Section 2.1 of Rosen
[email protected]
Algorithms
Brief Introduction
Real World     Computing World
Objects        Data Structures, ADTs, Classes
Relations      Relations and functions
Actions        Operations
Problems are instances of objects and relations between them.
Algorithms¹ are methods or procedures that solve instances of
problems.
¹ "Algorithm" is a distortion of al-Khwarizmi, a Persian mathematician.
Algorithms
Formal Definition
Definition
An algorithm is a sequence of unambiguous instructions for
solving a problem. Algorithms must be:
- Finite – must eventually terminate.
- Complete – always gives a solution when there is one.
- Correct (sound) – always gives a “correct” solution.
For an algorithm to be a feasible solution to a problem, it must
also be effective. That is, it must give a solution in a “reasonable”
amount of time.
There can be many algorithms for the same problem.
Algorithms
General Techniques
There are many broad categories of algorithms: randomized
algorithms, Monte Carlo algorithms, approximation algorithms,
parallel algorithms, etc.
Usually, algorithms are studied alongside the relevant data
structures. Some general styles of algorithms include:
1. Brute Force (enumerative techniques, exhaustive search)
2. Divide & Conquer
3. Transform & Conquer (reformulation)
4. Greedy Techniques
Pseudo-code
Algorithms are usually presented using some form of pseudo-code.
Good pseudo-code is a balance between clarity and detail.
Bad pseudo-code gives too many details or is too
implementation-specific (e.g. giving actual C++ or Java code, or
spelling out every step of a sub-process).
Good pseudo-code abstracts the algorithm, makes good use of
mathematical notation, and is easy to read.
Good Pseudo-code
Example
Intersection
Input:  Two sets of integers, A and B
Output: A set of integers C such that C = A ∩ B
C ← ∅
if |A| > |B| then
    swap(A, B)
end
for every x ∈ A do
    if x ∈ B then
        C ← C ∪ {x}
    end
end
output C
LaTeX notation for ←: \leftarrow.
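As a sanity check, here is a minimal Python sketch of the same algorithm; the function name intersection and the use of Python's built-in set type are my own choices, not part of the pseudo-code.

def intersection(A, B):
    C = set()                # C <- empty set
    # Iterate over the smaller set so membership tests hit the larger one.
    if len(A) > len(B):
        A, B = B, A          # swap(A, B)
    for x in A:              # for every x in A
        if x in B:           # if x is also in B
            C = C | {x}      # C <- C union {x}
    return C

# Example: intersection({1, 2, 3, 4}, {3, 4, 5}) returns {3, 4}.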
Designing An Algorithm
A general approach to designing algorithms is as follows.
1. Understand the problem, assess its difficulty
2. Choose an approach (e.g., exact/approximate,
deterministic/probabilistic)
3. (Choose appropriate data structures)
4. Choose a strategy
5. Prove termination
6. Prove correctness
7. Prove completeness
8. Evaluate complexity
9. Implement and test it.
10. Compare to other known approaches and algorithms.
MAX
When designing an algorithm, we usually give a formal statement
about the problem we wish to solve.
Problem
Given a set A = {a1, a2, . . . , an} of integers.
Output the index i of the maximum integer ai.
A straightforward idea is to simply store an initial maximum, say
a1, then compare it to every other integer, and update the stored
maximum if a new maximum is ever found.
MAX
Pseudo-code
Max
Input:  A set A = {a1, a2, . . . , an} of integers.
Output: An index i such that ai = max{a1, a2, . . . , an}
index ← 1
for i = 2, . . . , n do
    if ai > aindex then
        index ← i
    end
end
output index
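A corresponding Python sketch (0-indexed, unlike the 1-indexed pseudo-code; the name max_index is my own):

def max_index(a):
    index = 0                     # index <- 1 (0 here: Python lists are 0-indexed)
    for i in range(1, len(a)):    # for i = 2, ..., n
        if a[i] > a[index]:
            index = i             # index <- i
    return index

# Example: max_index([7, 3, 9, 2]) returns 2, the position of the maximum 9.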
MAX
Analysis
This is a simple enough algorithm that you should be able to:
- Prove it correct.
- Verify that it has the properties of an algorithm.
- Have some intuition as to its cost.
That is, how many “steps” would it take for this algorithm to
complete its run? What constitutes a step? How do we measure
the complexity of the step?
These questions will be answered in the next few lectures; for now,
let us just take a look at a couple more examples.
Other examples
Check Bubble Sort and Insertion Sort in your textbooks, which you
have seen ad nauseam in CSE155 and CSE156, and will see again in
CSE310.
I will be glad to discuss them with any of you if you have not seen
them yet.
Greedy algorithm
Optimization
In many problems, we wish to not only find a solution, but to find
the best or optimal solution.
A simple technique that works for some optimization problems is
called the greedy technique.
As the name suggests, we solve a problem by being greedy—that is,
choosing the best, most immediate solution (i.e. a local solution).
However, for some problems, this technique is not guaranteed to
produce a globally optimal solution.
Example
Change-Making Problem
For anyone who’s had to work a service job, this is a familiar
problem: we want to give change to a customer, but we want to
minimize the total number of coins we give them.
Problem
Given: An integer n and a set of coin denominations (c1, c2, . . . , cr)
with c1 > c2 > · · · > cr.
Output: A set of coins d1, d2, . . . , dk such that
d1 + d2 + · · · + dk = n and k is minimized.
Example
Change-Making Algorithm
Change
Input:  An integer n and a set of coin denominations (c1, c2, . . . , cr)
        with c1 > c2 > · · · > cr.
Output: A set of coins d1, d2, . . . , dk such that
        d1 + d2 + · · · + dk = n and k is minimized.
C ← ∅
for i = 1, . . . , r do
    while n ≥ ci do
        C ← C ∪ {ci}
        n ← n − ci
    end
end
output C
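A Python sketch of the same greedy procedure; note that I use a list rather than a set for C, since the change may contain several coins of the same denomination (the name greedy_change is my own).

def greedy_change(n, denominations):
    # denominations must be given in decreasing order: c1 > c2 > ... > cr
    C = []                       # C <- empty (a list, since coins may repeat)
    for c in denominations:      # for i = 1, ..., r
        while n >= c:            # while n >= ci
            C.append(c)          # C <- C union {ci}
            n -= c               # n <- n - ci
    return C

# Example: greedy_change(67, [25, 10, 5, 1]) returns [25, 25, 10, 5, 1, 1].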
Change-Making Algorithm
Analysis
Will this algorithm always produce an optimal answer?
Consider a coinage system:
- where c1 = 20, c2 = 15, c3 = 7, c4 = 1
- and we want to give 22 “cents” in change.
What will this algorithm produce?
Is it optimal?
It is not optimal since it would give us one c1 and two c4, for three
coins, while the optimal is one c2 and one c3, for two coins.
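Using the greedy_change sketch from above, the counterexample plays out as follows.

# Greedy with the contrived system: three coins instead of the optimal two.
print(greedy_change(22, [20, 15, 7, 1]))   # [20, 1, 1]
# The optimal answer, 15 + 7, uses only two coins.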
Change-Making Algorithm
Optimal?
What about the US currency system—is the algorithm optimal in
this case?
Yes, in fact, we can prove it by contradiction.
For simplicity, let c1 = 25, c2 = 10, c3 = 5, c4 = 1.
Change-Making Algorithm
Proving optimality
Proof.
- Let C = {d1, d2, . . . , dk} be the solution given by the greedy
algorithm for some integer n. By way of contradiction, assume
there is another solution C′ = {d′1, d′2, . . . , d′l} with l < k.
- Consider the case of quarters. Say C contains q quarters and
C′ contains q′. If q′ > q we are done: the greedy algorithm takes
as many quarters as possible, so no solution can contain more.
- Since the greedy algorithm uses as many quarters as possible,
n = q(25) + r, where r < 25. Thus if q′ < q, then in
n = q′(25) + r′ we have r′ ≥ 25, and so C′ does not provide an
optimal solution.
- Finally, if q = q′, then we continue this argument on dimes
and nickels. Eventually we reach a contradiction.
- Thus, C = C′ is our optimal solution.
Change-Making Algorithm
Proving optimality
Why (and where) does this proof fail for our previous
counterexample in the general case?
We need the following lemma:
If n is a positive integer, then n cents in change using quarters,
dimes, nickels, and pennies, using the fewest coins possible,
1. has at most two dimes, at most one nickel, and at most
four pennies, and
2. cannot have two dimes and a nickel.
Consequently, the amount of change in dimes, nickels, and
pennies cannot exceed 24 cents.
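The lemma's consequence, and the optimality claim itself, can also be checked empirically for small amounts by comparing the greedy answer with an exhaustive dynamic-programming minimum. The helper below is only an illustration of that check, not part of the slides, and it reuses the greedy_change function sketched earlier.

def min_coins(n, denominations):
    # best[m] = fewest coins needed to make m cents (dynamic programming).
    best = [0] + [float("inf")] * n
    for m in range(1, n + 1):
        for c in denominations:
            if c <= m and best[m - c] + 1 < best[m]:
                best[m] = best[m - c] + 1
    return best[n]

# (Assumes greedy_change from the earlier sketch is in scope.)
# Greedy matches the true minimum for every amount up to a dollar with US coins ...
assert all(len(greedy_change(m, [25, 10, 5, 1])) == min_coins(m, [25, 10, 5, 1])
           for m in range(1, 101))
# ... but not in the earlier counterexample system (3 coins vs. the optimal 2).
assert len(greedy_change(22, [20, 15, 7, 1])) > min_coins(22, [20, 15, 7, 1])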