01 Introduction Old

This document discusses algorithms and their analysis. It begins by defining an algorithm as a step-by-step procedure for solving a problem in a finite amount of time. It provides examples of algorithms and distinguishes between algorithms and computer programs. It discusses properties of algorithms like being composed of precise steps and terminating. It covers analyzing algorithms by considering their worst-case, average-case, and best-case scenarios. The document emphasizes the importance of choosing efficient algorithms and analyzing them mathematically to determine their runtime.

Uploaded by

Anushka Sharma

Design and Analysis of Algorithms

Algorithm Definition
• An algorithm is a step-by-step procedure for solving a particular
problem in a finite amount of time.

• More generally, an algorithm is any well-defined computational procedure that takes a collection of elements as input and produces a collection of elements as output.

[Figure: input X flows into an ALGORITHM box (“some mysterious processing”, F: X → Y), which produces output Y = F(X).]
Algorithm -- Examples
• Repairing a lamp
• A cooking recipe
• Calling a friend on the phone
• The rules of how to play a game
• Directions for driving from A to B
• A car repair manual
• Human Brain Project
• Internet & Communication Links (Graph)
• Matrix Multiplication
Algorithm vs. Program
• A computer program is an instance, or concrete representation, of an algorithm in some programming language

• Set of instructions which the computer follows to solve a problem

[Figure: a Problem is solved by an Algorithm (a sequence of instructions describing how to do a task), which is then expressed as a High-Level Language Program.]
Solving Problems (1)
When faced with a problem:
1. First clearly define the problem

2. Think of possible solutions

3. Select the one that seems the best under the prevailing
circumstances

4. And then apply that solution

5. If the solution works as desired, fine; else go back to step 2


Solving Problems (2)
• It is quite common to first solve a problem for a particular case
• Then for another
• And, possibly another
• And watch for patterns and trends that emerge
• And to use the knowledge from these patterns and trends in
coming up with a general solution
• And this general solution is called an “Algorithm”
One Problem, Many Algorithms
Problem
• The statement of the problem specifies, in general terms, the
desired input/output relationship.
Algorithm
• The algorithm describes a specific computational procedure for
achieving that input/output relationship.
Example
• Sorting a sequence of numbers into non-decreasing order.
Algorithms
• Various algorithms e.g. merge sort, quick sort, heap sorts etc.
Problem Instances
• An input sequence is called an instance of a problem

• A problem has many particular instances

• An algorithm must work correctly on all instances of the problem it claims to solve

• Many interesting problems have infinitely many instances
– Since computers are finite, we usually need to limit the number and/or size of possible instances in this case
– This restriction doesn’t prevent us from doing analysis in the abstract
Properties of Algorithms
• It must be composed of an ordered sequence of precise steps.

• It must have a finite number of well-defined instructions/steps.

• The execution sequence of instructions should not be ambiguous.

• It must be correct.

• It must terminate.
Syntax & Semantics
An algorithm is “correct” if its:
• Semantics are correct
• Syntax is correct

Semantics:
• The concept embedded in an algorithm (the soul!)

Syntax:
• The actual representation of an algorithm (the body!)

WARNINGS:
1. An algorithm can be syntactically correct, yet semantically incorrect – a dangerous situation!
2. Syntactic correctness is easier to check than semantic correctness
Algorithm Summary
• Problem Statement
• Relationship b/w input and output

• Algorithm
• Procedure to achieve the relationship

• Definition
• A sequence of steps that transform the input to output

• Instance
• The input needed to compute solution

• Correct Algorithm
• for every input it halts with correct output
Al-Khwarizmi’s Golden Principle
All complex problems can be and must be solved
using the following simple steps:

1. Break down the problem into small, simple sub-problems


2. Arrange the sub-problems in such an order that each of them
can be solved without affecting any other
3. Solve them separately, in the correct order
4. Combine the solutions of the sub-problems to form the
solution of the original problem
Why Algorithms are Useful?
• Once we find an algorithm for solving a problem, we do not need
to re-discover it the next time we are faced with that problem

• Once an algorithm is known, the task of solving the problem


reduces to following (almost blindly and without thinking) the
instructions precisely

• All the knowledge required for solving the problem is present in


the algorithm
Why Write an Algorithm Down?
• For your own use in the future, so that you don’t have to spend
time rethinking it

• Written form is easier to modify and improve

• Makes it easy when explaining the process to others


Designing of Algorithms
• Selecting the basic approaches to the solution of the problem
• Choosing data structures
• Putting the pieces of the puzzle together
• Expressing and implementing the algorithm
• clearness, conciseness, effectiveness, etc.

Major Factors in Algorithms Design


• Correctness: An algorithm is said to be correct if, for every input, it
halts with the correct output. An incorrect algorithm might not halt at
all, or it might halt with an answer other than the desired one. A
correct algorithm solves a computational problem.
• Algorithm Efficiency: Measuring the efficiency of an algorithm
• do its analysis, i.e. its growth rate
• compare efficiencies of different algorithms for the same problem
Designing of Algorithms
• Most basic and popular algorithms are search and sort algorithms
• Which algorithm is the best?
• Depends upon various factors, for example in case of sorting
• The number of items to be sorted
• The extent to which the items are already sorted
• Possible restrictions on the item values
• The kind of storage device to be used etc.
Important Designing Techniques
• Brute Force – straightforward, naive approach – mostly expensive

• Divide-and-Conquer – divide into smaller sub-problems
• e.g. merge sort

• Iterative Improvement – improve one change at a time
• e.g. greedy algorithms

• Decrease-and-Conquer – decrease the instance size
• e.g. Fibonacci sequence

• Transform-and-Conquer – modify the problem first and then solve it
• e.g. repeating numbers in an array

• Dynamic Programming – dependent sub-problems, reuse results


Algorithm Efficiency
• Several possible algorithms exist that can solve a particular problem
• each algorithm has a given efficiency
• compare efficiencies of different algorithms for the same problem
• The efficiency of an algorithm is a measure of the amount of resources
consumed in solving a problem of size n
• Running time (number of primitive steps that are executed)
• Memory/Space
• Analysis in the context of algorithms is concerned with predicting the
required resources
• There are always tradeoffs between these two efficiencies
• allow one to decrease the running time of an algorithm solution by increasing
space to store and vice-versa
• Time is the resource of most interest
• By analyzing several candidate algorithms, the most efficient one(s) can be
identified
Algorithm Efficiency
Two goals for best design practices:
1. To design an algorithm that is easy to understand, code, debug.
2. To design an algorithm that makes efficient use of the computer’s
resources.

How do we improve the time efficiency of a program?


The 90/10 Rule
• 90% of the execution time of a program is spent in executing 10% of the
code. So, how do we locate the critical 10%?
• software metrics tools
• global counters to locate bottlenecks (loop executions, function calls)

Time Efficiency improvement


• Good programming: move code that does not belong inside loops out of them
• Remove any unnecessary I/O operations
• Replace an inefficient algorithm (best solution)

Moral - Choose the most appropriate algorithm(s) BEFORE


program implementation
Analysis of Algorithms
• Two essential approaches to measuring algorithm efficiency:
• Empirical analysis:
• Program the algorithm and measure its running time on example
instances
• Theoretical analysis
• Employ mathematical techniques to derive a function which relates the
running time to the size of instance
Analysis of Algorithms
• Many criteria affect the running time of an algorithm, including
• speed of CPU, bus and peripheral hardware
• design time, programming time and debugging time
• language used and coding efficiency of the programmer
• quality of input (good, bad or average)
• But
• Programs derived from two algorithms for solving the same problem
should both be
• Machine independent
• Language independent
• Amenable to mathematical study
• Realistic
Analysis of Algorithms
• The following three cases are investigated in algorithm analysis:

• A) Worst case: The worst outcome for any possible input


• We often concentrate on this for our analysis as it provides a clear
upper bound of resources
• an absolute guarantee

• B) Average case: Measures performance over the entire set of


possible instances
• Very useful, but treat with care: what is “average”?
• Random (equally likely) inputs vs. real-life inputs

• C) Best Case: The best outcome for any possible input


• provides lower bound of resources
Analysis of Algorithms
• An algorithm may perform very differently on different example
instances. e.g: bubble sort algorithm might be presented with
data:
• already in order
• in random order
• in the exact reverse order of what is required

• Average case analysis can be difficult in practice


• to do a realistic analysis we need to know the likely distribution of
instances
• However, it is often very useful and more relevant than the worst
case; for example, quicksort has a catastrophic (extremely harmful)
worst case, but in practice it is one of the best sorting algorithms known

• The average case uses the following concept in probability theory.


Suppose the numbers n1, n2, …, nk occur with respective
probabilities p1, p2, …, pk. Then the expectation or average value E
is given by E = n1p1 + n2p2 + … + nkpk
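This expectation formula can be checked with a short Python sketch; the outcome values and probabilities below are hypothetical, not from the slides:

```python
# E = n1*p1 + n2*p2 + ... + nk*pk
# Hypothetical data: outcomes 1..4, each equally likely (p = 1/4).
counts = [1, 2, 3, 4]
probs = [0.25, 0.25, 0.25, 0.25]

E = sum(n * p for n, p in zip(counts, probs))
print(E)  # 2.5, the average of 1..4
```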
Empirical Analysis
• Most algorithms transform input
objects into output objects

• The running time of an algorithm


typically grows with the input size

• Average case time is often


difficult to determine

• We focus on the worst case


running time
• Easier to analyze
• Crucial to applications such as
games, finance and robotics
Empirical Analysis
• Write a program implementing
the algorithm

• Run the program with inputs of


varying size and compositions

• Use timing routines to get an


accurate measure of the actual
running time e.g.
System.currentTimeMillis()

• Plot the results
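In Python (rather than Java's System.currentTimeMillis()), such a measurement might look like the sketch below; sorted() merely stands in for whatever algorithm is under test:

```python
import random
import time

def measure(n):
    """Time one run of the algorithm under test on a random input of size n."""
    data = [random.random() for _ in range(n)]
    start = time.perf_counter()
    sorted(data)                      # the algorithm under test
    return time.perf_counter() - start

# Run with inputs of varying size, then plot or tabulate the results.
for n in (1_000, 10_000, 100_000):
    print(n, measure(n))
```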


Limitations of Empirical Analysis
• Implementation dependent
• Execution time differ for different
implementations of same
program

• Platform dependent
• Execution time differ on different
architectures

• Data dependent
• Execution time is sensitive to
amount and type of data
manipulated.

• Language dependent
• Execution time differ for same
code, coded in different
languages
∴ an absolute measure for an algorithm is not appropriate
Theoretical Analysis
• Data independent
• Takes into account all possible inputs

• Platform independent

• Language independent

• Implementation independent
• not dependent on skill of programmer
• can save time of programming an inefficient solution

• Characterises running time as a function of input size, n.


Easy to extrapolate without risk
Why Analysis of Algorithms?
• For real-time problems, we would like to prove that an
algorithm terminates in a given time.

• Algorithmics may indicate which is the best and fastest


solution to a problem without having to code up and test
different solutions

• Many problems are in a complexity class for which no


practical algorithms are known
• better to know this before wasting a lot of time trying to develop a
“perfect” solution
But Computers are So Fast These Days??
• Do we need to bother with algorithmics and complexity any
more?
• computers are fast, compared to even 10 years ago...

• Many problems are so computationally demanding that no


growth in the power of computing will help very much.

• Speed and efficiency are still important


Importance of Analyzing Algorithms
• Need to recognize limitations of various algorithms for solving a
problem

• Need to understand relationship between problem size and running


time
• When is a running program not good enough?

• Need to learn how to analyze an algorithm's running time without


coding it

• Need to learn techniques for writing more efficient code

• Need to recognize bottlenecks in code as well as which parts of code


are easiest to optimize
Importance of Analyzing Algorithms
• An array-based list retrieve operation takes at most one operation, a
linked-list-based list retrieve operation at most “n” operations.

• But insert and delete operations are much easier on a linked-list-


based list implementation.

• When selecting the implementation of an Abstract Data Type (ADT),


we have to consider how frequently particular ADT operations occur
in a given application.

• For small problem size, we can ignore the algorithm’s efficiency.

• We have to weigh the trade-offs between an algorithm’s time


requirement and memory requirements.
What do we analyze about Algorithms?
• Algorithms are analyzed to understand their behavior and to
improve them if possible

• Correctness
• Does the input/output relation match algorithm requirement?
• Amount of work done
• Basic operations to do task
• Amount of space used
• Memory used
• Simplicity, clarity
• Verification and implementation.
• Optimality
• Is it impossible to do better?
Problem Solving Process
• Problem
• Strategy
• Algorithm
• Input
• Output
• Steps
• Analysis
• Correctness
• Time & Space Optimality
• Implementation
• Verification
Computation Model for Analysis
• To analyze an algorithm is to determine the amount of resources
necessary to execute it. These resources include computational
time, memory and communication bandwidth.

• Analysis of the algorithm is performed with respect to a


computational model called RAM (Random Access Machine)

• A RAM is an idealized uni-processor machine with an infinitely large
random-access memory
• Instruction are executed one by one
• All memory equally expensive to access
• No concurrent operations
• Constant word size
• All reasonable instructions (basic operations) take unit time
Complexity of an Algorithm
• The complexity of an algorithm is the amount of work the algorithm
performs to complete its task. It is the level of difficulty in solving
mathematically posed problems as measured by:
• Time (time complexity)
• No. of steps or arithmetic operations (computational complexity)
• Memory space required (space complexity)

• Complexity is a function T(n) which yields the time (or space)


required to execute the algorithm of a problem of size ‘n’.
Pseudocode
• High-level description of an algorithm
• More structured than English prose but less detailed than a program
• Preferred notation for describing algorithms
• Hides program design issues

ArrayMax(A, n)
Input: Array A of n integers
Output: maximum element of A

1. currentMax ← A[0];
2. for i = 1 to n-1 do
3.   if A[i] > currentMax then
4.     currentMax ← A[i]
5. return currentMax;
Pseudocode
• Indentation indicates block structure, e.g. the body of a loop
• Looping constructs while, for and the conditional if-then-else
• The symbol // indicates that the remainder of the line is a comment.
• Arithmetic & logical expressions: (+, −, *, /), (and, or, not)
• Assignment & swap statements: a ← b, a ← b ← c, a ↔ b
• Return/Exit/End: termination of an algorithm or block

ArrayMax(A, n)
Input: Array A of n integers
Output: maximum element of A

1. currentMax ← A[0];
2. for i = 1 to n-1 do
3.   if A[i] > currentMax then
4.     currentMax ← A[i]
5. return currentMax;
Pseudocode
• Local variables are used unless a global variable is explicitly defined
• If A is a structure, then |A| is the size of the structure. If A is an
array, then n = length[A], the upper bound of the array. Array elements
are accessed by the name followed by an index in square brackets, A[i].
• Parameters are passed to a procedure by value
• Semicolons are used for multiple short statements written on one line

ArrayMax(A, n)
Input: Array A of n integers
Output: maximum element of A

1. currentMax ← A[0]
2. for i = 1 to n-1 do
3.   if A[i] > currentMax then
4.     currentMax ← A[i]
5. return currentMax
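The ArrayMax pseudocode above translates directly to Python; this is just a sketch of that translation:

```python
def array_max(A):
    """Return the maximum element of A, mirroring the ArrayMax pseudocode."""
    current_max = A[0]               # currentMax <- A[0]
    for i in range(1, len(A)):       # for i = 1 to n-1 do
        if A[i] > current_max:       #   if A[i] > currentMax then
            current_max = A[i]       #     currentMax <- A[i]
    return current_max               # return currentMax

print(array_max([3, 1, 4, 1, 5, 9, 2, 6]))  # 9
```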
Elementary Operations
• An elementary operation is an operation which takes constant time
regardless of problem size.
• The running time of an algorithm on a particular input is determined
by the number of “Elementary Operations” executed.
• Theoretical analysis on paper from a description of an algorithm

• Defining elementary operations is a little trickier than it appears


• We want elementary operations to be relatively machine and language
independent, but still as informative and easy to define as possible

• Example of elementary operations include


• variable assignment
• arithmetic operations (+, -, x, /) on integers
• comparison operations (a < b)
• boolean operations
• accessing an element of an array

• We will measure the number of steps taken in terms of the size of the input


Components of an Algorithm
• Variables and values
• Instructions
• Sequences
• Selections
• Repetitions
Instruction and Sequence
• A linear sequence of elementary operations is also performed in
constant time.
• More generally, given two program fragments P1 and P2 which run
sequentially in times t1 and t2
• use the maximum rule which states that the larger time dominates
• complexity will be max(t1,t2)

e.g. Assignment Statements


x = a ........................ 1
x = a + b*c/h − u ..... 1
a > b ........................ 1

T(n) = 1 + 1 + 1 = 3
Sequences
• Analysing a group of consecutive statements
• The statement taking the maximum time will be the one counted
• use the maximum rule

Block #1 t1

T(n) = max(t1,t2)
Block #2 t2

• e.g. a fragment with single for-loop followed by double for- loop


• T(n) = n2
• Always analyze function calls first
Selection
• If <test> then P1 else P2 structures are a little harder; conditional
loops.
• The maximum rule can be applied here too:
• max(t1, t2), assuming t1, t2 are times of P1, P2
• However, the maximum rule may prove too conservative
• if <test> is always true the time is t1
• if <test> is always false the time is t2

e.g. if (a > b) then ........... 1
         a = 2 .................. 1
         b = b + 3*t .......... 1
     else
         x = x + 2*3 ......... 1

T(n) = 1 + max(2, 1) = 3
Repetition (Loops)
• Analyzing loops: Any loop has two parts:
• How many iterations are performed?
• How many steps per iteration?

for i = 1 to n do
P(i);

• Assume that P(i) takes time t, where t is independent of i


• The running time of a for-loop is at most the running time of the
statements inside the for-loop times the number of iterations

T(n) = nt

• This approach is reasonable, provided n is positive


• If n is zero or negative the relationship T(n) = nt is not valid
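A minimal simulation of this counting rule (Python; the per-iteration cost t is hypothetical):

```python
def loop_steps(n, t=1):
    """Steps for: for i = 1 to n do P(i), where each P(i) costs t steps."""
    steps = 0
    for _ in range(max(n, 0)):   # for n <= 0 the body never runs
        steps += t
    return steps

print(loop_steps(5, 3))   # 15, i.e. n*t
print(loop_steps(0, 3))   # 0 -- T(n) = n*t does not hold for n <= 0
```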
Repetition (Loops)
Analysing Nested Loops

for i = 0 to n do
for j = 0 to m do
P(j);

• Assume that P(j) takes time t, where t is independent of i and j


• Start with outer loop:
• How many iterations? n
• How much time per iteration? Need to evaluate inner loop

• Analyze inside-out. The total running time is the running time of the
statement multiplied by the product of the sizes of all the for-loops

T(n) = nmt
Repetition (Loops)
Analysing Nested Loops

for i = 0 to n do
for j = 0 to i do
P(j);
• Assume that P(j) takes time t, where t is independent of i and j
• How do we analyze the running time of an algorithm that has
complex nested loop?
• The answer is we write out the loops as summations and then
solve the summations. To convert loops into summations, we
work from inside-out.

T(n) = n + ∑(i=1 to n) i + t·∑(i=1 to n) i

     = n + n(n+1)/2 + t·n(n+1)/2

(charging one step per outer iteration and 1 + t steps per inner iteration, with the inner loop counted as running i times)
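The closed form can be sanity-checked by simulating the loops with the same counting convention; a sketch, with the cost t per call to P chosen arbitrarily:

```python
def nested_steps(n, t=1):
    """Count steps: 1 per outer iteration, 1 + t per inner iteration,
    with the inner loop run i times for i = 1..n."""
    steps = 0
    for i in range(1, n + 1):
        steps += 1                # outer-loop bookkeeping
        for _ in range(i):
            steps += 1 + t        # inner-loop test plus one call to P(j)
    return steps

n, t = 6, 2
print(nested_steps(n, t))                 # 69
print(n + n*(n+1)//2 + t*n*(n+1)//2)      # 69, the closed form
```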
Analysis Example
Algorithm:                              Number of times executed
1. n = read input from user             1
2. sum = 0                              1
3. i = 0                                1
4. while i < n                          n
5.   number = read input from user      n
6.   sum = sum + number                 n
7.   i = i + 1                          n
8. mean = sum / n                       1

The computing time for this algorithm in terms of input size n is:
T(n) = 1 + 1 + 1 + n + n + n + n + 1
T(n) = 4n + 4
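An instrumented Python sketch of this algorithm, counting steps with the same convention as the table (input comes from a list rather than the user):

```python
def mean_steps(numbers):
    """Compute the mean while counting steps as in the table above."""
    steps = 0
    n = len(numbers); steps += 1          # 1. n = read input
    s = 0;            steps += 1          # 2. sum = 0
    i = 0;            steps += 1          # 3. i = 0
    while i < n:
        steps += 1                        # 4. while test
        x = numbers[i]; steps += 1        # 5. number = read input
        s = s + x;      steps += 1        # 6. sum = sum + number
        i = i + 1;      steps += 1        # 7. i = i + 1
    steps += 1                            # 8. mean = sum / n
    return s / n, steps

print(mean_steps([2, 4, 6, 8]))   # (5.0, 20) -- and 4n + 4 = 20 for n = 4
```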
Asymptotic Growth Rate
• Changing the hardware/software environment
• Affects T(n) by constant factor, but does not alter the growth rate of T(n)

• The exact complexity function of an algorithm is usually very
complicated. Its growth rate is what matters for the analysis, and is a
suitable measure for comparing algorithms as the input size n increases.
• Asymptotic notations like big-O, big-Omega, and big-Theta are used to
express the complexity, because different implementations of an algorithm
may differ in efficiency.
• The big-Oh notation gives an upper bound on the growth rate of a
function.
• The statement “f(n) is O(g(n))” means that the growth rate of f(n) is no
more than the growth rate of g(n).
• We can use the big-Oh notation to rank functions according to their growth
rate.
Asymptotic Growth Rate
Two reasons why we are interested in asymptotic growth rates:

• Practical purposes: For large problems, when we expect to have


big computational requirements

• Theoretical purposes: concentrating on growth rates frees us


from some important issues:
• fixed costs (e.g. switching the computer on!), which may
dominate for a small problem size but be largely irrelevant
• machine and implementation details
• The growth rate will be a compact and easy to understand the
function
Properties of Growth-Rate Functions
Example: 5n + 3
Estimated running time for different values of n:
n = 10 => 53 steps
n = 100 => 503 steps
n = 1,000 => 5003 steps
n = 1,000,000 => 5,000,003 steps

As “n” grows, the number of steps grows in linear proportion to n for
this function “Sum”.

What about the “+3” and the “5” in 5n + 3?
• As n gets large, the +3 becomes insignificant
• The 5 is inaccurate, as different operations require varying amounts of
time, and it carries no significant importance

What is fundamental is that the time is linear in n.


Asymptotic Algorithm Analysis
• The asymptotic analysis of an algorithm determines the running time
in big-Oh notation
• To perform the asymptotic analysis
• We find the worst-case number of primitive operations executed as a
function of the input size n
• We express this function with big-Oh notation

• Example: An algorithm executes T(n) = 2n2 + n elementary


operations. We say that the algorithm runs in O(n2) time

• Growth rate is not affected by constant factors or lower-order terms


so these terms can be dropped
• The 2n2 + n time bound is said to "grow asymptotically" like n2
• This gives us an approximation of the complexity of the algorithm
• Ignores lots of (machine dependent) details
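The claim that 2n2 + n “grows asymptotically like n2” can be illustrated numerically; this sketch is an addition, not part of the original slides:

```python
def T(n):
    """The example operation count T(n) = 2n^2 + n."""
    return 2 * n**2 + n

for n in (10, 1_000, 100_000):
    print(n, T(n) / n**2)   # the ratio approaches the constant factor 2
```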
Algorithm Efficiency
Measuring efficiency of an algorithm
• do its analysis i.e. growth rate.
• Compare efficiencies of different algorithms for the same problem.

As inputs get larger, any algorithm of a smaller order will be more


efficient than an algorithm of a larger order

[Figure: time (steps) vs. input size, plotting 0.05 N2 = O(N2) against 3N = O(N); the curves cross at N = 60, beyond which the O(N) algorithm is more efficient.]
Important Functions
These functions often appear in algorithm analysis: [table omitted in this copy]

A Comparison of Growth-Rate Functions
Size does matter: what happens if we increase the input size N? [table omitted in this copy]
Performance Classification
f(n)      Classification

1         Constant: run time is fixed and does not depend upon n. Most instructions are
executed once, or only a few times, regardless of the amount of information being processed.

log n     Logarithmic: when n increases, so does run time, but much more slowly. Common in
programs which solve large problems by transforming them into smaller problems.

n         Linear: run time varies directly with n. Typically, a small amount of processing is done on
each element.

n log n   When n doubles, run time slightly more than doubles. Common in programs which
break a problem down into smaller sub-problems, solve them independently, then combine the solutions.

n2        Quadratic: when n doubles, run time increases fourfold. Practical only for small
problems; typically the program processes all pairs of inputs (e.g. in a doubly nested loop).

n3        Cubic: when n doubles, run time increases eightfold.

2n        Exponential: when n doubles, run time squares. This is often the result of a natural, “brute
force” solution.
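The doubling behavior in the table can be verified numerically; a sketch (the choice n = 1000 is arbitrary):

```python
import math

n = 1000
ratios = {name: f(2 * n) / f(n) for name, f in [
    ("log n",   math.log2),
    ("n",       lambda x: x),
    ("n log n", lambda x: x * math.log2(x)),
    ("n^2",     lambda x: x ** 2),
    ("n^3",     lambda x: x ** 3),
]}
print(ratios)   # n^2 grows fourfold, n^3 eightfold, n log n slightly more than doubles
```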
Running Time vs. Time Complexity
• Running time is how long it takes a program to run.

• Time complexity is a description of the asymptotic behavior of


running time as input size tends to infinity.

• The exact running time might be 2036n2 + 17453n + 18464, but you
can say that the running time “is” O(n2), because that is the formal
(idiomatic) way to describe complexity classes with big-O notation.

• In fact, the running time is not a complexity class; it is either a
duration, or a function which gives you the duration. “Being O(n2)” is
a mathematical property of that function, not a full characterization of it.
Model of Computation
Drawbacks:
• poor assumption that each basic operation takes constant time
• Adding, Multiplying, Comparing etc.

Finally what about Our Model?


• With all these weaknesses, our model is not so bad because
• we want a comparison of algorithms, not an absolute analysis
• we deal with large inputs, not small ones

• Model seems to work well describing computational power of


modern nonparallel machines

Can we do Exact Measure of Efficiency ?


• Exact, not asymptotic, measure of efficiency can be sometimes
computed but it usually requires certain assumptions concerning
implementation
Complexity Examples

What does the following algorithm compute?

procedure who_knows(a1, a2, …, an: integers)


m := 0
for i := 1 to n-1
for j := i + 1 to n
if |ai – aj| > m then m := |ai – aj|

{m is the maximum difference between any two numbers in the input


sequence}

Comparisons: n-1 + n-2 + n-3 + … + 1


= n*(n – 1)/2 = 0.5n2 – 0.5n

Time complexity is O(n2).


Complexity Examples

Another algorithm solving the same problem:

procedure max_diff(a1, a2, …, an: integers)


min := a1
max := a1
for i := 2 to n
if ai < min then min := ai
else if ai > max then max := ai
m := max - min

Comparisons: 2n + 2
Time complexity is O(n).
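Both procedures can be sketched in Python to confirm they compute the same answer (the sample data is hypothetical):

```python
def who_knows(a):
    """O(n^2): examine every pair, as in the first procedure."""
    m = 0
    for i in range(len(a) - 1):
        for j in range(i + 1, len(a)):
            if abs(a[i] - a[j]) > m:
                m = abs(a[i] - a[j])
    return m

def max_diff(a):
    """O(n): one pass tracking the minimum and maximum."""
    lo = hi = a[0]
    for x in a[1:]:
        if x < lo:
            lo = x
        elif x > hi:
            hi = x
    return hi - lo

data = [7, 1, 9, 4, 3]
print(who_knows(data), max_diff(data))   # 8 8
```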
Example: Linear Search Algorithm
• Given a linear array A containing n elements, locate the position of an
item ‘x’ or indicate that ‘x’ does not appear in A.

• The linear search algorithm solves this problem by comparing ‘x’, one
by one, with each element in A. That is, we compare ‘x’ with A[1],
then A[2], and so on, until we find the location of ‘x’.

LinearSearch(A, x) Number of times executed


i ← 1                               1
while (i ≤ n and A[i] ≠ x)          n
    i ← i + 1                       n
if i ≤ n then                       1
    return true                     1
else
    return false                    1

T(n) = 2n + 3
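A Python sketch of this search that also counts loop iterations (0-based indexing, unlike the 1-based pseudocode):

```python
def linear_search(A, x):
    """Return (found, loop iterations) for the linear search above."""
    i = 0
    iterations = 0
    while i < len(A) and A[i] != x:
        i += 1
        iterations += 1
    return i < len(A), iterations

print(linear_search([5, 3, 9, 7], 9))   # (True, 2)
print(linear_search([5, 3, 9, 7], 4))   # (False, 4) -- the worst case
```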
Best/Worst Case
Best case: ‘x’ is located in the first position of the array, so the loop
test runs once and the loop body never executes:
T(n) = 1 + 1 + 0 + 1 + 1
     = 4
     = O(1)

Worst case: ‘x’ is located in last location of the array or is not there
at all.
T(n) = 1 + n + n + 1 +1
= 2n +3
= O(n)
Average case
Average Case: Assume that it is equally likely for ‘x’ to appear at any
position in array A. Accordingly, the number of comparisons can be
any of the numbers 1, 2, 3, …, n, and each number occurs with
probability p = 1/n.

T(n) = 1·(1/n) + 2·(1/n) + … + n·(1/n)

     = (1 + 2 + 3 + … + n)·(1/n)
     = [n(n+1)/2]·(1/n) = (n+1)/2
     = O(n)
= O(n)

This agrees with our intuitive feeling that the average number of
comparisons needed to find the location of ‘x’ is approximately equal
to half the number of elements in list A.
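This average can be confirmed empirically; the sketch below averages the comparison count over every possible target position (the array contents are hypothetical):

```python
def comparisons_to_find(A, x):
    """Element comparisons linear search makes before locating x."""
    for k, v in enumerate(A, start=1):
        if v == x:
            return k
    return len(A)

n = 100
A = list(range(n))
# Average over all n equally likely target positions:
avg = sum(comparisons_to_find(A, x) for x in A) / n
print(avg, (n + 1) / 2)   # 50.5 50.5
```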
