doi: 10.15347/wjs/2019.005
Encyclopedic Review Article
Abstract
In computer science, binary search, also known as half-interval search,[1] logarithmic search,[2] or binary chop,[3] is a search algorithm that finds the position of a target value within a sorted array.[4] Binary search compares the target value to the middle element of the array. If they are not equal, the half in which the target cannot lie is eliminated and the search continues on the remaining half, again taking the middle element to compare to the target value, and repeating this until the target value is found. If the search ends with the remaining half being empty, the target is not in the array.

Binary search runs in logarithmic time in the worst case, making 𝑂(log 𝑛) comparisons, where 𝑛 is the number of elements in the array, 𝑂 is big O notation, and log is the logarithm.[5] Binary search is faster than linear search except for small arrays, but the array must be sorted first. There are specialized data structures designed for fast searching, such as hash tables, that can be searched more efficiently than a sorted array. However, binary search can be used to solve a wider range of problems, such as finding the next-smallest or next-largest element in the array relative to the target even if it is absent from the array.
There are numerous variations of binary search. In particular, fractional cascading speeds up binary searches for
the same value in multiple arrays. Fractional cascading efficiently solves a number of search problems in compu-
tational geometry and in numerous other fields. Exponential search extends binary search to unbounded lists. The
binary search tree and B-tree data structures are based on binary search.
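For illustration, the basic procedure described above can be sketched in Python as follows (a minimal sketch; the function name and zero-based indexing are illustrative choices):

def binary_search(A, T):
    """Standard binary search: returns an index of T in sorted list A,
    or -1 if T is not present. Compares the middle element each iteration."""
    L, R = 0, len(A) - 1
    while L <= R:
        m = (L + R) // 2          # middle element of the remaining half
        if A[m] < T:
            L = m + 1             # target can only lie in the right half
        elif A[m] > T:
            R = m - 1             # target can only lie in the left half
        else:
            return m
    return -1                     # remaining half is empty: T not in A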
…every iteration. Some implementations leave out this check during each iteration. The algorithm would perform this check only when one element is left (when 𝐿 = 𝑅). This results in a faster comparison loop, as one comparison is eliminated per iteration, but it requires one more iteration on average.[1]

Hermann Bottenbruch published the first implementation to leave out this check in 1962.[1][8]

1. Set 𝐿 to 0 and 𝑅 to 𝑛 − 1.
2. While 𝐿 ≠ 𝑅,
   1. Set 𝑚 (the position of the middle element) to the ceiling of (𝐿 + 𝑅)/2, which is the least integer greater than or equal to (𝐿 + 𝑅)/2.
   2. If 𝐴𝑚 > 𝑇, set 𝑅 to 𝑚 − 1.
   3. Else (𝐴𝑚 ≤ 𝑇), set 𝐿 to 𝑚.
3. Now 𝐿 = 𝑅, the search is done. If 𝐴𝐿 = 𝑇, return 𝐿. Otherwise, the search terminates as unsuccessful.
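For illustration, a minimal Python sketch of this deferred-equality-check variant (the function name is illustrative; the array is assumed sorted in ascending order and nonempty):

def binary_search_alternative(A, T):
    """Bottenbruch-style binary search: the equality test is deferred
    until only one element remains. Assumes A is sorted ascending and
    nonempty. Returns the index of T in A, or -1 if T is not present."""
    L, R = 0, len(A) - 1
    while L != R:
        m = (L + R + 1) // 2   # ceiling of (L + R) / 2
        if A[m] > T:
            R = m - 1
        else:                  # A[m] <= T
            L = m
    # Now L == R: a single candidate remains; test it for equality.
    return L if A[L] == T else -1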
To find the leftmost element, the following procedure can be used:[9]

1. Set 𝐿 to 0 and 𝑅 to 𝑛.
2. While 𝐿 < 𝑅,
   1. Set 𝑚 (the position of the middle element) to the floor of (𝐿 + 𝑅)/2, which is the greatest integer less than or equal to (𝐿 + 𝑅)/2.
   2. If 𝐴𝑚 < 𝑇, set 𝐿 to 𝑚 + 1.
   3. Else (𝐴𝑚 ≥ 𝑇), set 𝑅 to 𝑚.
3. Return 𝐿.

If 𝐿 < 𝑛 and 𝐴𝐿 = 𝑇, then 𝐴𝐿 is the leftmost element that equals 𝑇. Even if 𝑇 is not in the array, 𝐿 is the rank of 𝑇 in the array, or the number of elements in the array that are less than 𝑇.

Where floor is the floor function, the pseudocode for this version is:
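function binary_search_leftmost(A, n, T):
    L := 0
    R := n
    while L < R:
        m := floor((L + R) / 2)
        if A[m] < T:
            L := m + 1
        else:
            R := m
    return L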
To find the rightmost element, the following procedure can be used:

1. Set 𝐿 to 0 and 𝑅 to 𝑛.
2. While 𝐿 < 𝑅,
   1. Set 𝑚 (the position of the middle element) to the floor of (𝐿 + 𝑅)/2, which is the greatest integer less than or equal to (𝐿 + 𝑅)/2.
   2. If 𝐴𝑚 > 𝑇, set 𝑅 to 𝑚.
   3. Else (𝐴𝑚 ≤ 𝑇), set 𝐿 to 𝑚 + 1.
3. Return 𝐿 − 1.
If 𝐿 > 0 and 𝐴𝐿−1 = 𝑇, then 𝐴𝐿−1 is the rightmost element that equals 𝑇. Even if 𝑇 is not in the array, 𝑛 − 𝐿 is the number of elements in the array that are greater than 𝑇.

Where floor is the floor function, the pseudocode for this version is:

function binary_search_rightmost(A, n, T):
    L := 0
    R := n
    while L < R:
        m := floor((L + R) / 2)
        if A[m] > T:
            R := m
        else:
            L := m + 1
    return L - 1

Figure 1 | Binary search can be adapted to compute approximate matches. In the example above, the rank, predecessor, successor, and nearest neighbor are shown for the target value 5, which is not in the array.

• The nearest neighbor of the target value is either its predecessor or successor, whichever is closer.
• Range queries are also straightforward.[11] Once the ranks of the two values are known, the number of elements greater than or equal to the first value and less than the second is the difference of the two ranks. This count can be adjusted up or down by one according to whether the endpoints of the range should be considered to be part of the range and whether the array contains entries matching those endpoints.[12]
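For illustration, these rank-based operations can be sketched in Python with the standard bisect module, whose bisect_left and bisect_right functions correspond to the leftmost and rightmost procedures above (the function names approximate_matches and count_in_range are illustrative):

import bisect

def approximate_matches(A, T):
    """Rank, predecessor, successor, and nearest neighbor of T in sorted A."""
    rank = bisect.bisect_left(A, T)              # number of elements < T
    pred = A[rank - 1] if rank > 0 else None     # largest element < T
    right = bisect.bisect_right(A, T)            # number of elements <= T
    succ = A[right] if right < len(A) else None  # smallest element > T
    # Nearest neighbor: predecessor or successor, whichever is closer.
    candidates = [x for x in (pred, succ) if x is not None]
    nearest = min(candidates, key=lambda x: abs(x - T)) if candidates else None
    return rank, pred, succ, nearest

def count_in_range(A, lo, hi):
    """Number of elements x with lo <= x < hi: the difference of two ranks."""
    return bisect.bisect_left(A, hi) - bisect.bisect_left(A, lo)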
…two, then this is always the case. Otherwise, the search may perform ⌊log₂(𝑛) + 1⌋ iterations if the search reaches the deepest level of the tree. However, it may make ⌊log₂(𝑛)⌋ iterations, which is one less than the worst case, if the search ends at the second-deepest level of the tree. By dividing the array in half, binary search ensures that the sizes of both subarrays are as similar as possible.[13]

Figure 2 | A tree representing binary search. The array being searched here is [20, 30, 40, 50, …].

Figure 3 | The worst case is reached when the search reaches the deepest level of the tree, while the best case is reached when the target value is the middle element.

Space complexity

Binary search requires three pointers to elements, which may be array indices or pointers to memory locations, regardless of the size of the array. However, it requires at least ⌈log₂(𝑛)⌉ bits to encode a pointer to an element of an array with 𝑛 elements.[16] Therefore, the space complexity of binary search is 𝑂(log 𝑛). In addition, it takes 𝑂(𝑛) space to store the array.
…unique internal paths. Since there is only one path from the root to any single node, each internal path represents a search for a specific element. If there are 𝑛 elements, which is a positive integer, and the internal path length is 𝐼(𝑛), then the average number of iterations for a successful search is 𝑇(𝑛) = 1 + 𝐼(𝑛)/𝑛, with the one iteration added to count the initial iteration.[13]

Since binary search is the optimal algorithm for searching with comparisons, this problem is reduced to calculating the minimum internal path length of all binary trees with 𝑛 nodes, which is equal to:[17]

𝐼(𝑛) = ∑_{𝑘=1}^{𝑛} ⌊log₂(𝑘)⌋

For example, for a seven-element array this sum is ∑_{𝑘=1}^{7} ⌊log₂(𝑘)⌋ = 0 + 2(1) + 4(2) = 2 + 8 = 10, and the average number of iterations would be 1 + 10/7 = 2 3/7 based on the equation for the average case. The sum for 𝐼(𝑛) can be simplified to:[13]

𝐼(𝑛) = ∑_{𝑘=1}^{𝑛} ⌊log₂(𝑘)⌋ = (𝑛 + 1)⌊log₂(𝑛 + 1)⌋ − 2^{⌊log₂(𝑛+1)⌋+1} + 2

Substituting the equation for 𝐼(𝑛) into the equation for 𝑇(𝑛):[13]

𝑇(𝑛) = 1 + ((𝑛 + 1)⌊log₂(𝑛 + 1)⌋ − 2^{⌊log₂(𝑛+1)⌋+1} + 2)/𝑛 = ⌊log₂(𝑛)⌋ + 1 − (2^{⌊log₂(𝑛)⌋+1} − ⌊log₂(𝑛)⌋ − 2)/𝑛

For integer 𝑛, this is equivalent to the equation for the average case on a successful search specified above.
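The closed form can be checked numerically. The following Python sketch (illustrative) compares the formula against the iteration count of the standard procedure averaged over all targets; for 𝑛 = 7 both give 17/7 = 2 3/7:

def avg_successful_iterations(n):
    """Closed form: floor(log2 n) + 1 - (2**(floor(log2 n)+1) - floor(log2 n) - 2) / n."""
    k = n.bit_length() - 1          # k = floor(log2(n))
    return k + 1 - (2 ** (k + 1) - k - 2) / n

def measured_average(n):
    """Average iterations of the standard procedure over all n targets."""
    A, total = list(range(n)), 0
    for target in A:
        L, R = 0, n - 1
        while True:
            total += 1              # one iteration of the comparison loop
            m = (L + R) // 2
            if A[m] < target:
                L = m + 1
            elif A[m] > target:
                R = m - 1
            else:
                break               # target found
    return total / n

assert abs(avg_successful_iterations(7) - measured_average(7)) < 1e-9  # both 2 3/7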
Unsuccessful searches

Unsuccessful searches can be represented by augmenting the tree with external nodes, which forms an extended binary tree. If an internal node, or a node present in the tree, has fewer than two child nodes, then additional child nodes, called external nodes, are added so that each internal node has two children. By doing so, an unsuccessful search can be represented as a path to an external node, whose parent is the single element that remains during the last iteration. An external path is a path from the root to an external node. The external path length is the sum of the lengths of all unique external paths. If there are 𝑛 elements, which is a positive integer, and the external path length is 𝐸(𝑛), then the average number of iterations for an unsuccessful search is 𝑇′(𝑛) = 𝐸(𝑛)/(𝑛 + 1), with the one iteration added to count the initial iteration. The external path length is divided by 𝑛 + 1 instead of 𝑛 because there are 𝑛 + 1 external paths, representing the intervals between and outside the elements of the array.[13]

Substituting the equation for 𝐸(𝑛) into the equation for 𝑇′(𝑛), the average case for unsuccessful searches can be determined:[13]

𝑇′(𝑛) = ((𝑛 + 1)(⌊log₂(𝑛)⌋ + 2) − 2^{⌊log₂(𝑛)⌋+1})/(𝑛 + 1) = ⌊log₂(𝑛)⌋ + 2 − 2^{⌊log₂(𝑛)⌋+1}/(𝑛 + 1)

Performance of alternative procedure

Each iteration of the binary search procedure defined above makes one or two comparisons, checking if the middle element is equal to the target in each iteration. Assuming that each element is equally likely to be searched, each iteration makes 1.5 comparisons on average. A variation of the algorithm checks whether the middle element is equal to the target only at the end of the search. On average, this eliminates half a comparison from each iteration, which slightly cuts the time taken per iteration on most computers. However, it guarantees that the search takes the maximum number of iterations, on average adding one iteration to the search. Because the comparison loop is performed only ⌊log₂(𝑛) + 1⌋ times in the worst case, the slight increase in efficiency per iteration does not compensate for the extra iteration for all but very large 𝑛.[b][18][19]

Running time and cache use

In analyzing the performance of binary search, another consideration is the time required to compare two elements. For integers and strings, the time required increases linearly as the encoding length (usually the number of bits) of the elements increases.
…locations close to it. For example, when an array element is accessed, the element itself may be stored along with the elements that are stored close to it in RAM, making it faster to sequentially access array elements that are close in index to each other (locality of reference). On a sorted array, binary search can jump to distant memory locations if the array is large, unlike algorithms (such as linear search and linear probing in hash tables) which access elements in sequence. This adds slightly to the running time of binary search for large arrays on most systems.[20]

Binary search versus other schemes

Linear search

Linear search is a simple search algorithm that checks every record until it finds the target value. Linear search can be done on a linked list, which allows for faster insertion and deletion than an array. Binary search is faster than linear search for sorted arrays except if the array is short, although the array needs to be sorted beforehand.[c][24] All sorting algorithms based on comparing elements, such as quicksort and merge sort, require at least 𝑂(𝑛 log 𝑛) comparisons in the worst case.[25] Unlike linear search, binary search can be used for efficient approximate matching. There are operations such as finding the smallest and largest element that can be done efficiently on a sorted array but not on an unsorted array.[26]
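For illustration, the gap between the two can be made concrete by counting probes (a rough Python sketch; the function names are illustrative):

def linear_probes(A, T):
    """Elements a linear scan inspects before finding T (or all of them)."""
    for count, x in enumerate(A, start=1):
        if x == T:
            return count
    return len(A)

def binary_probes(A, T):
    """Middle elements the standard binary search inspects looking for T."""
    L, R, probes = 0, len(A) - 1, 0
    while L <= R:
        probes += 1
        m = (L + R) // 2
        if A[m] < T:
            L = m + 1
        elif A[m] > T:
            R = m - 1
        else:
            break
    return probes

A = list(range(1_000_000))
print(linear_probes(A, 999_999))   # 1000000 probes
print(binary_probes(A, 999_999))   # at most floor(log2(n)) + 1 = 20 probes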
Figure 4 | Binary search trees are searched using an algorithm similar to binary search. Chris Martin, public domain

…binary search trees, the tree may be severely imbalanced with few internal nodes with two children, resulting in the average and worst-case search time approaching 𝑛 comparisons.[d] Binary search trees take more space than sorted arrays.[29]

Binary search trees lend themselves to fast searching in external memory stored in hard disks, as binary search trees can be efficiently structured in filesystems. The B-tree generalizes this method of tree organization. B-trees are frequently used to organize long-term storage such as databases and filesystems.[30][31]
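For illustration, searching a binary search tree, as in Figure 4, can be sketched in Python (a minimal sketch; the Node class is an assumed representation):

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    key: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def bst_search(node, T):
    """Analogue of binary search on a tree: each queried node plays the
    role of the middle element, and one subtree is eliminated per step."""
    while node is not None:
        if T < node.key:
            node = node.left      # target can only be in the left subtree
        elif T > node.key:
            node = node.right     # target can only be in the right subtree
        else:
            return node           # found
    return None                   # reached an external position: not present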
Hashing

For implementing associative arrays, hash tables, a data structure that maps keys to records using a hash function, are generally faster than binary search on a sorted array of records.[32] Most hash table implementations require only amortized constant time on average.[e][34] However, hashing is not useful for approximate matches, such as computing the next-smallest, next-largest, and nearest key, as the only information given on a failed search is that the target is not present in any record.[35] Binary search is ideal for such matches, performing them in logarithmic time. Some operations, like finding the smallest and largest element, can be done efficiently on sorted arrays but not on hash tables.[22]

Other data structures

There exist data structures that may improve on binary search in some cases for both searching and other operations available for sorted arrays. For example, searches, approximate matches, and the operations available to sorted arrays can be performed more efficiently than binary search on specialized data structures such as van Emde Boas trees, fusion trees, tries, and bit arrays. These specialized data structures are usually only faster because they take advantage of the properties of keys with a certain attribute (usually keys that are small integers), and thus will be time or space consuming for keys that lack that attribute.[22] As long as the keys can be ordered, these operations can always be done efficiently on a sorted array regardless of the keys. Some structures, such as Judy arrays, use a combination of approaches to mitigate this while retaining efficiency and the ability to perform approximate matching.[37]

Variations

Uniform binary search

Main article: Uniform binary search

Uniform binary search stores, instead of the lower and upper bounds, the difference in the index of the middle element from the current iteration to the next iteration. A lookup table containing the differences is computed beforehand.
For example, if the array to be searched is [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], the middle element (𝑚) would be 6. In this case, the middle element of the left subarray ([1, 2, 3, 4, 5]) is 3 and the middle element of the right subarray ([7, 8, 9, 10, 11]) is 9. Uniform binary search stores the single value 3 for this step, since both of those indices differ from 6 by the same amount. It may be faster on systems where it is inefficient to calculate the midpoint, such as on decimal computers.[41]
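For illustration, a Python sketch along the lines of Knuth's Algorithm U (the delta formula and the out-of-range handling are our assumptions; the table depends only on the array length, so it can be shared across searches):

def uniform_binary_search(A, T):
    """Uniform binary search sketch: track a single index plus a
    precomputed table of index differences instead of two bounds."""
    n = len(A)
    # deltas[j] = floor((n + 2**j) / 2**(j+1)); the final entry is 0.
    deltas, p = [], 1
    while not deltas or deltas[-1] > 0:
        deltas.append((n + p) // (2 * p))
        p *= 2
    i = deltas[0]                      # 1-based index of the first probe
    for d in deltas[1:]:
        if 1 <= i <= n:
            if A[i - 1] == T:
                return i - 1           # found: return 0-based index
            go_right = A[i - 1] < T
        else:
            go_right = i < 1           # probe fell off an end of the array
        if d == 0:
            return -1                  # no moves left: unsuccessful
        i = i + d if go_right else i - d
    return -1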
Exponential search

Main article: Exponential search

Figure 6 | Visualization of exponential searching finding the upper bound for the subsequent binary search.

Exponential search extends binary search to unbounded lists. It starts by finding the first element with an index that is both a power of two and greater than the target value. Afterwards, it sets that index as the upper bound, and switches to binary search. A search takes ⌊log₂ 𝑥 + 1⌋ iterations before binary search is started and at most ⌊log₂ 𝑥⌋ iterations of the binary search, where 𝑥 is the position of the target value. Exponential search works on bounded lists, but becomes an improvement over binary search only if the target value lies near the beginning of the array.[42]
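For illustration, a Python sketch of exponential search on a bounded sorted list (the function name is illustrative; an unbounded list would replace the length check with an access guard):

import bisect

def exponential_search(A, T):
    """Doubles the probe index until it passes T, then binary-searches
    the bracketed range (here via the standard bisect module)."""
    n = len(A)
    if n == 0:
        return -1
    bound = 1
    while bound < n and A[bound] < T:     # find first power of two past T
        bound *= 2
    lo, hi = bound // 2, min(bound + 1, n)
    i = bisect.bisect_left(A, T, lo, hi)  # binary search within the bracket
    return i if i < n and A[i] == T else -1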
Interpolation search

Main article: Interpolation search

Instead of calculating the midpoint, interpolation search estimates the position of the target value, taking into account the lowest and highest elements in the array as well as the length of the array. It works on the basis that the midpoint is not the best guess in many cases. For example, if the target value is close to the highest element in the array, it is likely to be located near the end of the array.[43] If the distribution of the elements is uniform, interpolation search makes 𝑂(log log 𝑛) comparisons.[43][44][45]

In practice, interpolation search is slower than binary search for small arrays, as interpolation search requires extra computation. Its time complexity grows more slowly than that of binary search, but this only compensates for the extra computation for large arrays.[43]
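For illustration, a Python sketch using linear interpolation as the estimator (one common choice; as the figure caption below notes, other implementations may estimate differently):

def interpolation_search(A, T):
    """Interpolation search on a sorted list of numbers: estimates the
    target's position by linear interpolation between A[L] and A[R]."""
    L, R = 0, len(A) - 1
    while L <= R and A[L] <= T <= A[R]:
        if A[L] == A[R]:                  # all-equal span: avoid dividing by zero
            break
        # Estimate the position from the target's value, the endpoints,
        # and the length of the remaining span.
        m = L + (T - A[L]) * (R - L) // (A[R] - A[L])
        if A[m] < T:
            L = m + 1
        elif A[m] > T:
            R = m - 1
        else:
            return m
    if L <= R and A[L] == T:
        return L
    return -1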
Figure 7 | Visualization of interpolation search. In this case, no searching is needed because the estimate of the target's location within the array is correct. Other implementations may specify another function for estimating the target's location.

Fractional cascading

Main article: Fractional cascading

Fractional cascading is a technique that speeds up binary searches for the same element in multiple sorted arrays. Searching each array separately requires 𝑂(𝑘 log 𝑛) time, where 𝑘 is the number of arrays. Fractional cascading reduces this to 𝑂(𝑘 + log 𝑛) by storing specific information in each array about each element and its position in the other arrays.[46][47]

Fractional cascading was originally developed to efficiently solve various computational geometry problems. Fractional cascading has been applied elsewhere, such as in data mining and Internet Protocol routing.[46]
Figure 8 | In fractional cascading, each array has pointers to every second element of another array, so only one
binary search has to be performed to search all the arrays.
Generalization to graphs

Binary search has been generalized to work on certain types of graphs, where the target value is stored in a vertex instead of an array element. Binary search trees are one such generalization—when a vertex (node) in the tree is queried, the algorithm either learns that the vertex is the target, or otherwise which subtree the target would be located in. However, this can be further generalized as follows: given an undirected, positively weighted graph and a target vertex, the algorithm learns upon querying a vertex that it is equal to the target, or it is given an incident edge that is on the shortest path from the queried vertex to the target. The standard binary search algorithm is simply the case where the graph is a path. Similarly, binary search trees are the case where the edges to the left or right subtrees are given when the queried vertex is unequal to the target. For all undirected, positively weighted graphs, there is an algorithm that finds the target vertex in 𝑂(log 𝑛) queries in the worst case.[48]
Noisy binary search

Noisy binary search algorithms solve the case where the algorithm cannot reliably compare elements of the array. For each pair of elements, there is a certain probability that the algorithm makes the wrong comparison. Noisy binary search can find the correct position of the target with a given probability that controls the reliability of the yielded position. Every noisy binary search procedure must make at least (1 − τ) log₂(𝑛)/𝐻(𝑝) − 10/𝐻(𝑝) comparisons on average, where 𝐻(𝑝) is the binary entropy function and τ is the probability that the procedure yields the wrong position.[49][50][51] The noisy binary search problem can be considered as a case of the Rényi-Ulam game,[52] a variant of Twenty Questions where the answers may be wrong.[53]

Quantum binary search

Classical computers are bounded to the worst case of exactly ⌊log₂(𝑛) + 1⌋ iterations when performing binary search. Quantum algorithms for binary search are still bounded to a proportion of log₂(𝑛) queries (representing iterations of the classical procedure), but the constant factor is less than one, providing for a lower time complexity on quantum computers. Any exact quantum binary search procedure—that is, a procedure that always yields the correct result—requires at least (1/π)(ln 𝑛 − 1) ≈ 0.22 log₂(𝑛) queries in the worst case, where ln is the natural logarithm.[54] There is an exact quantum binary search procedure that runs in 4 log₆₀₅(𝑛) ≈ 0.433 log₂(𝑛) queries in the worst case.[55] In comparison, Grover's algorithm is the optimal quantum algorithm for searching an unordered list of elements, and it requires 𝑂(√𝑛) queries.[56]

History

The idea of sorting a list of items to allow for faster searching dates back to antiquity. The earliest known example was the Inakibit-Anu tablet from Babylon dating back to c. 200 BCE. The tablet contained about 500 sexagesimal numbers and their reciprocals sorted in lexicographical order, which made searching for a specific entry easier. In addition, several lists of names that were sorted by their first letter were discovered on the Aegean Islands. Catholicon, a Latin dictionary finished in 1286 CE, was the first work to describe rules for sorting words into alphabetical order, as opposed to just the first few letters.[8]

In 1946, John Mauchly made the first mention of binary search as part of the Moore School Lectures, a seminal and foundational college course in computing.[8] In 1957, William Wesley Peterson published the first method for interpolation search.[8][57]
Every published binary search algorithm worked only for arrays whose length is one less than a power of two[h] until 1960, when Derrick Henry Lehmer published a binary search algorithm that worked on all arrays.[59] In 1962, Hermann Bottenbruch presented an ALGOL 60 implementation of binary search that placed the comparison for equality at the end, increasing the average number of iterations by one, but reducing the number of comparisons per iteration to one.[7] The uniform binary search was developed by A. K. Chandra of Stanford University in 1971.[8] In 1986, Bernard Chazelle and Leonidas J. Guibas introduced fractional cascading as a method of solving numerous search problems in computational geometry.[60][61]
Implementation issues

In a practical implementation, the variables used to represent the indices will often be of fixed size, and this can result in an arithmetic overflow for very large arrays. If the midpoint of the span is calculated as (𝐿 + 𝑅)/2, then the value of 𝐿 + 𝑅 may exceed the range of integers of the data type used to store the midpoint, even if 𝐿 and 𝑅 are within the range. If 𝐿 and 𝑅 are nonnegative, this can be avoided by calculating the midpoint as 𝐿 + (𝑅 − 𝐿)/2.[65]
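Python's integers are arbitrary-precision, so the overflow cannot occur there natively; the sketch below simulates 32-bit arithmetic to show why 𝐿 + (𝑅 − 𝐿)/2 is the safer form (the helper name is illustrative):

INT32_MAX = 2**31 - 1

def add_int32(a, b):
    """Simulate 32-bit two's-complement addition (illustrative helper)."""
    s = (a + b) & 0xFFFFFFFF
    return s - 2**32 if s > INT32_MAX else s

L = R = INT32_MAX - 1
print(add_int32(L, R) // 2)   # -2: the sum L + R wrapped around to a negative value
print(L + (R - L) // 2)       # 2147483646: the safe form stays in range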
An infinite loop may occur if the exit conditions for the loop are not defined correctly. Once 𝐿 exceeds 𝑅, the search has failed and must convey the failure of the search. In addition, the loop must be exited when the target element is found, or, in the case of an implementation where this check is moved to the end, checks for whether the search was successful or failed at the end must be in place. Bentley found that most of the programmers who incorrectly implemented binary search made an error in defining the exit conditions.[7][66]

Library support

Many languages' standard libraries include binary search routines:

• C provides the function bsearch() in its standard library, which is typically implemented via binary search, although the official standard does not require it.[67]
• COBOL provides the SEARCH ALL verb for performing binary searches on COBOL ordered tables.[69]
• For Objective-C, the Cocoa framework provides the NSArray -indexOfObject:inSortedRange:options:usingComparator: method in Mac OS X 10.6+.[74] Apple's Core Foundation C framework also contains a CFArrayBSearchValues() function.[75]
• Python provides the bisect module.[76]
• Ruby's Array class includes a bsearch method with built-in approximate matching.[77]
References

16. Shannon, Claude E. (July 1948). "A Mathematical Theory of Communication". Bell System Technical Journal 27 (3): 379–423. doi:10.1002/j.1538-7305.1948.tb01338.x.
17. Knuth 1997, §2.3.4.5 ("Path length").
18. Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "Exercise 23".
19. Rolfe, Timothy J. (1997). "Analytic derivation of comparisons in binary search". ACM SIGNUM Newsletter 32 (4): 15–19. doi:10.1145/289251.289255.
20. Khuong, Paul-Virak; Morin, Pat. "Array Layouts for Comparison-Based Searching". Journal of Experimental Algorithmics (Article 1.3) 22.
21. Knuth 1997, §2.2.2 ("Sequential Allocation").
22. Beame, Paul; Fich, Faith E. (2001). "Optimal bounds for the predecessor problem and related problems". Journal of Computer and System Sciences 65 (1): 38–72. doi:10.1006/jcss.2002.1822.
23. Knuth 1998, Answers to Exercises (§6.2.1) for "Exercise 5".
24. Knuth 1998, §6.2.1 ("Searching an ordered table").
25. Knuth 1998, §5.3.1 ("Minimum-Comparison sorting").
26. Sedgewick & Wayne 2011, §3.2 ("Ordered symbol tables").
27. Sedgewick & Wayne 2011, §3.2 ("Binary Search Trees"), subsection "Order-based methods and deletion".
28. Knuth 1998, §6.2.2 ("Binary tree searching"), subsection "But what about the worst case?".
29. Sedgewick & Wayne 2011, §3.5 ("Applications"), "Which symbol-table implementation should I use?".
30. Knuth 1998, §5.4.9 ("Disks and Drums").
31. Knuth 1998, §6.2.4 ("Multiway trees").
32. Knuth 1998, §6.4 ("Hashing").
33. Knuth 1998, §6.4 ("Hashing"), subsection "History".
34. Dietzfelbinger, Martin; Karlin, Anna; Mehlhorn, Kurt; Meyer auf der Heide, Friedhelm; Rohnert, Hans; Tarjan, Robert E. (August 1994). "Dynamic perfect hashing: upper and lower bounds". SIAM Journal on Computing 23 (4): 738–761. doi:10.1137/S0097539791194094.
35. Morin, Pat. "Hash tables" (PDF). p. 1. Retrieved 28 March 2016.
36. Knuth 2011, §7.1.3 ("Bitwise Tricks and Techniques").
37. Silverstein, Alan, Judy IV shop manual, Hewlett-Packard.
38. Fan, Bin; Andersen, Dave G.; Kaminsky, Michael; Mitzenmacher, Michael D. (2014). Cuckoo filter: practically better than Bloom. Proceedings of the 10th ACM International on Conference on Emerging Networking Experiments and Technologies. pp. 75–88. doi:10.1145/2674005.2674994.
39. Bloom, Burton H. (1970). "Space/time trade-offs in hash coding with allowable errors". Communications of the ACM 13 (7): 422–426. doi:10.1145/362686.362692.
40. Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "An important variation".
41. Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "Algorithm U".
42. Moffat & Turpin 2002, p. 33.
43. Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "Interpolation search".
44. Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "Exercise 22".
45. Perl, Yehoshua; Itai, Alon; Avni, Haim (1978). "Interpolation search—a log log n search". Communications of the ACM 21 (7): 550–553. doi:10.1145/359545.359557.
46. Chazelle, Bernard; Liu, Ding (6 July 2001). Lower bounds for intersection searching and fractional cascading in higher dimension. 33rd ACM Symposium on Theory of Computing. ACM. pp. 322–329. doi:10.1145/380752.380818. ISBN 978-1-58113-349-3. Retrieved 30 June 2018.
47. Chazelle, Bernard; Liu, Ding (1 March 2004). "Lower bounds for intersection searching and fractional cascading in higher dimension". Journal of Computer and System Sciences 68 (2): 269–284. doi:10.1016/j.jcss.2003.07.003. ISSN 0022-0000. Retrieved 30 June 2018.
48. Emamjomeh-Zadeh, Ehsan; Kempe, David; Singhal, Vikrant (2016). Deterministic and probabilistic binary search in graphs. 48th ACM Symposium on Theory of Computing. pp. 519–532. arXiv:1503.00805. doi:10.1145/2897518.2897656.
49. Ben-Or, Michael; Hassidim, Avinatan (2008). "The Bayesian learner is optimal for noisy binary search (and pretty good for quantum as well)" (PDF). 49th Symposium on Foundations of Computer Science. pp. 221–230. doi:10.1109/FOCS.2008.58. ISBN 978-0-7695-3436-7.
50. Pelc, Andrzej (1989). "Searching with known error probability". Theoretical Computer Science 63 (2): 185–202. doi:10.1016/0304-3975(89)90077-7.
51. Rivest, Ronald L.; Meyer, Albert R.; Kleitman, Daniel J.; Winklmann, K. Coping with errors in binary search procedures. 10th ACM Symposium on Theory of Computing. doi:10.1145/800133.804351.
52. Pelc, Andrzej (2002). "Searching games with errors—fifty years of coping with liars". Theoretical Computer Science 270 (1–2): 71–109. doi:10.1016/S0304-3975(01)00303-6.
53. Rényi, Alfréd (1961). "On a problem in information theory" (in Hungarian). Magyar Tudományos Akadémia Matematikai Kutató Intézetének Közleményei 6: 505–516.
54. Høyer, Peter; Neerbek, Jan; Shi, Yaoyun (2002). "Quantum complexities of ordered searching, sorting, and element distinctness". Algorithmica 34 (4): 429–448. doi:10.1007/s00453-002-0976-3.
55. Childs, Andrew M.; Landahl, Andrew J.; Parrilo, Pablo A. (2007). "Quantum algorithms for the ordered search problem via semidefinite programming". Physical Review A 75 (3): 032335. doi:10.1103/PhysRevA.75.032335.
56. Grover, Lov K. (1996). A fast quantum mechanical algorithm for database search. 28th ACM Symposium on Theory of Computing. Philadelphia, PA. pp. 212–219. arXiv:quant-ph/9605043. doi:10.1145/237814.237866.
57. Peterson, William Wesley (1957). "Addressing for random-access storage". IBM Journal of Research and Development 1 (2): 130–146. doi:10.1147/rd.12.0130.
58. "2ⁿ−1". OEIS A000225. Retrieved 7 May 2016.
59. Lehmer, Derrick (1960). Teaching combinatorial tricks to a computer. Proceedings of Symposia in Applied Mathematics. 10. pp. 180–181. doi:10.1090/psapm/010.
60. Chazelle, Bernard; Guibas, Leonidas J. (1986). "Fractional cascading: I. A data structuring technique". Algorithmica 1 (1): 133–162. doi:10.1007/BF01840440.
61. Chazelle, Bernard; Guibas, Leonidas J. (1986). "Fractional cascading: II. Applications". Algorithmica 1 (1). doi:10.1007/BF01840441.
62. Bentley 2000, §4.1 ("The Challenge of Binary Search").
63. Pattis, Richard E. (1988). "Textbook errors in binary searching". SIGCSE Bulletin 20: 190–194. doi:10.1145/52965.53012.
64. Bloch, Joshua (2 June 2006). "Extra, extra – read all about it: nearly all binary searches and mergesorts are broken". Google Research Blog. Retrieved 21 April 2016.
65. Ruggieri, Salvatore (2003). "On computing the semi-sum of two integers". Information Processing Letters 87 (2): 67–71. doi:10.1016/S0020-0190(03)00263-1.
66. Bentley 2000, §4.4 ("Principles").
67. "bsearch – binary search a sorted table". The Open Group Base Specifications (7th ed.). The Open Group. 2013. Retrieved 28 March 2016.
68. Stroustrup 2013, p. 945.
69. Unisys (2012), COBOL ANSI-85 programming reference manual, 1.
70. "Package sort". The Go Programming Language. Retrieved 28 April 2016.
71. "java.util.Arrays". Java Platform Standard Edition 8 Documentation. Oracle Corporation. Retrieved 1 May 2016.
72. "java.util.Collections". Java Platform Standard Edition 8 Documentation. Oracle Corporation. Retrieved 1 May 2016.
73. "List<T>.BinarySearch method (T)". Microsoft Developer Network. Retrieved 10 April 2016.
74. "NSArray". Mac Developer Library. Apple Inc. Retrieved 1 May 2016.
75. "CFArray". Mac Developer Library. Apple Inc. Retrieved 1 May 2016.
76. "8.6. bisect — Array bisection algorithm". The Python Standard Library. Python Software Foundation. Retrieved 26 March 2018.
77. Fitzgerald 2007, p. 152.
Works

• Bentley, Jon (2000). Programming pearls (2nd ed.). Addison-Wesley. ISBN 978-0-201-65788-3.
• Butterfield, Andrew; Ngondi, Gerard E. (2016). A dictionary of computer science (7th ed.). Oxford, UK: Oxford University Press. ISBN 978-0-19-968897-5.
• Chang, Shi-Kuo (2003). Data structures and algorithms. Software Engineering and Knowledge Engineering. 13. Singapore: World Scientific. ISBN 978-981-238-348-8.
• Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009). Introduction to algorithms (3rd ed.). MIT Press and McGraw-Hill. ISBN 978-0-262-03384-8.
• Fitzgerald, Michael (2007). Ruby pocket reference. Sebastopol, California: O'Reilly Media. ISBN 978-1-4919-2601-7.
• Goldman, Sally A.; Goldman, Kenneth J. (2008). A practical guide to data structures and algorithms using Java. Boca Raton, Florida: CRC Press. ISBN 978-1-58488-455-2.
• Kasahara, Masahiro; Morishita, Shinichi (2006). Large-scale genome sequence processing. London, UK: Imperial College Press. ISBN 978-1-86094-635-6.
• Knuth, Donald (1997). Fundamental algorithms. The Art of Computer Programming. 1 (3rd ed.). Reading, MA: Addison-Wesley Professional. ISBN 978-0-201-89683-1.
• Knuth, Donald (1998). Sorting and searching. The Art of Computer Programming. 3 (2nd ed.). Reading, MA: Addison-Wesley Professional. ISBN 978-0-201-89685-5.
• Knuth, Donald (2011). Combinatorial algorithms. The Art of Computer Programming. 4A (1st ed.). Reading, MA: Addison-Wesley Professional. ISBN 978-0-201-03804-0.
• Moffat, Alistair; Turpin, Andrew (2002). Compression and coding algorithms. Hamburg, Germany: Kluwer Academic Publishers. doi:10.1007/978-1-4615-0935-6. ISBN 978-0-7923-7668-2.
• Sedgewick, Robert; Wayne, Kevin (2011). Algorithms (4th ed.). Upper Saddle River, New Jersey: Addison-Wesley Professional. ISBN 978-0-321-57351-3.
• Stroustrup, Bjarne (2013). The C++ programming language (4th ed.). Upper Saddle River, New Jersey: Addison-Wesley Professional. ISBN 978-0-321-56384-2.