Logarithm
[Figure: Plots of logarithm functions, with three commonly used bases. The special points logb b = 1 are indicated by dotted lines, and all curves intersect at logb 1 = 0.]

The logarithm base 10 is called the decimal or common logarithm and is commonly used in science and engineering. The natural logarithm has the number e ≈ 2.718 as its base; its use is widespread in mathematics and physics because of its very simple derivative. The binary logarithm uses base 2 and is widely used in computer science, information theory, music theory, and photography. When the base is unambiguous from the context or irrelevant it is often omitted, and the logarithm is written log x.
Logarithms were introduced by John Napier in 1614 as a means of simplifying calculations.[1] They were
rapidly adopted by navigators, scientists, engineers, surveyors, and others to perform high-accuracy
computations more easily. Using logarithm tables, tedious multi-digit multiplication steps can be replaced
by table look-ups and simpler addition. This is possible because the logarithm of a product is the sum of
the logarithms of the factors:

$\log_b(xy) = \log_b x + \log_b y,$

provided that b, x and y are all positive and b ≠ 1. The slide rule, also based on logarithms, allows quick
calculations without tables, but at lower precision. The present-day notion of logarithms comes from
Leonhard Euler, who connected them to the exponential function in the 18th century, and who also
introduced the letter e as the base of natural logarithms.[2]
Logarithmic scales reduce wide-ranging quantities to smaller scopes. For example, the decibel (dB) is a
unit used to express ratios as logarithms, mostly for signal power and amplitude (of which sound pressure
is a common example). In chemistry, pH is a logarithmic measure for the acidity of an aqueous solution.
Logarithms are commonplace in scientific formulae, and in measurements of the complexity of
algorithms and of geometric objects called fractals. They help to describe frequency ratios of musical
intervals, appear in formulas counting prime numbers or approximating factorials, inform some models in
psychophysics, and can aid in forensic accounting.
The concept of logarithm as the inverse of exponentiation extends to other mathematical structures as
well. However, in general settings, the logarithm tends to be a multi-valued function. For example, the
complex logarithm is the multi-valued inverse of the complex exponential function. Similarly, the
discrete logarithm is the multi-valued inverse of the exponential function in finite groups; it has uses in
public-key cryptography.
Motivation
Addition, multiplication, and exponentiation are three
of the most fundamental arithmetic operations. The
inverse of addition is subtraction, and the inverse of
multiplication is division. Similarly, a logarithm is the
inverse operation of exponentiation. Exponentiation is
when a number b, the base, is raised to a certain
power y, the exponent, to give a value x; this is
denoted

$b^y = x.$

The logarithm of x to base b is the inverse operation, recovering the exponent y from x and b. The product rule quoted above is the identity by which tables of logarithms allow multiplication and division to be reduced to addition and subtraction, a great aid to calculations before the invention of computers.
Definition
Given a positive real number b such that b ≠ 1, the logarithm of a positive real number x with respect to
base b[nb 1] is the exponent by which b must be raised to yield x. In other words, the logarithm of x to
base b is the unique real number y such that b^y = x.[3]
The logarithm is denoted "logb x" (pronounced as "the logarithm of x to base b", "the base-b logarithm
of x", or most commonly "the log, base b, of x").
An equivalent and more succinct definition is that the function logb is the inverse function to the function x ↦ b^x.
Examples
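For instance, the following values follow directly from the definition (illustrative instances, not an exhaustive list):

$\log_2 16 = 4$, since $2^4 = 2 \cdot 2 \cdot 2 \cdot 2 = 16$;
$\log_{10} 1000 = 3$, since $10^3 = 1000$;
$\log_b 1 = 0$ for any admissible base $b$, since $b^0 = 1$;
$\log_{10} 0.001 = -3$, since $10^{-3} = 0.001$.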
Logarithmic identities
Several important formulas, sometimes called logarithmic identities or logarithmic laws, relate
logarithms to one another.[4]
Product

$\log_b(xy) = \log_b x + \log_b y$

Quotient

$\log_b\!\left(\frac{x}{y}\right) = \log_b x - \log_b y$

Power

$\log_b\!\left(x^p\right) = p \log_b x$

Root

$\log_b \sqrt[p]{x} = \frac{\log_b x}{p}$
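As a quick numerical illustration of these identities, the following is a minimal Python sketch using the standard math module; the particular values of x, y, b and p are arbitrary:

```python
import math

x, y, b, p = 8.0, 32.0, 2.0, 3.0
log_b = lambda v: math.log(v, b)  # base-b logarithm

# Product: log_b(xy) = log_b(x) + log_b(y)
print(log_b(x * y), log_b(x) + log_b(y))      # both ~ 8.0
# Quotient: log_b(x/y) = log_b(x) - log_b(y)
print(log_b(x / y), log_b(x) - log_b(y))      # both ~ -2.0
# Power: log_b(x**p) = p * log_b(x)
print(log_b(x ** p), p * log_b(x))            # both ~ 9.0
# Root: log_b(x**(1/p)) = log_b(x) / p
print(log_b(x ** (1 / p)), log_b(x) / p)      # both ~ 1.0
```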
Change of base
The logarithm logb x can be computed from the logarithms of x and b with respect to an arbitrary base k using the following formula:[nb 2]

$\log_b x = \frac{\log_k x}{\log_k b}$
Typical scientific calculators calculate the logarithms to bases 10 and e.[5] Logarithms with respect to any base b can be determined using either of these two logarithms by the previous formula:

$\log_b x = \frac{\log_{10} x}{\log_{10} b} = \frac{\ln x}{\ln b}$
Given a number x and its logarithm y = logb x to an unknown base b, the base is given by:

$b = x^{1/y},$

which can be seen from taking the defining equation x = b^y to the power of 1/y.
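The change-of-base formula is how a base-b logarithm is obtained in practice from natural or common logarithms. A small Python sketch (math.log also accepts an optional base argument, shown for comparison):

```python
import math

def log_base(x: float, b: float) -> float:
    """Base-b logarithm computed from natural logarithms (change of base)."""
    return math.log(x) / math.log(b)

print(log_base(5986, 10), math.log10(5986))   # both ~ 3.777
print(log_base(64, 2), math.log(64, 2))       # both 6.0

# Recovering an unknown base b from x and y = log_b(x): b = x**(1/y)
x, y = 81.0, 4.0
print(x ** (1 / y))                           # 3.0, so the base was 3
```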
Particular bases
Among all choices for the base, three are particularly
common. These are b = 10, b = e (the irrational
mathematical constant e ≈ 2.71828183 ), and b = 2 (the
binary logarithm). In mathematical analysis, the logarithm
base e is widespread because of analytical properties
explained below. On the other hand, base 10 logarithms (the common logarithm) are easy to use for manual calculations in the decimal number system:[6]

$\log_{10}(10x) = \log_{10} x + 1.$
Thus, log10 (x) is related to the number of decimal digits of a positive integer x: The number of digits is
the smallest integer strictly bigger than log10(x).[7] For example, log10(5986) is approximately 3.78.
The next integer above it is 4, which is the number of digits of 5986. Both the natural logarithm and the
binary logarithm are used in information theory, corresponding to the use of nats or bits as the
fundamental units of information, respectively.[8] Binary logarithms are also used in computer science,
where the binary system is ubiquitous; in music theory, where a pitch ratio of two (the octave) is
ubiquitous and the number of cents between any two pitches is a scaled version of the binary logarithm, namely 1200 times the binary logarithm of the pitch ratio (that is, 100 cents per semitone in conventional equal temperament), or equivalently the logarithm to base 2^(1/1200); and in photography rescaled base-2 logarithms are
used to measure exposure values, light levels, exposure times, lens apertures, and film speeds in
"stops".[9]
The abbreviation log x is often used when the intended base can be inferred based on the context or
discipline, or when the base is indeterminate or immaterial. Common logarithms (base 10), historically
used in logarithm tables and slide rules, are a basic tool for measurement and computation in many areas
of science and engineering; in these contexts log x still often means the base ten logarithm.[10] In
mathematics log x usually refers to the natural logarithm (base e).[11] In computer science and
information theory, log often refers to binary logarithms (base 2).[12] The following table lists common notations for logarithms to these bases. The "ISO notation" column lists designations suggested by the International Organization for Standardization.[13]

Base b | Name of logb x      | ISO notation
2      | binary logarithm    | lb x
e      | natural logarithm   | ln x
10     | common logarithm    | lg x
History
The history of logarithms in seventeenth-century Europe saw the discovery of a new function that
extended the realm of analysis beyond the scope of algebraic methods. The method of logarithms was
publicly propounded by John Napier in 1614, in a book titled Mirifici Logarithmorum Canonis Descriptio
(Description of the Wonderful Canon of Logarithms).[19][20] Prior to Napier's invention, there had been
other techniques of similar scopes, such as the prosthaphaeresis or the use of tables of progressions,
extensively developed by Jost Bürgi around 1600.[21][22] Napier coined the term for logarithm in Middle
Latin, logarithmus, literally meaning 'ratio-number', derived from the Greek logos 'proportion, ratio,
word' + arithmos 'number'.
The common logarithm of a number is the index of that power of ten which equals the number.[23]
Speaking of a number as requiring so many figures is a rough allusion to common logarithm, and was
referred to by Archimedes as the "order of a number".[24] The first real logarithms were heuristic methods
to turn multiplication into addition, thus facilitating rapid computation. Some of these methods used
tables derived from trigonometric identities.[25] Such methods are called prosthaphaeresis.
Invention of the function now known as the natural logarithm began as an attempt to perform a
quadrature of a rectangular hyperbola by Grégoire de Saint-Vincent, a Belgian Jesuit residing in Prague.
Archimedes had written The Quadrature of the Parabola in the third century BC, but a quadrature for the
hyperbola eluded all efforts until Saint-Vincent published his results in 1647. The relation that the
logarithm provides between a geometric progression in its argument and an arithmetic progression of
values, prompted A. A. de Sarasa to make the connection of Saint-Vincent's quadrature and the tradition
of logarithms in prosthaphaeresis, leading to the term "hyperbolic logarithm", a synonym for natural
logarithm. Soon the new function was appreciated by Christiaan Huygens and James Gregory. The notation Log y was adopted by Leibniz in 1675,[26] and the next year he connected it to the integral

$\int \frac{dy}{y}.$
Before Euler developed his modern conception of complex natural logarithms, Roger Cotes had a nearly equivalent result when he showed in 1714 that[27]

$\ln(\cos\theta + i\sin\theta) = i\theta.$
As the function f(x) = bx is the inverse function of logb x, it has been called an antilogarithm.[29]
Nowadays, this function is more commonly called an exponential function.
Log tables
A key tool that enabled the practical use of logarithms was the table of logarithms.[30] The first such table
was compiled by Henry Briggs in 1617, immediately after Napier's invention but with the innovation of
using 10 as the base. Briggs' first table contained the common logarithms of all integers in the range from
1 to 1000, with a precision of 14 digits. Subsequently, tables with increasing scope were written. These
tables listed the values of log10 x for any number x in a certain range, at a certain precision. Base-10
logarithms were universally used for computation, hence the name common logarithm, since numbers
that differ by factors of 10 have logarithms that differ by integers. The common logarithm of x can be
separated into an integer part and a fractional part, known as the characteristic and mantissa. Tables of
logarithms need only include the mantissa, as the characteristic can be easily determined by counting
digits from the decimal point.[31] The characteristic of 10 · x is one plus the characteristic of x, and their mantissas are the same. Thus using a three-digit log table, the logarithm of 3542 is approximated by

$\log_{10} 3542 = \log_{10}(1000 \cdot 3.542) = 3 + \log_{10} 3.542 \approx 3 + \log_{10} 3.54.$
Computations
The product and quotient of two positive numbers c and d were routinely calculated as the sum and
difference of their logarithms. The product cd or quotient c/d came from looking up the antilogarithm of the sum or difference, via the same table:

$cd = b^{\log_b c}\, b^{\log_b d} = b^{\log_b c + \log_b d}$

and

$\frac{c}{d} = c\, d^{-1} = b^{\log_b c - \log_b d}.$
For manual calculations that demand any appreciable precision, performing the lookups of the two
logarithms, calculating their sum or difference, and looking up the antilogarithm is much faster than
performing the multiplication by earlier methods such as prosthaphaeresis, which relies on trigonometric
identities.
Calculations of powers and roots are reduced to multiplications or divisions and lookups by

$c^d = \left(b^{\log_b c}\right)^{d} = b^{d \log_b c}$

and

$\sqrt[d]{c} = c^{1/d} = b^{\frac{1}{d}\log_b c}.$
Trigonometric calculations were facilitated by tables that contained the common logarithms of
trigonometric functions.
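The following sketch mimics the table workflow in Python: the logarithms of the factors are looked up (here computed, then rounded to emulate a finite-precision printed table), added, and the antilogarithm of the sum gives the approximate product. The four-decimal rounding is an arbitrary stand-in for a table's precision:

```python
import math

def table_log(x: float, places: int = 4) -> float:
    """Common logarithm rounded as a printed log table would list it."""
    return round(math.log10(x), places)

c, d = 3542.0, 0.866
log_sum = table_log(c) + table_log(d)   # addition replaces multiplication
product = 10 ** log_sum                 # antilogarithm look-up
print(product, c * d)                   # approximately equal; the rounding introduces a small error
```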
Slide rules
Another critical application was the slide rule, a pair of logarithmically divided scales used for
calculation. The non-sliding logarithmic scale, Gunter's rule, was invented shortly after Napier's
invention. William Oughtred enhanced it to create the slide rule—a pair of logarithmic scales movable
with respect to each other. Numbers are placed on sliding scales at distances proportional to the
differences between their logarithms. Sliding the upper scale appropriately amounts to mechanically
adding logarithms, as illustrated here:
Schematic depiction of a slide rule. Starting from 2 on the lower scale, add the distance to 3
on the upper scale to reach the product 6. The slide rule works because it is marked such
that the distance from 1 to x is proportional to the logarithm of x.
For example, adding the distance from 1 to 2 on the lower scale to the distance from 1 to 3 on the upper
scale yields a product of 6, which is read off at the lower part. The slide rule was an essential calculating
tool for engineers and scientists until the 1970s, because it allows, at the expense of precision, much
faster computation than techniques based on tables.[32]
Analytic properties
A deeper study of logarithms requires the concept of a function. A function is a rule that, given one
number, produces another number.[33] An example is the function producing the x-th power of b from
any real number x, where the base b is a fixed number. This function is written as f(x) = b^x. When b is positive and unequal to 1, we show below that f is invertible when considered as a function from the reals
to the positive reals.
Existence
Let b be a positive real number not equal to 1 and let f(x) = b^x.
It is a standard result in real analysis that any continuous strictly monotonic function is bijective between
its domain and range. This fact follows from the intermediate value theorem.[34] Now, f is strictly
increasing (for b > 1), or strictly decreasing (for 0 < b < 1),[35] is continuous, has domain ℝ, and has range ℝ>0, the positive real numbers. Therefore, f is a bijection from ℝ to ℝ>0. In other words, for each positive real number y, there is exactly one real number x such that b^x = y.

We let logb : ℝ>0 → ℝ denote the inverse of f. That is, logb y is the unique real number x such that b^x = y. This function is called the base-b logarithm function or logarithmic function (or just logarithm).
More precisely, the logarithm to any base b > 1 is the only increasing function f from the positive reals to the reals satisfying f(b) = 1 and[36]

$f(xy) = f(x) + f(y).$
[Figure: The graph of the natural logarithm (green) and its tangent at x = 1.5 (black).]

The derivative of the base-b logarithm is

$\frac{d}{dx}\log_b x = \frac{1}{x \ln b}.$

That is, the slope of the tangent touching the graph of the base-b logarithm at the point (x, logb(x)) equals 1/(x ln(b)).

The derivative of ln(x) is 1/x; this implies that ln(x) is the unique antiderivative of 1/x that has the value 0 for x = 1. It is this very simple formula that motivated calling the natural logarithm "natural"; this is also one of the main reasons for the importance of the constant e.
For a differentiable function f with positive values, the chain rule gives

$\frac{d}{dx}\ln f(x) = \frac{f'(x)}{f(x)}.$

The quotient at the right hand side is called the logarithmic derivative of f. Computing f'(x) by means of the derivative of ln(f(x)) is known as logarithmic differentiation.[38] The antiderivative of the natural logarithm ln(x) is:[39]

$\int \ln(x)\,dx = x\ln(x) - x + C.$
Related formulas, such as antiderivatives of logarithms to other bases can be derived from this equation
using the change of bases.[40]
[Figure: The natural logarithm of t is the shaded area underneath the graph of the function f(x) = 1/x.]

The natural logarithm of t can also be defined as the definite integral

$\ln t = \int_1^t \frac{1}{x}\,dx.$

This definition has the advantage that it does not rely on the exponential function or any trigonometric functions; the definition is in terms of an integral of a simple reciprocal. As an integral, ln(t) equals the area between the x-axis and the graph of the function 1/x, ranging from x = 1 to x = t. This is a consequence
of the fundamental theorem of calculus and the fact that the derivative of ln(x) is 1/x. Product and power
logarithm formulas can be derived from this definition.[41] For example, the product formula
ln(tu) = ln(t) + ln(u) is deduced as:

$\ln(tu) = \int_1^{tu} \frac{1}{x}\,dx \;\overset{(1)}{=}\; \int_1^{t} \frac{1}{x}\,dx + \int_t^{tu} \frac{1}{x}\,dx \;\overset{(2)}{=}\; \ln(t) + \int_1^{u} \frac{1}{w}\,dw = \ln(t) + \ln(u).$

The equality (1) splits the integral into two parts, while the equality (2) is a change of variable (w = x/t).
In the illustration below, the splitting corresponds to dividing the area into the yellow and blue parts.
Rescaling the left hand blue area vertically by the factor t and shrinking it by the same factor horizontally
does not change its size. Moving it appropriately, the area fits the graph of the function f(x) = 1/x again.
Therefore, the left hand blue area, which is the integral of f(x) from t to tu is the same as the integral
from 1 to u. This justifies the equality (2) with a more geometric proof.
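A minimal numerical illustration of the integral definition and of the product rule it implies, in Python with a crude midpoint rule (the step count is an arbitrary choice):

```python
import math

def ln_via_integral(t: float, steps: int = 100_000) -> float:
    """Approximate ln(t) as the area under 1/x from 1 to t (midpoint rule)."""
    h = (t - 1.0) / steps
    return sum(h / (1.0 + (i + 0.5) * h) for i in range(steps))

t, u = 2.0, 3.0
print(ln_via_integral(t * u), ln_via_integral(t) + ln_via_integral(u))  # both ~ ln(6)
print(math.log(6.0))                                                    # reference value
```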
The sum

$1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} = \sum_{k=1}^{n} \frac{1}{k}$

is called the harmonic series. It is closely tied to the natural logarithm: as n tends to infinity, the difference

$\sum_{k=1}^{n} \frac{1}{k} - \ln(n)$

converges (i.e. gets arbitrarily close) to a number known as the Euler–Mascheroni constant γ = 0.5772.... This relation aids in analyzing the performance of algorithms such as quicksort.[42]
Calculation
[Figure: The logarithm keys (LOG for base 10 and LN for base e) on a TI-83 Plus graphing calculator.]

Logarithms are easy to compute in some cases, such as log10(1000) = 3. In general, logarithms can be calculated using power series or the arithmetic–geometric mean, or be retrieved from a precalculated logarithm table that provides a fixed precision.[45][46] Newton's method, an iterative method to solve equations approximately, can also be used to calculate the logarithm, because its inverse function, the exponential function, can be computed efficiently.[47] Using look-up tables, CORDIC-like methods can be used to compute logarithms by using only the operations of addition and bit shifts.[48][49] Moreover, the binary logarithm algorithm calculates lb(x) recursively, based on repeated squarings of x, taking advantage of the relation

$\log_2\left(x^2\right) = 2\log_2 x.$
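A sketch of that repeated-squaring idea in Python: the integer part of lb(x) comes from halving or doubling, and fractional bits are produced one at a time by squaring the remaining factor (the count of 32 fractional bits is an arbitrary precision choice):

```python
import math

def binary_log(x: float, frac_bits: int = 32) -> float:
    """Compute log2(x) for x > 0 by repeated squaring, one output bit per squaring."""
    if x <= 0:
        raise ValueError("x must be positive")
    result = 0.0
    # Normalize x into [1, 2), tracking the integer part of log2(x).
    while x < 1.0:
        x *= 2.0
        result -= 1.0
    while x >= 2.0:
        x /= 2.0
        result += 1.0
    # Squaring doubles log2(x); a squared factor >= 2 contributes the next binary digit.
    bit = 0.5
    for _ in range(frac_bits):
        x *= x
        if x >= 2.0:
            x /= 2.0
            result += bit
        bit /= 2.0
    return result

print(binary_log(10.0), math.log2(10.0))  # ~3.3219 in both cases
```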
Power series
Taylor series
For any real number z that satisfies 0 < z ≤ 2, the following formula holds:[nb 4][50]

$\ln z = (z-1) - \frac{(z-1)^2}{2} + \frac{(z-1)^3}{3} - \frac{(z-1)^4}{4} + \cdots = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k}(z-1)^k.$

Equating the function ln(z) to this infinite sum (series) is shorthand for saying that the function can be approximated to a more and more accurate value by the following expressions (known as partial sums):

$(z-1),\quad (z-1) - \frac{(z-1)^2}{2},\quad (z-1) - \frac{(z-1)^2}{2} + \frac{(z-1)^3}{3},\quad \ldots$
For example, with z = 1.5 the third approximation yields 0.4167, which is about 0.011 greater than
ln(1.5) = 0.405465, and the ninth approximation yields 0.40553, which is only about 0.0001
greater. The nth partial sum can approximate ln(z) with arbitrary precision, provided the number of
summands n is large enough.
In elementary calculus, the series is said to converge to the function ln(z), and the function is the limit of
the series. It is the Taylor series of the natural logarithm at z = 1. The Taylor series of ln(z) provides a
particularly useful approximation to ln(1 + z) when z is small, |z| < 1, since then

$\ln(1+z) = z - \frac{z^2}{2} + \frac{z^3}{3} - \cdots \approx z.$
For example, with z = 0.1 the first-order approximation gives ln(1.1) ≈ 0.1, which is less than 5% off
the correct value 0.0953.
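A small Python check of these partial-sum claims (the number of terms is passed explicitly):

```python
import math

def ln_taylor(z: float, terms: int) -> float:
    """Partial sum of the Taylor series of ln(z) around z = 1."""
    return sum((-1) ** (k + 1) * (z - 1) ** k / k for k in range(1, terms + 1))

print(ln_taylor(1.5, 3), ln_taylor(1.5, 9), math.log(1.5))  # ~0.4167, ~0.40553, 0.405465
print(ln_taylor(1.1, 1), math.log(1.1))                     # 0.1 vs ~0.0953
```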
More efficient than the Taylor series is the following series, based on the inverse hyperbolic tangent function:

$\ln z = 2\,\operatorname{artanh}\frac{z-1}{z+1} = 2\left(\frac{z-1}{z+1} + \frac{1}{3}\left(\frac{z-1}{z+1}\right)^{3} + \frac{1}{5}\left(\frac{z-1}{z+1}\right)^{5} + \cdots\right)$

for any real number z > 0.[nb 5][50] Using sigma notation, this is also written as

$\ln z = 2\sum_{k=0}^{\infty} \frac{1}{2k+1}\left(\frac{z-1}{z+1}\right)^{2k+1}.$
This series can be derived from the above Taylor series. It converges quicker than the Taylor series,
especially if z is close to 1. For example, for z = 1.5, the first three terms of the second series
approximate ln(1.5) with an error of about 3 × 10−6. The quick convergence for z close to 1 can be taken
advantage of in the following way: given a low-accuracy approximation y ≈ ln(z) and putting

$A = \frac{z}{\exp(y)},$

the logarithm of z is ln(z) = y + ln(A).
The better the initial approximation y is, the closer A is to 1, so its logarithm can be calculated efficiently.
A can be calculated using the exponential series, which converges quickly provided y is not too large.
Calculating the logarithm of larger z can be reduced to smaller values of z by writing z = a · 10b, so that
ln(z) = ln(a) + b · ln(10).
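A Python sketch of this inverse-hyperbolic-tangent series, including the reduction of large or small arguments by powers of 10 described just above (the term count of 60 is an arbitrary choice sufficient for double precision here):

```python
import math

def artanh_series(u: float, terms: int = 60) -> float:
    """Partial sum of artanh(u) = u + u**3/3 + u**5/5 + ... for |u| < 1."""
    return sum(u ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

def ln_series(z: float) -> float:
    """Natural logarithm via ln(z) = 2*artanh((z-1)/(z+1)), for z > 0.

    Large or small z is first reduced using ln(z) = ln(a) + b*ln(10)
    for z = a * 10**b with 1 <= a < 10.
    """
    if z <= 0:
        raise ValueError("z must be positive")
    b = 0
    while z >= 10.0:
        z /= 10.0
        b += 1
    while z < 1.0:
        z *= 10.0
        b -= 1
    ln10 = 2.0 * artanh_series((10.0 - 1.0) / (10.0 + 1.0))
    return 2.0 * artanh_series((z - 1.0) / (z + 1.0)) + b * ln10

print(ln_series(1.5), math.log(1.5))            # ~0.405465 in both cases
print(ln_series(123456.0), math.log(123456.0))  # ~11.7236 in both cases
```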
A closely related method can be used to compute the logarithm of integers. Putting z = (n + 1)/n in the above series, it follows that:

$\ln(n+1) = \ln(n) + 2\sum_{k=0}^{\infty} \frac{1}{2k+1}\left(\frac{1}{2n+1}\right)^{2k+1}.$

If the logarithm of a large integer n is known, then this series yields a fast converging series for log(n + 1), with a rate of convergence of $\left(\tfrac{1}{2n+1}\right)^{2}$.
The arithmetic–geometric mean yields high-precision approximations of the natural logarithm: ln(x) is approximated to a precision of 2^(−p) (that is, p precise bits) by the formula

$\ln x \approx \frac{\pi}{2\, M\!\left(1,\, 2^{2-m}/x\right)} - m\ln 2,$

where M(x, y) denotes the arithmetic–geometric mean of x and y, and m is chosen so that x · 2^m > 2^(p/2), to ensure the required precision. A larger m makes the M(x, y) calculation take more steps (the initial x and y are farther apart so it takes more steps to converge) but gives more precision. The constants π and ln(2) can be calculated with quickly converging series.
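A rough Python sketch of this approximation within double precision. The choice m = 30 is arbitrary but large enough that x · 2^m far exceeds 2^(53/2) for the test values; π and ln 2 are taken from the math module rather than computed from series:

```python
import math

def agm(a: float, b: float) -> float:
    """Arithmetic-geometric mean M(a, b)."""
    for _ in range(40):  # quadratic convergence; 40 iterations is far more than enough
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

def ln_agm(x: float, m: int = 30) -> float:
    """Approximate ln(x) via the arithmetic-geometric mean formula above."""
    s = 2.0 ** (2 - m) / x
    return math.pi / (2.0 * agm(1.0, s)) - m * math.log(2.0)

print(ln_agm(10.0), math.log(10.0))   # ~2.302585 in both cases
print(ln_agm(0.5), math.log(0.5))     # also works here, since 0.5 * 2**30 is still large
```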
Feynman's algorithm
While at Los Alamos National Laboratory working on the Manhattan Project, Richard Feynman
developed a bit-processing algorithm to compute the logarithm that is similar to long division and was
later used in the Connection Machine. The algorithm relies on the fact that every real number x where 1 < x < 2 can be represented as a product of distinct factors of the form 1 + 2^−k. The algorithm sequentially builds that product P, starting with P = 1 and k = 1: if P · (1 + 2^−k) < x, then it changes P to P · (1 + 2^−k). It then increases k by one regardless. The algorithm stops when k is large enough to give the desired accuracy. Because log(x) is the sum of the terms of the form log(1 + 2^−k) corresponding to those k for which the factor 1 + 2^−k was included in the product P, log(x) may be computed by simple addition, using a table of log(1 + 2^−k) for all k. Any base may be used for the logarithm table.[53]
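A minimal Python sketch of this bit-processing idea (the table depth of 40 factors is an arbitrary precision choice; natural logarithms are used for the table, though any base would do):

```python
import math

def feynman_log(x: float, depth: int = 40) -> float:
    """Approximate log(x) for 1 < x < 2 by greedily building a product of
    distinct factors (1 + 2**-k) that approaches x from below."""
    assert 1.0 < x < 2.0
    table = {k: math.log(1.0 + 2.0 ** -k) for k in range(1, depth + 1)}
    product, result = 1.0, 0.0
    for k in range(1, depth + 1):
        factor = 1.0 + 2.0 ** -k
        if product * factor < x:      # include this factor in the product P
            product *= factor
            result += table[k]        # log(x) accumulates as a simple table sum
    return result

print(feynman_log(1.7), math.log(1.7))  # close agreement
```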
Applications
Logarithms have many applications inside and outside mathematics. Some of these occurrences are related to the notion of scale invariance. For example, each chamber of the shell of a nautilus is an approximate copy of the next one, scaled by a constant factor. This gives rise to a logarithmic spiral.[54] Benford's law on the distribution of leading digits can also be explained by scale invariance.[55] Logarithms are also linked to self-similarity. For example, logarithms appear in the analysis of algorithms that solve a problem by dividing it into two similar smaller problems and patching their solutions.[56] The dimensions of self-similar geometric shapes, that is, shapes whose parts resemble the overall picture, are also based on logarithms.

[Figure: A nautilus shell displaying a logarithmic spiral.]
Logarithmic scales are useful for quantifying the relative change of a value as opposed to its absolute
difference. Moreover, because the logarithmic function log(x) grows very slowly for large x, logarithmic
scales are used to compress large-scale scientific data. Logarithms also occur in numerous scientific
formulas, such as the Tsiolkovsky rocket equation, the Fenske equation, or the Nernst equation.
Logarithmic scale
Scientific quantities are often expressed as logarithms of other
quantities, using a logarithmic scale. For example, the decibel is a
unit of measurement associated with logarithmic-scale quantities.
It is based on the common logarithm of ratios—10 times the
common logarithm of a power ratio or 20 times the common
logarithm of a voltage ratio. It is used to quantify the attenuation
or amplification of electrical signals,[57] to describe power levels
of sounds in acoustics,[58] and the absorbance of light in the fields
of spectrometry and optics. The signal-to-noise ratio describing
the amount of unwanted noise in relation to a (meaningful) signal
is also measured in decibels.[59] In a similar vein, the peak signal-
to-noise ratio is commonly used to assess the quality of sound and
image compression methods using the logarithm.[60]
[Figure: A logarithmic chart depicting the value of one Goldmark in Papiermarks during the German hyperinflation in the 1920s.]

The strength of an earthquake is measured by taking the common logarithm of the energy emitted at the quake. This is used in the moment magnitude scale or the Richter magnitude scale. For example, a 5.0 earthquake releases 32 times (10^1.5) and a 6.0 releases 1000 times (10^3) the energy of a 4.0.[61] Apparent magnitude measures the brightness of stars
logarithmically.[62] In chemistry the negative of the decimal logarithm, the decimal cologarithm, is
indicated by the letter p.[63] For instance, pH is the decimal cologarithm of the activity of hydronium ions (the form hydrogen ions H+ take in water).[64] The activity of hydronium ions in neutral water is 10^−7 mol·L^−1, hence a pH of 7. Vinegar typically has a pH of about 3. The difference of 4 corresponds to a ratio of 10^4 of the activity, that is, vinegar's hydronium ion activity is about 10^−3 mol·L^−1.
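Both the decibel and pH computations described above reduce to one-line logarithm calls; a minimal Python sketch with illustrative sample values:

```python
import math

# Decibels: 10 times the common logarithm of a power ratio.
power_out, power_in = 2.0, 1.0
gain_db = 10.0 * math.log10(power_out / power_in)
print(gain_db)  # ~3.01 dB for a doubling of power

# pH: the decimal cologarithm (negative base-10 logarithm) of hydronium activity.
activity_neutral, activity_vinegar = 1e-7, 1e-3
print(-math.log10(activity_neutral), -math.log10(activity_vinegar))  # 7.0 and 3.0
```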
Semilog (log–linear) graphs use the logarithmic scale concept for visualization: one axis, typically the
vertical one, is scaled logarithmically. For example, the chart at the right compresses the steep increase
from 1 million to 1 trillion to the same space (on the vertical axis) as the increase from 1 to 1 million. In
such graphs, exponential functions of the form f(x) = a · b x appear as straight lines with slope equal to
the logarithm of b. Log-log graphs scale both axes logarithmically, which causes functions of the form
f(x) = a · x k to be depicted as straight lines with slope equal to the exponent k. This is applied in
visualizing and analyzing power laws.[65]
Psychology
Logarithms occur in several laws describing human perception:[66][67] Hick's law proposes a logarithmic
relation between the time individuals take to choose an alternative and the number of choices they
have.[68] Fitts's law predicts that the time required to rapidly move to a target area is a logarithmic
function of the ratio between the distance to a target and the size of the target.[69] In psychophysics, the
Weber–Fechner law proposes a logarithmic relationship between stimulus and sensation such as the
actual vs. the perceived weight of an item a person is carrying.[70] (This "law", however, is less realistic
than more recent models, such as Stevens's power law.[71])
Psychological studies found that individuals with little mathematics education tend to estimate quantities
logarithmically, that is, they position a number on an unmarked line according to its logarithm, so that 10
is positioned as close to 100 as 100 is to 1000. Increasing education shifts this to a linear estimate
(positioning 1000 10 times as far away) in some circumstances, while logarithms are used when the
numbers to be plotted are difficult to plot linearly.[72][73]
Computational complexity
Analysis of algorithms is a branch of computer science that studies the performance of algorithms
(computer programs solving a certain problem).[80] Logarithms are valuable for describing algorithms
that divide a problem into smaller ones, and join the solutions of the subproblems.[81]
For example, to find a number in a sorted list, the binary search algorithm checks the middle entry and
proceeds with the half before or after the middle entry if the number is still not found. This algorithm
requires, on average, log2 (N) comparisons, where N is the list's length.[82] Similarly, the merge sort
algorithm sorts an unsorted list by dividing the list into halves and sorting these first before merging the
results. Merge sort algorithms typically require a time approximately proportional to N · log(N).[83] The
base of the logarithm is not specified here, because the result only changes by a constant factor when
another base is used. A constant factor is usually disregarded in the analysis of algorithms under the
standard uniform cost model.[84]
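A minimal Python sketch of binary search with a comparison counter, illustrating the log2(N) behaviour (the list contents and target are arbitrary):

```python
import math

def binary_search(sorted_list, target):
    """Return (index or None, number of middle-entry comparisons made)."""
    lo, hi, comparisons = 0, len(sorted_list) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if sorted_list[mid] == target:
            return mid, comparisons
        if sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, comparisons

data = list(range(0, 2_000_000, 2))     # one million sorted even numbers
print(binary_search(data, 1_337_336))   # found after roughly 20 comparisons
print(math.log2(len(data)))             # ~19.93, matching the comparison count
```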
A function f(x) is said to grow logarithmically if f(x) is (exactly or approximately) proportional to the
logarithm of x. (Biological descriptions of organism growth, however, use this term for an exponential
function.[85]) For example, any natural number N can be represented in binary form in no more than
log2 N + 1 bits. In other words, the amount of memory needed to store N grows logarithmically with N.
[Figure: Billiards on an oval billiard table. Two particles, starting at the center with an angle differing by one degree, take paths that diverge chaotically because of reflections at the boundary.]

Entropy is broadly a measure of the disorder of some system. In statistical thermodynamics, the entropy S of a physical system is defined as

$S = -k \sum_i p_i \ln(p_i).$

The sum is over all possible states i of the system in question, such as the positions of gas particles in a container. Moreover, pi is the probability that the state i is attained and k is the Boltzmann constant. Similarly, entropy in information theory measures the quantity of information. If a message recipient may expect any one of N possible messages with equal likelihood, then the amount of information conveyed by any one such message is quantified as log2 N bits.[86]
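A short Python sketch of the information-theoretic case (Shannon entropy in bits; the example distributions are arbitrary):

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy H = -sum(p * log2(p)) in bits, ignoring zero-probability states."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy_bits([0.25] * 4))          # 2.0 bits: four equally likely messages
print(entropy_bits([0.5, 0.25, 0.25]))   # 1.5 bits: a skewed distribution carries less
print(math.log2(4))                      # log2(N) for N equally likely messages
```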
Lyapunov exponents use logarithms to gauge the degree of chaoticity of a dynamical system. For
example, for a particle moving on an oval billiard table, even small changes of the initial conditions result
in very different paths of the particle. Such systems are chaotic in a deterministic way, because small
measurement errors of the initial state predictably lead to largely different final states.[87] At least one
Lyapunov exponent of a deterministically chaotic system is positive.
Fractals
Logarithms occur in definitions of the dimension of fractals.[88] Fractals are geometric objects that are
self-similar in the sense that small parts reproduce, at least roughly, the entire global structure. The
Sierpinski triangle (pictured) can be covered by three copies of itself, each having sides half the original length. This makes the Hausdorff dimension of this structure ln(3)/ln(2) ≈ 1.58. Another logarithm-based notion of dimension is obtained by counting the number of boxes needed to cover the fractal in question.

[Figure: The Sierpinski triangle is constructed by repeatedly replacing equilateral triangles by three smaller ones.]
Music
[Figure: Four different octaves shown on a linear scale, then shown on a logarithmic scale (as the ear hears them).]
Logarithms are related to musical tones and intervals. In equal temperament tunings, the frequency ratio
depends only on the interval between two tones, not on the specific frequency, or pitch, of the individual
tones. In the 12-tone equal temperament tuning common in modern Western music, each octave
(doubling of frequency) is broken into twelve equally spaced intervals called semitones. For example, if
the note A has a frequency of 440 Hz then the note B-flat has a frequency of 466 Hz. The interval
between A and B-flat is a semitone, as is the one between B-flat and B (frequency 493 Hz). Accordingly,
the frequency ratios agree:

$\frac{466}{440} \approx \frac{493}{466} \approx 1.059 \approx \sqrt[12]{2}.$
Intervals between arbitrary pitches can be measured in octaves by taking the base-2 logarithm of the
frequency ratio, can be measured in equally tempered semitones by taking the base-21/12 logarithm (12
times the base-2 logarithm), or can be measured in cents, hundredths of a semitone, by taking the
base-21/1200 logarithm (1200 times the base-2 logarithm). The latter is used for finer encoding, as it is
needed for finer measurements or non-equal temperaments.[89]
Interval (the two tones are played at the same time) | 1/12 tone | Semitone | Just major third | Major third | Tritone | Octave
Frequency ratio      | 2^(1/72) | 2^(1/12) | 5/4      | 2^(4/12) | 2^(6/12) | 2
Number of semitones  | 1/6      | 1        | ≈ 3.86   | 4        | 6        | 12
Number of cents      | 16 2/3   | 100      | ≈ 386.31 | 400      | 600      | 1200
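The interval measures in this table are all rescaled base-2 logarithms of the frequency ratio; a small Python sketch of the cents computation:

```python
import math

def cents(ratio: float) -> float:
    """Size of an interval in cents: 1200 times the binary logarithm of the ratio."""
    return 1200.0 * math.log2(ratio)

print(cents(2.0))        # 1200.0 cents: the octave
print(cents(466 / 440))  # ~99.4 cents: roughly a semitone (466 Hz is a rounded value)
print(cents(5 / 4))      # ~386.3 cents: the just major third
```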
Number theory
Natural logarithms are closely linked to counting prime numbers (2, 3, 5, 7, 11, ...), an important topic in
number theory. For any integer x, the quantity of prime numbers less than or equal to x is denoted π(x).
The prime number theorem asserts that π(x) is approximately given by

$\frac{x}{\ln x},$
in the sense that the ratio of π(x) and that fraction approaches 1 when x tends to infinity.[90] As a
consequence, the probability that a randomly chosen number between 1 and x is prime is inversely
proportional to the number of decimal digits of x. A far better estimate of π(x) is given by the offset logarithmic integral function Li(x), defined by

$\mathrm{Li}(x) = \int_2^x \frac{dt}{\ln t}.$
The Riemann hypothesis, one of the oldest open mathematical conjectures, can be stated in terms of
comparing π(x) and Li(x).[91] The Erdős–Kac theorem describing the number of distinct prime factors
also involves the natural logarithm.
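A quick Python comparison of the actual prime count with the x/ln(x) estimate, using a basic sieve (the cutoff of one million is arbitrary):

```python
import math

def prime_count(limit: int) -> int:
    """pi(limit): count primes <= limit with a sieve of Eratosthenes."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return sum(is_prime)

x = 10 ** 6
print(prime_count(x))           # 78498 primes below one million
print(round(x / math.log(x)))   # 72382, the prime number theorem estimate
```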
The logarithm of n factorial, n! = 1 · 2 · ⋯ · n, is given by

$\ln(n!) = \ln 1 + \ln 2 + \cdots + \ln n.$

This can be used to obtain Stirling's formula, an approximation of n! for large n.[92]
Generalizations
Complex logarithm
All the complex numbers a that solve the equation

$e^a = z$

are called complex logarithms of z, when z is (considered as) a complex number.
Using the geometrical interpretation of sine and cosine and their periodicity in 2π, any complex number z may be denoted as

$z = r\,\bigl(\cos(\varphi + 2k\pi) + i\sin(\varphi + 2k\pi)\bigr),$

where r is the absolute value of z and φ is one of its arguments, for any integer number k. Evidently the argument of z is not uniquely specified: both φ and φ' = φ + 2kπ are valid arguments of z for all integers k, because adding 2kπ radians or k⋅360°[nb 6] to φ
corresponds to "winding" around the origin counter-clock-wise by k turns. The resulting complex number
is always z, as illustrated at the right for k = 1. One may select exactly one of the possible arguments of z
as the so-called principal argument, denoted Arg(z), with a capital A, by requiring φ to belong to one,
conveniently selected turn, e.g. −π < φ ≤ π[93] or 0 ≤ φ < 2π.[94] These regions, where the argument of
z is uniquely determined are called branches of the argument function.
Euler's formula connects the trigonometric functions sine and cosine to the complex exponential:

$e^{i\varphi} = \cos\varphi + i\sin\varphi.$

Using this formula, the complex logarithms of z = r(cos φ + i sin φ) are the numbers

$a_k = \ln r + i(\varphi + 2k\pi),$

for arbitrary integers k, where ln r is the real natural logarithm of the absolute value of z. Taking k such that φ + 2kπ is within the defined interval for the principal arguments, then a_k is called
the principal value of the logarithm, denoted Log(z), again with a capital L. The principal argument of
any positive real number x is 0; hence Log(x) is a real number and equals the real (natural) logarithm.
However, the above formulas for logarithms of products and powers do not generalize to the principal
value of the complex logarithm.[96]
The illustration at the right depicts Log(z), confining the arguments of z to the interval (−π, π]. This
way the corresponding branch of the complex logarithm has discontinuities all along the negative real
x axis, which can be seen in the jump in the hue there. This discontinuity arises from jumping to the other
boundary in the same branch, when crossing a boundary, i.e. not changing to the corresponding k-value
of the continuously neighboring branch. Such a locus is called a branch cut. Dropping the range
restrictions on the argument makes the relations "argument of z", and consequently the "logarithm of z",
multi-valued functions.
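In Python, the cmath module returns exactly this principal value, and the other branches differ from it by integer multiples of 2πi; a minimal sketch:

```python
import cmath
import math

z = complex(-1.0, 0.0)
principal = cmath.log(z)        # Log(z), with argument in (-pi, pi]
print(principal)                # ~ 0 + 3.14159j, i.e. i*pi
print(cmath.exp(principal))     # recovers z (up to rounding)

# Every other complex logarithm of z differs by 2*pi*i*k for an integer k.
k = 1
other_branch = principal + 2j * math.pi * k
print(cmath.exp(other_branch))  # also recovers z
```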
In the context of finite groups exponentiation is given by repeatedly multiplying one group element b with itself. The discrete logarithm is the integer n solving the equation

$b^n = x,$

where x is an element of the group. Carrying out the exponentiation can be done efficiently, but the
discrete logarithm is believed to be very hard to calculate in some groups. This asymmetry has important
applications in public key cryptography, such as for example in the Diffie–Hellman key exchange, a
routine that allows secure exchanges of cryptographic keys over unsecured information channels.[100]
Zech's logarithm is related to the discrete logarithm in the multiplicative group of non-zero elements of a
finite field.[101]
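A tiny Python illustration of this asymmetry in the multiplicative group of integers modulo a prime: the forward direction uses fast built-in modular exponentiation, while the inverse shown here is a naive brute-force search. The prime 101 and base 2 are arbitrary toy values; real cryptographic groups are vastly larger:

```python
p, b = 101, 2   # toy prime modulus and base (2 generates the whole group mod 101)

def discrete_exp(n: int) -> int:
    """Fast: b**n mod p via Python's built-in modular exponentiation."""
    return pow(b, n, p)

def discrete_log(x: int) -> int:
    """Slow: brute-force search for n with b**n = x (mod p)."""
    value = 1
    for n in range(p):
        if value == x:
            return n
        value = (value * b) % p
    raise ValueError("no discrete logarithm found")

x = discrete_exp(53)
print(x, discrete_log(x))   # the exponent 53 is recovered, but only by trying each n
```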
Further logarithm-like inverse functions include the double logarithm ln(ln(x)), the super- or hyper-4-
logarithm (a slight variation of which is called iterated logarithm in computer science), the Lambert W
function, and the logit. They are the inverse functions of the double exponential function, tetration, of f(w) = w e^w,[102] and of the logistic function, respectively.[103]
Related concepts
From the perspective of group theory, the identity log(cd) = log(c) + log(d) expresses a group
isomorphism between positive reals under multiplication and reals under addition. Logarithmic functions
are the only continuous isomorphisms between these groups.[104] By means of that isomorphism, the
Haar measure (Lebesgue measure) dx on the reals corresponds to the Haar measure dx/x on the positive
reals.[105] The non-negative reals not only have a multiplication, but also have addition, and form a
semiring, called the probability semiring; this is in fact a semifield. The logarithm then takes
multiplication to addition (log multiplication), and takes addition to log addition (LogSumExp), giving an
isomorphism of semirings between the probability semiring and the log semiring.
Logarithmic one-forms df/f appear in complex analysis and algebraic geometry as differential forms with
logarithmic poles.[106]
The polylogarithm is the function defined by

$\mathrm{Li}_s(z) = \sum_{k=1}^{\infty} \frac{z^k}{k^s}.$

It is related to the natural logarithm by Li1(z) = −ln(1 − z). Moreover, Lis(1) equals the Riemann zeta function ζ(s).[107]
Notes
1. The restrictions on x and b are explained in the section "Analytic properties".
2. Proof: Taking the logarithm to base k of the defining identity x = b^(logb x), one gets logk x = logb x · logk b. The formula follows by solving for logb x.
3. Some mathematicians disapprove of this notation. In his 1985 autobiography, Paul Halmos criticized what he considered the "childish ln notation", which he said no mathematician had ever used.[16] The notation was invented by the 19th century mathematician I. Stringham.[17][18]
4. The same series holds for the principal value of the complex logarithm for complex numbers
z satisfying |z − 1| < 1.
5. The same series holds for the principal value of the complex logarithm for complex numbers
z with positive real part.
6. See radian for the conversion between 2π and 360 degree.
References
1. Hobson, Ernest William (1914), John Napier and the invention of logarithms, 1614; a lecture
(http://archive.org/details/johnnapierinvent00hobsiala), Cambridge University Press
2. Remmert, Reinhold. (1991), Theory of complex functions, New York: Springer-Verlag,
ISBN 0387971955, OCLC 21118309 (https://search.worldcat.org/oclc/21118309)
3. Kate, S.K.; Bhapkar, H.R. (2009), Basics Of Mathematics, Pune: Technical Publications,
ISBN 978-81-8431-755-8, chapter 1
4. All statements in this section can be found in Douglas Downing 2003, p. 275 or Kate &
Bhapkar 2009, p. 1-1, for example.
5. Bernstein, Stephen; Bernstein, Ruth (1999), Schaum's outline of theory and problems of
elements of statistics. I, Descriptive statistics and probability (https://archive.org/details/scha
umsoutlineof00bern), Schaum's outline series, New York: McGraw-Hill, ISBN 978-0-07-
005023-5, p. 21
6. Downing, Douglas (2003), Algebra the Easy Way (https://archive.org/details/algebraeasywa
y00down_0), Barron's Educational Series, Hauppauge, NY: Barron's, chapter 17, p. 275,
ISBN 978-0-7641-1972-9
7. Wegener, Ingo (2005), Complexity Theory: Exploring the limits of efficient algorithms, Berlin,
DE / New York, NY: Springer-Verlag, p. 20, ISBN 978-3-540-21045-0
8. van der Lubbe, Jan C.A. (1997), Information Theory (https://books.google.com/books?id=tB
uI_6MQTcwC&pg=PA3), Cambridge University Press, p. 3, ISBN 978-0-521-46760-5
9. Allen, Elizabeth; Triantaphillidou, Sophie (2011), The Manual of Photography (https://books.
google.com/books?id=IfWivY3mIgAC&pg=PA228), Taylor & Francis, p. 228, ISBN 978-0-
240-52037-7
10. Parkhurst, David F. (2007), Introduction to Applied Mathematics for Environmental Science
(https://books.google.com/books?id=h6yq_lOr8Z4C&pg=PA288) (illustrated ed.), Springer
Science & Business Media, p. 288, ISBN 978-0-387-34228-3
11. Rudin, Walter (1984), "Theorem 3.29", Principles of Mathematical Analysis (https://archive.o
rg/details/principlesofmath00rudi) (3rd ed., International student ed.), Auckland, NZ:
McGraw-Hill International, ISBN 978-0-07-085613-4
12. Goodrich, Michael T.; Tamassia, Roberto (2002), Algorithm Design: Foundations, analysis,
and internet examples, John Wiley & Sons, p. 23, "One of the interesting and sometimes
even surprising aspects of the analysis of data structures and algorithms is the ubiquitous
presence of logarithms ... As is the custom in the computing literature, we omit writing the
base b of the logarithm when b = 2 ."
13. "Part 2: Mathematics", [title not cited], Quantities and units (Report), International
Organization for Standardization, 2019, ISO 80000-2:2019 / EN ISO 80000-2
External links
Media related to Logarithm at Wikimedia Commons
The dictionary definition of logarithm at Wiktionary
Quotations related to History of logarithms at Wikiquote
A lesson on logarithms can be found on Wikiversity
Weisstein, Eric W., "Logarithm" (https://mathworld.wolfram.com/Logarithm.html), MathWorld
Khan Academy: Logarithms, free online micro lectures (https://web.archive.org/web/201212
18200616/http://www.khanacademy.org/math/algebra/logarithms-tutorial)
"Logarithmic function" (https://www.encyclopediaofmath.org/index.php?title=Logarithmic_fun
ction), Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Colin Byfleet, Educational video on logarithms (http://mediasite.oddl.fsu.edu/mediasite/View
er/?peid=003298f9a02f468c8351c50488d6c479), retrieved 12 October 2010
Edward Wright, Translation of Napier's work on logarithms (https://web.archive.org/web/200
21203005508/http://www.johnnapier.com/table_of_logarithms_001.htm), archived from the
original (http://www.johnnapier.com/table_of_logarithms_001.htm) on 3 December 2002,
retrieved 12 October 2010
Glaisher, James Whitbread Lee (1911), "Logarithm" (https://en.wikisource.org/wiki/1911_En
cyclop%C3%A6dia_Britannica/Logarithm), in Chisholm, Hugh (ed.), Encyclopædia
Britannica, vol. 16 (11th ed.), Cambridge University Press, pp. 868–77