
This glossary of calculus is a list of definitions about calculus, its sub-disciplines, and related fields.

Abel's test
A method of testing for the convergence of an infinite series.
absolute convergence
An infinite series of numbers is said to converge absolutely (or to be absolutely convergent) if the sum of the absolute values of the summands is finite. More precisely, a real or complex series Σ aₙ is said to converge absolutely if Σ |aₙ| = L for some real number L. Similarly, an improper integral of a function, ∫ f(x) dx, is said to converge absolutely if the integral of the absolute value of the integrand is finite, that is, if ∫ |f(x)| dx = L.
absolute maximum
The highest value a function attains.
absolute minimum
The lowest value a function attains.
absolute value
The absolute value or modulus |x| of a real number x is the non-negative value of x without regard to its sign. Namely, |x| = x for a positive x, |x| = −x for a negative x (in which case −x is positive), and |0| = 0. For example, the absolute value of 3 is 3, and the absolute value of −3 is also 3. The absolute value of a number may be thought of as its distance from zero.
alternating series
An infinite series whose terms alternate between positive and negative.
alternating series test
Is the method used to prove that an alternating series with terms that decrease in absolute value is a convergent series. The test was used by Gottfried Leibniz and is sometimes known as Leibniz's test, Leibniz's rule, or the Leibniz criterion.
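As a quick illustration (not from the cited sources), the alternating harmonic series 1 − 1/2 + 1/3 − ⋯ has terms that decrease in absolute value to zero, so the test guarantees convergence; the short Python sketch below prints partial sums approaching ln 2:

import math

# Partial sums of the alternating harmonic series 1 - 1/2 + 1/3 - ...
# The terms 1/n decrease to zero, so the alternating series test applies.
def partial_sum(n):
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

for n in (10, 100, 1000, 10000):
    print(n, partial_sum(n))
print("limit ln 2 =", math.log(2))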
annulus
A ring-shaped object, a region bounded by two concentric circles.
antiderivative
An antiderivative, primitive function, primitive integral or indefinite integral[Note 1] of a function f is a differentiable function F whose derivative is equal to the original function f. This can be stated symbolically as F′ = f.[1][2] The process of solving for antiderivatives is called antidifferentiation (or indefinite integration) and its opposite operation is called differentiation, which is the process of finding a derivative.
arcsin
The inverse function of the sine, usually taken with values in [−π/2, π/2]; see inverse trigonometric functions.
area under a curve
The definite integral of a non-negative function over an interval, interpreted geometrically as the area of the region bounded by the graph of the function, the horizontal axis, and the vertical lines at the limits of integration; see integral.
asymptote
In analytic geometry, an asymptote of a curve is a line such that the distance between the curve and the line approaches zero as one or both of the x or y coordinates tends to infinity. Some sources include the requirement that the curve may not cross the line infinitely often, but this is unusual for modern authors.[3] In projective geometry and related contexts, an asymptote of a curve is a line which is tangent to the curve at a point at infinity.[4][5]
automatic differentiation
In mathematics and computer algebra, automatic differentiation (AD), also called algorithmic differentiation or computational differentiation,[6][7] is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
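A minimal forward-mode sketch using dual numbers can make this concrete; the Dual class and the helper sin below are illustrative names, not part of any particular AD library:

import math

class Dual:
    """A value together with its derivative, for forward-mode AD."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

def sin(x):
    # Elementary function with its chain-rule derivative attached.
    return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)

# Differentiate f(x) = x * sin(x) + x at x = 2 by seeding dx/dx = 1.
x = Dual(2.0, 1.0)
y = x * sin(x) + x
print(y.value, y.deriv)   # f(2) and f'(2) = sin(2) + 2*cos(2) + 1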
average rate of change
The change in the value of a function divided by the corresponding change in its argument over an interval; geometrically, the slope of the secant line through the two endpoints of the interval on the graph of the function.
binomial coefficient
Any of the positive integers that occurs as a coefficient in the binomial theorem is a binomial coefficient. Commonly, a binomial coefficient is indexed by a pair of integers n ≥ k ≥ 0 and is written \binom{n}{k}. It is the coefficient of the x^k term in the polynomial expansion of the binomial power (1 + x)^n, and it is given by the formula
\binom{n}{k} = \frac{n!}{k!\,(n-k)!}
binomial theorem (or binomial expansion)
Describes the algebraic expansion of powers of a binomial.
bounded function
A function f defined on some set X with real or complex values is called bounded, if the set of its values is bounded. In other words, there exists a real number M such that
|f(x)| ≤ M
for all x in X. A function that is not bounded is said to be unbounded. Sometimes, if f(x) ≤ A for all x in X, then the function is said to be bounded above by A. On the other hand, if f(x) ≥ B for all x in X, then the function is said to be bounded below by B.
bounded sequence
A sequence whose terms are all contained in some bounded interval; that is, a sequence (aₙ) for which there is a real number M such that |aₙ| ≤ M for every n.
calculus
(From Latin calculus, literally 'small pebble', used for counting and calculations, as on an abacus)[8] is the mathematical study of continuous change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations.
Cavalieri's principle
Cavalieri's principle, a modern implementation of the method of indivisibles, named after Bonaventura Cavalieri, is as follows:[9]
  • 2-dimensional case: Suppose two regions in a plane are included between two parallel lines in that plane. If every line parallel to these two lines intersects both regions in line segments of equal length, then the two regions have equal areas.
  • 3-dimensional case: Suppose two regions in three-space (solids) are included between two parallel planes. If every plane parallel to these two planes intersects both regions in cross-sections of equal area, then the two regions have equal volumes.
chain rule
The chain rule is a formula for computing the derivative of the composition of two or more functions. That is, if f and g are functions, then the chain rule expresses the derivative of their composition f ∘ g (the function which maps x to f(g(x)) ) in terms of the derivatives of f and g and the product of functions as follows:
(f ∘ g)′ = (f′ ∘ g) · g′
This may equivalently be expressed in terms of the variable. Let F = f ∘ g, or equivalently, F(x) = f(g(x)) for all x. Then one can also write
F′(x) = f′(g(x)) g′(x)
The chain rule may be written in Leibniz's notation in the following way. If a variable z depends on the variable y, which itself depends on the variable x, so that y and z are therefore dependent variables, then z, via the intermediate variable of y, depends on x as well. The chain rule then states,
dz/dx = dz/dy · dy/dx
The two versions of the chain rule are related; if z = f(y) and y = g(x), then
dz/dx = dz/dy · dy/dx = f′(y) g′(x) = f′(g(x)) g′(x)
In integration, the counterpart to the chain rule is the substitution rule.
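As a hedged illustration (the composite function below is chosen only for the example), the chain-rule formula can be checked in Python against a symmetric finite-difference quotient:

import math

# h(x) = sin(x**2) is the composition f(g(x)) with f = sin and g(x) = x**2,
# so the chain rule gives h'(x) = cos(x**2) * 2*x.
def h(x):
    return math.sin(x ** 2)

def h_prime_chain_rule(x):
    return math.cos(x ** 2) * 2 * x

x, eps = 1.3, 1e-6
finite_difference = (h(x + eps) - h(x - eps)) / (2 * eps)
print(h_prime_chain_rule(x), finite_difference)   # the two values agree closely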
change of variables
Is a basic technique used to simplify problems in which the original variables are replaced with functions of other variables. The intent is that when expressed in new variables, the problem may become simpler, or equivalent to a better understood problem.
cofunction
A function f is cofunction of a function g if f(A) = g(B) whenever A and B are complementary angles.[10] This definition typically applies to trigonometric functions.[11][12] The prefix "co-" can be found already in Edmund Gunter's Canon triangulorum (1620).[13][14]
concave function
Is the negative of a convex function. A concave function is also synonymously called concave downwards, concave down, convex upwards, convex cap or upper convex.
constant of integration
The indefinite integral of a given function (i.e., the set of all antiderivatives of the function) on a connected domain is only defined up to an additive constant, the constant of integration.[15][16] This constant expresses an ambiguity inherent in the construction of antiderivatives. If a function f is defined on an interval and F is an antiderivative of f, then the set of all antiderivatives of f is given by the functions F(x) + C, where C is an arbitrary constant (meaning that any value for C makes F(x) + C a valid antiderivative). The constant of integration is sometimes omitted in lists of integrals for simplicity.
continuous function
Is a function for which sufficiently small changes in the input result in arbitrarily small changes in the output. Otherwise, a function is said to be a discontinuous function. A continuous function with a continuous inverse function is called a homeomorphism.
continuously differentiable
A function f is said to be continuously differentiable if the derivative f′(x) exists and is itself a continuous function.
contour integration
In the mathematical field of complex analysis, contour integration is a method of evaluating certain integrals along paths in the complex plane.[17][18][19]
convergence tests
Are methods of testing for the convergence, conditional convergence, absolute convergence, interval of convergence or divergence of an infinite series Σ aₙ.
convergent series
In mathematics, a series is the sum of the terms of an infinite sequence of numbers. Given an infinite sequence (a₁, a₂, a₃, …), the nth partial sum Sₙ is the sum of the first n terms of the sequence, that is,
S_n = \sum_{k=1}^{n} a_k
A series is convergent if the sequence of its partial sums (S₁, S₂, S₃, …) tends to a limit; that means that the partial sums become closer and closer to a given number when the number of their terms increases. More precisely, a series converges, if there exists a number ℓ such that for any arbitrarily small positive number ε, there is a (sufficiently large) integer N such that for all n ≥ N,
|S_n − ℓ| ≤ ε
If the series is convergent, the number ℓ (necessarily unique) is called the sum of the series. Any series that is not convergent is said to be divergent.
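For instance, the geometric series with ratio 1/2 converges to 2 while the harmonic series diverges; the small Python sketch below (illustrative only) compares their partial sums:

# Partial sums S_n of the geometric series sum(1/2**k) approach 2,
# while those of the harmonic series sum(1/k) grow without bound.
def partial_sums(terms, n):
    s, out = 0.0, []
    for k in range(1, n + 1):
        s += terms(k)
        out.append(s)
    return out

print(partial_sums(lambda k: 1 / 2 ** (k - 1), 20)[-1])   # close to 2
print(partial_sums(lambda k: 1 / k, 20)[-1])              # keeps growing as n increases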
convex function
In mathematics, a real-valued function defined on an n-dimensional interval is called convex (or convex downward or concave upward) if the line segment between any two points on the graph of the function lies above or on the graph, in a Euclidean space (or more generally a vector space) of at least two dimensions. Equivalently, a function is convex if its epigraph (the set of points on or above the graph of the function) is a convex set. For a twice differentiable function of a single variable, if the second derivative is always greater than or equal to zero for its entire domain then the function is convex.[20] Well-known examples of convex functions include the quadratic function x² and the exponential function e^x.
Cramer's rule
In linear algebra, Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns, valid whenever the system has a unique solution. It expresses the solution in terms of the determinants of the (square) coefficient matrix and of matrices obtained from it by replacing one column by the column vector of right-hand-sides of the equations. It is named after Gabriel Cramer (1704–1752), who published the rule for an arbitrary number of unknowns in 1750,[21][22] although Colin Maclaurin also published special cases of the rule in 1748[23] (and possibly knew of it as early as 1729).[24][25][26]
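A minimal sketch of the rule for a 2 × 2 system, with illustrative names and no claim to be a general implementation:

# Solve  a11*x + a12*y = b1,  a21*x + a22*y = b2  by Cramer's rule.
def cramer_2x2(a11, a12, a21, a22, b1, b2):
    det = a11 * a22 - a12 * a21          # determinant of the coefficient matrix
    if det == 0:
        raise ValueError("no unique solution")
    x = (b1 * a22 - a12 * b2) / det      # first column replaced by (b1, b2)
    y = (a11 * b2 - b1 * a21) / det      # second column replaced by (b1, b2)
    return x, y

print(cramer_2x2(2, 1, 1, 3, 5, 10))     # solves 2x + y = 5, x + 3y = 10 -> (1.0, 3.0)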
critical point
A critical point or stationary point of a differentiable function of a real or complex variable is any value in its domain where its derivative is 0.[27][28]
curve
A curve (also called a curved line in older texts) is, generally speaking, an object similar to a line but that need not be straight.
curve sketching
In geometry, curve sketching (or curve tracing) includes techniques that can be used to produce a rough idea of overall shape of a plane curve given its equation without computing the large numbers of points required for a detailed plot. It is an application of the theory of curves to find their main features. Here input is an equation. In digital geometry it is a method of drawing a curve pixel by pixel. Here input is an array (digital image).
damped sine wave
Is a sinusoidal function whose amplitude approaches zero as time increases.[29]
degree of a polynomial
Is the highest degree of its monomials (individual terms) with non-zero coefficients. The degree of a term is the sum of the exponents of the variables that appear in it, and thus is a non-negative integer.
derivative
The derivative of a function of a real variable measures the sensitivity to change of the function value (output value) with respect to a change in its argument (input value). Derivatives are a fundamental tool of calculus. For example, the derivative of the position of a moving object with respect to time is the object's velocity: this measures how quickly the position of the object changes when time advances.
derivative test
A derivative test uses the derivatives of a function to locate its critical points and to determine whether each point is a local maximum, a local minimum, or a saddle point. Derivative tests can also give information about the concavity of a function.
differentiable function
A differentiable function of one real variable is a function whose derivative exists at each point in its domain. As a result, the graph of a differentiable function must have a (non-vertical) tangent line at each point in its domain, be relatively smooth, and cannot contain any breaks, bends, or cusps.
differential (infinitesimal)
The term differential is used in calculus to refer to an infinitesimal (infinitely small) change in some varying quantity. For example, if x is a variable, then a change in the value of x is often denoted Δx (pronounced delta x). The differential dx represents an infinitely small change in the variable x. The idea of an infinitely small or infinitely slow change is extremely useful intuitively, and there are a number of ways to make the notion mathematically precise. Using calculus, it is possible to relate the infinitely small changes of various variables to each other mathematically using derivatives. If y is a function of x, then the differential dy of y is related to dx by the formula
dy = \frac{dy}{dx}\, dx
where dy/dx denotes the derivative of y with respect to x. This formula summarizes the intuitive idea that the derivative of y with respect to x is the limit of the ratio of differences Δy/Δx as Δx becomes infinitesimal.
differential calculus
Is a subfield of calculus[30] concerned with the study of the rates at which quantities change. It is one of the two traditional divisions of calculus, the other being integral calculus, the study of the area beneath a curve.[31]
differential equation
Is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two.
differential operator
An operator defined as a function of the differentiation operator; it maps a function to another function, as d/dx maps f to its derivative f′.
differential of a function
In calculus, the differential represents the principal part of the change in a function y = f(x) with respect to changes in the independent variable. The differential dy is defined by
dy = f′(x)\, dx
where f′(x) is the derivative of f with respect to x, and dx is an additional real variable (so that dy is a function of x and dx). The notation is such that the equation
dy = \frac{dy}{dx}\, dx
holds, where the derivative is represented in the Leibniz notation dy/dx, and this is consistent with regarding the derivative as the quotient of the differentials. One also writes
df(x) = f′(x)\, dx
The precise meaning of the variables dy and dx depends on the context of the application and the required level of mathematical rigor. The domain of these variables may take on a particular geometrical significance if the differential is regarded as a particular differential form, or analytical significance if the differential is regarded as a linear approximation to the increment of a function. Traditionally, the variables dx and dy are considered to be very small (infinitesimal), and this interpretation is made rigorous in non-standard analysis.
differentiation rules
Rules, such as the sum rule, product rule, quotient rule and chain rule, for computing the derivative of a function built up from simpler functions.
direct comparison test
A convergence test in which an infinite series or an improper integral is compared to one with known convergence properties.
Dirichlet's test
Is a method of testing for the convergence of a series. It is named after its author Peter Gustav Lejeune Dirichlet, and was published posthumously in the Journal de Mathématiques Pures et Appliquées in 1862.[32] The test states that if (aₙ) is a sequence of real numbers and (bₙ) a sequence of complex numbers satisfying
  • a_{n+1} ≤ a_n
  • lim_{n→∞} a_n = 0
  • |b₁ + b₂ + ⋯ + b_N| ≤ M for every positive integer N
where M is some constant, then the series
\sum_{n=1}^{\infty} a_n b_n
converges.
disc integration
Also known in integral calculus as the disc method, is a means of calculating the volume of a solid of revolution when integrating along an axis "parallel" to the axis of revolution.
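For example, rotating the half-circle y = √(r² − x²) about the x-axis produces a sphere, and summing the volumes π f(x)² dx of thin discs recovers 4/3 πr³; a rough numerical sketch of this idea (a plain midpoint Riemann sum, illustrative only):

import math

# Disc method: volume = integral of pi * f(x)**2 dx along the axis of revolution.
def volume_of_revolution(f, a, b, n=100000):
    dx = (b - a) / n
    return sum(math.pi * f(a + (i + 0.5) * dx) ** 2 * dx for i in range(n))

r = 1.0
f = lambda x: math.sqrt(max(r * r - x * x, 0.0))   # half-circle of radius r
print(volume_of_revolution(f, -r, r))              # approx 4/3 * pi ~ 4.18879
print(4 / 3 * math.pi * r ** 3)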
divergent series
Is an infinite series that is not convergent, meaning that the infinite sequence of the partial sums of the series does not have a finite limit.
discontinuity
Continuous functions are of utmost importance in mathematics and its applications. However, not all functions are continuous. If a function is not continuous at a point in its domain, one says that it has a discontinuity there. The set of all points of discontinuity of a function may be a discrete set, a dense set, or even the entire domain of the function.
dot product
In mathematics, the dot product or scalar product[note 1] is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors) and returns a single number. In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used and often called "the" inner product (or rarely projection product) of Euclidean space even though it is not the only inner product that can be defined on Euclidean space; see also inner product space.
double integral
The multiple integral is a definite integral of a function of more than one real variable, for example, f(x, y) or f(x, y, z). Integrals of a function of two variables over a region in R2 are called double integrals, and integrals of a function of three variables over a region of R3 are called triple integrals.[33]
e (mathematical constant)
The number e is a mathematical constant that is the base of the natural logarithm: the unique number whose natural logarithm is equal to one. It is approximately equal to 2.71828,[34] and is the limit of (1 + 1/n)n as n approaches infinity, an expression that arises in the study of compound interest. It can also be calculated as the sum of the infinite series[35]
e = \sum_{n=0}^{\infty} \frac{1}{n!} = 1 + \frac{1}{1} + \frac{1}{1 \cdot 2} + \frac{1}{1 \cdot 2 \cdot 3} + \cdots
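Both characterizations are easy to check numerically; a brief Python sketch (the truncation points are arbitrary):

import math

# e as the limit of (1 + 1/n)**n and as the sum of the series 1/k!.
n = 1_000_000
print((1 + 1 / n) ** n)                                   # approaches e from below
print(sum(1 / math.factorial(k) for k in range(20)))      # the series converges very fast
print(math.e)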
elliptic integral
In integral calculus, elliptic integrals originally arose in connection with the problem of giving the arc length of an ellipse. They were first studied by Giulio Fagnano and Leonhard Euler (c. 1750). Modern mathematics defines an "elliptic integral" as any function f which can be expressed in the form
f(x) = \int_{c}^{x} R\left(t, \sqrt{P(t)}\right)\, dt
where R is a rational function of its two arguments, P is a polynomial of degree 3 or 4 with no repeated roots, and c is a constant.
essential discontinuity
For an essential discontinuity, at least one of the two one-sided limits does not exist or is infinite. Consider, for example, the function
f(x) = \begin{cases} \sin\frac{1}{x-1} & \text{for } x < 1 \\ 0 & \text{for } x = 1 \\ \frac{1}{x-1} & \text{for } x > 1 \end{cases}
Then, the point x₀ = 1 is an essential discontinuity. In this case, the one-sided limit from the left does not exist and the one-sided limit from the right is infinite – thus satisfying twice the conditions of essential discontinuity. So x₀ is an essential discontinuity, infinite discontinuity, or discontinuity of the second kind. (This is distinct from the term essential singularity, which is often used when studying functions of complex variables.)
Euler method
Euler's method is a numerical method for solving first-order differential equations with a given initial value. It is the most basic explicit method for numerical integration of ordinary differential equations and is the simplest Runge–Kutta method. The Euler method is named after Leonhard Euler, who treated it in his book Institutionum calculi integralis (published 1768–1770).[36]
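A minimal sketch of the method (the step size and the test equation y′ = y are chosen only for illustration): each step advances the solution along the tangent line prescribed by the differential equation.

import math

# Euler's method for y' = f(t, y), y(t0) = y0, with fixed step size h.
def euler(f, t0, y0, h, steps):
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)     # follow the tangent line over one step
        t += h
    return y

# Test problem y' = y, y(0) = 1, whose exact solution is e**t.
approx = euler(lambda t, y: y, 0.0, 1.0, h=0.001, steps=1000)
print(approx, math.exp(1.0))   # approx ~ 2.7169 vs e ~ 2.7183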
exponential function
In mathematics, an exponential function is a function of the form
f(x) = b^x
where b is a positive real number, and in which the argument x occurs as an exponent. For real numbers c and d, a function of the form f(x) = a b^{cx+d} is also an exponential function, as it can be rewritten as
a b^{cx+d} = (a b^d) (b^c)^x
extreme value theorem
States that if a real-valued function f is continuous on the closed interval [a,b], then f must attain a maximum and a minimum, each at least once. That is, there exist numbers c and d in [a,b] such that:
f(c) ≥ f(x) ≥ f(d)  for all x in [a,b]
A related theorem is the boundedness theorem which states that a continuous function f in the closed interval [a,b] is bounded on that interval. That is, there exist real numbers m and M such that:
m ≤ f(x) ≤ M  for all x in [a,b]
The extreme value theorem enriches the boundedness theorem by saying that not only is the function bounded, but it also attains its least upper bound as its maximum and its greatest lower bound as its minimum.
extremum
In mathematical analysis, the maxima and minima (the respective plurals of maximum and minimum) of a function, known collectively as extrema (the plural of extremum), are the largest and smallest value of the function, either within a given range (the local or relative extrema) or on the entire domain of a function (the global or absolute extrema).[37][38][39] Pierre de Fermat was one of the first mathematicians to propose a general technique, adequality, for finding the maxima and minima of functions. As defined in set theory, the maximum and minimum of a set are the greatest and least elements in the set, respectively. Unbounded infinite sets, such as the set of real numbers, have no minimum or maximum.
Faà di Bruno's formula
Is an identity in mathematics generalizing the chain rule to higher derivatives, named after Francesco Faà di Bruno (1855, 1857), though he was not the first to state or prove the formula. In 1800, more than 50 years before Faà di Bruno, the French mathematician Louis François Antoine Arbogast stated the formula in a calculus textbook,[40] considered the first published reference on the subject.[41] Perhaps the most well-known form of Faà di Bruno's formula says that
\frac{d^{n}}{dx^{n}} f(g(x)) = \sum \frac{n!}{m_{1}!\, 1!^{m_{1}}\, m_{2}!\, 2!^{m_{2}} \cdots m_{n}!\, n!^{m_{n}}} \, f^{(m_{1}+\cdots+m_{n})}(g(x)) \prod_{j=1}^{n} \left( g^{(j)}(x) \right)^{m_{j}}
where the sum is over all n-tuples of nonnegative integers (m1, …, mn) satisfying the constraint
1 \cdot m_{1} + 2 \cdot m_{2} + 3 \cdot m_{3} + \cdots + n \cdot m_{n} = n
Sometimes, to give it a memorable pattern, it is written in a way in which the coefficients that have the combinatorial interpretation discussed below are less explicit:
\frac{d^{n}}{dx^{n}} f(g(x)) = \sum \frac{n!}{m_{1}!\, m_{2}! \cdots m_{n}!} \, f^{(m_{1}+\cdots+m_{n})}(g(x)) \prod_{j=1}^{n} \left( \frac{g^{(j)}(x)}{j!} \right)^{m_{j}}
Combining the terms with the same value of m1 + m2 + ... + mn = k and noticing that m_j has to be zero for j > n − k + 1 leads to a somewhat simpler formula expressed in terms of Bell polynomials B_{n,k}(x₁, ..., x_{n−k+1}):
\frac{d^{n}}{dx^{n}} f(g(x)) = \sum_{k=0}^{n} f^{(k)}(g(x)) \cdot B_{n,k}\left( g'(x), g''(x), \ldots, g^{(n-k+1)}(x) \right)
first-degree polynomial
A polynomial of degree one, that is, a polynomial of the form ax + b with a ≠ 0; its graph is a straight line.
first derivative test
The first derivative test examines a function's monotonic properties (where the function is increasing or decreasing), focusing on a particular point in its domain. If the function "switches" from increasing to decreasing at the point, then the function attains a local maximum at that point. Similarly, if the function "switches" from decreasing to increasing at the point, then it attains a local minimum at that point. If the function fails to "switch" and remains increasing or remains decreasing, then no local maximum or minimum is attained there.
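A hedged numerical sketch of the test (the example function and step size are illustrative): sample the derivative just to the left and right of a critical point and classify the point by the sign change.

# First derivative test for f(x) = x**3 - 3*x, whose derivative is 3*x**2 - 3.
def f_prime(x):
    return 3 * x ** 2 - 3

def classify(c, h=1e-3):
    left, right = f_prime(c - h), f_prime(c + h)
    if left > 0 > right:
        return "local maximum"
    if left < 0 < right:
        return "local minimum"
    return "no extremum (no sign switch)"

for c in (-1.0, 1.0):           # the critical points, where f'(c) = 0
    print(c, classify(c))        # -1.0 is a local maximum, 1.0 a local minimum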
Fractional calculus
Is a branch of mathematical analysis that studies the several different possibilities of defining real number powers or complex number powers of the differentiation operator D
D f(x) = \frac{d}{dx} f(x),
and of the integration operator J
J f(x) = \int_{0}^{x} f(s)\, ds,[Note 2]
and developing a calculus for such operators generalizing the classical one. In this context, the term powers refers to iterative application of a linear operator to a function, in some analogy to function composition acting on a variable, i.e. f∘²(x) = f ∘ f(x) = f(f(x)).
frustum
In geometry, a frustum (plural: frusta or frustums) is the portion of a solid (normally a cone or pyramid) that lies between one or two parallel planes cutting it. A right frustum is a parallel truncation of a right pyramid or right cone.[42]
function
Is a process or a relation that associates each element x of a set X, the domain of the function, to a single element y of another set Y (possibly the same set), the codomain of the function. If the function is called f, this relation is denoted y = f(x) (read f of x), the element x is the argument or input of the function, and y is the value of the function, the output, or the image of x by f.[43] The symbol that is used for representing the input is the variable of the function (one often says that f is a function of the variable x).
function composition
Is an operation that takes two functions f and g and produces a function h such that h(x) = g(f(x)). In this operation, the function g is applied to the result of applying the function f to x. That is, the functions f : X → Y and g : Y → Z are composed to yield a function that maps x in X to g(f(x)) in Z.
fundamental theorem of calculus
The fundamental theorem of calculus is a theorem that links the concept of differentiating a function with the concept of integrating a function. The first part of the theorem, sometimes called the first fundamental theorem of calculus, states that one of the antiderivatives (also called indefinite integral), say F, of some function f may be obtained as the integral of f with a variable bound of integration. This implies the existence of antiderivatives for continuous functions.[44] Conversely, the second part of the theorem, sometimes called the second fundamental theorem of calculus, states that the integral of a function f over some interval can be computed by using any one, say F, of its infinitely many antiderivatives. This part of the theorem has key practical applications, because explicitly finding the antiderivative of a function by symbolic integration avoids numerical integration to compute integrals. This provides generally a better numerical accuracy.
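The second part of the theorem can be illustrated numerically: a Riemann-sum approximation of the integral of f over [a, b] should match F(b) − F(a) for an antiderivative F. A small sketch, with the integrand chosen only for the example:

import math

# f(x) = cos(x) has antiderivative F(x) = sin(x), so the integral of f over
# [0, pi/2] should equal F(pi/2) - F(0) = 1 by the fundamental theorem.
def riemann_sum(f, a, b, n=100000):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

a, b = 0.0, math.pi / 2
print(riemann_sum(math.cos, a, b))          # approx 1.0
print(math.sin(b) - math.sin(a))            # exactly 1.0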
general Leibniz rule
The general Leibniz rule,[45] named after Gottfried Wilhelm Leibniz, generalizes the product rule (which is also known as "Leibniz's rule"). It states that if f and g are n-times differentiable functions, then the product fg is also n-times differentiable and its nth derivative is given by
(fg)^{(n)} = \sum_{k=0}^{n} \binom{n}{k} f^{(n-k)} g^{(k)}
where \binom{n}{k} is the binomial coefficient and f^{(j)} denotes the jth derivative of f (in particular, f^{(0)} = f). This can be proved by using the product rule and mathematical induction.
global maximum
The largest value that a function attains on its entire domain (also called the absolute maximum); see extremum.[46][47][48]
global minimum
The smallest value that a function attains on its entire domain (also called the absolute minimum); see extremum.[49][50][51]
golden spiral
In geometry, a golden spiral is a logarithmic spiral whose growth factor is φ, the golden ratio.[52] That is, a golden spiral gets wider (or further from its origin) by a factor of φ for every quarter turn it makes.
gradient
Is a multi-variable generalization of the derivative. While a derivative can be defined on functions of a single variable, for functions of several variables, the gradient takes its place. The gradient is a vector-valued function, as opposed to a derivative, which is scalar-valued.
harmonic progression
In mathematics, a harmonic progression (or harmonic sequence) is a progression formed by taking the reciprocals of an arithmetic progression. It is a sequence of the form
\frac{1}{a},\ \frac{1}{a+d},\ \frac{1}{a+2d},\ \ldots,\ \frac{1}{a+kd}
where −a/d is not a natural number and k is a natural number. Equivalently, a sequence is a harmonic progression when each term is the harmonic mean of the neighboring terms. It is not possible for a harmonic progression (other than the trivial case where a = 1 and k = 0) to sum to an integer. The reason is that, necessarily, at least one denominator of the progression will be divisible by a prime number that does not divide any other denominator.[53]
higher derivative
Let f be a differentiable function, and let f′ be its derivative. The derivative of f′ (if it has one) is written f′′ and is called the second derivative of f. Similarly, the derivative of the second derivative, if it exists, is written f′′′ and is called the third derivative of f. Continuing this process, one can define, if it exists, the nth derivative as the derivative of the (n−1)th derivative. These repeated derivatives are called higher-order derivatives. The nth derivative is also called the derivative of order n.
homogeneous linear differential equation
A differential equation can be homogeneous in either of two respects. A first order differential equation is said to be homogeneous if it may be written
f(x, y)\, dy = g(x, y)\, dx
where f and g are homogeneous functions of the same degree of x and y. In this case, the change of variable y = ux leads to an equation of the form
\frac{dx}{x} = h(u)\, du
which is easy to solve by integration of the two members. Otherwise, a differential equation is homogeneous if it is a homogeneous function of the unknown function and its derivatives. In the case of linear differential equations, this means that there are no constant terms. The solutions of any linear ordinary differential equation of any order may be deduced by integration from the solution of the homogeneous equation obtained by removing the constant term.
hyperbolic function
Hyperbolic functions are analogs of the ordinary trigonometric, or circular, functions.
identity function
Also called an identity relation or identity map or identity transformation, is a function that always returns the same value that was used as its argument. In equations, the function is given by f(x) = x.
imaginary number
Is a complex number that can be written as a real number multiplied by the imaginary unit i,[note 2] which is defined by its property i² = −1.[54] The square of an imaginary number bi is −b². For example, 5i is an imaginary number, and its square is −25. Zero is considered to be both real and imaginary.[55]
implicit function
In mathematics, an implicit equation is a relation of the form R(x₁, …, xₙ) = 0, where R is a function of several variables (often a polynomial). For example, the implicit equation of the unit circle is x² + y² − 1 = 0. An implicit function is a function that is defined implicitly by an implicit equation, by associating one of the variables (the value) with the others (the arguments).[56]: 204–206  Thus, an implicit function for y in the context of the unit circle is defined implicitly by x² + y² − 1 = 0. This implicit equation defines y as a function of x only if −1 ≤ x ≤ 1 and one considers only non-negative (or non-positive) values for the values of the function. The implicit function theorem provides conditions under which some kinds of relations define an implicit function, namely relations defined as the indicator function of the zero set of some continuously differentiable multivariate function.
improper fraction
Common fractions can be classified as either proper or improper. When the numerator and the denominator are both positive, the fraction is called proper if the numerator is less than the denominator, and improper otherwise.[57][58] In general, a common fraction is said to be a proper fraction if the absolute value of the fraction is strictly less than one—that is, if the fraction is greater than −1 and less than 1.[59][60] It is said to be an improper fraction, or sometimes top-heavy fraction,[61] if the absolute value of the fraction is greater than or equal to 1. Examples of proper fractions are 2/3, –3/4, and 4/9; examples of improper fractions are 9/4, –4/3, and 3/3.
improper integral
In mathematical analysis, an improper integral is the limit of a definite integral as an endpoint of the interval(s) of integration approaches either a specified real number, ∞, or −∞, or in some instances as both endpoints approach limits. Such an integral is often written symbolically just like a standard definite integral, in some cases with infinity as a limit of integration. Specifically, an improper integral is a limit of the form:
\lim_{b \to \infty} \int_{a}^{b} f(x)\, dx
or
\lim_{a \to -\infty} \int_{a}^{b} f(x)\, dx
in which one takes a limit in one or the other (or sometimes both) endpoints (Apostol 1967, §10.23).
inflection point
In differential calculus, an inflection point, point of inflection, flex, or inflection (British English: inflexion) is a point on a continuous plane curve at which the curve changes from being concave (concave downward) to convex (concave upward), or vice versa.
instantaneous rate of change
The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value. For this reason, the derivative is often described as the "instantaneous rate of change", the ratio of the instantaneous change in the dependent variable to that of the independent variable.
instantaneous velocity
If we consider v as velocity and x as the displacement (change in position) vector, then we can express the (instantaneous) velocity of a particle or object, at any particular time t, as the derivative of the position with respect to time:
v = \lim_{\Delta t \to 0} \frac{\Delta x}{\Delta t} = \frac{dx}{dt}
From this derivative equation, in the one-dimensional case it can be seen that the area under a velocity vs. time (v vs. t) graph is the displacement, x. In calculus terms, the integral of the velocity function v(t) is the displacement function x(t):
x(t) = x(t_{0}) + \int_{t_{0}}^{t} v(t')\, dt'
Since the derivative of the position with respect to time gives the change in position (in metres) divided by the change in time (in seconds), velocity is measured in metres per second (m/s). Although the concept of an instantaneous velocity might at first seem counter-intuitive, it may be thought of as the velocity that the object would continue to travel at if it stopped accelerating at that moment.
integral
An integral assigns numbers to functions in a way that can describe displacement, area, volume, and other concepts that arise by combining infinitesimal data. Integration is one of the two main operations of calculus, with its inverse operation, differentiation, being the other.
integral symbol
The integral symbol:
∫ (Unicode), \int (LaTeX)
is used to denote integrals and antiderivatives in mathematics.
integrand
The function to be integrated in an integral.
integration by parts
In calculus, and more generally in mathematical analysis, integration by parts or partial integration is a process that finds the integral of a product of functions in terms of the integral of their derivative and antiderivative. It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be readily derived by integrating the product rule of differentiation. If u = u(x) and du = u′(x) dx, while v = v(x) and dv = v′(x) dx, then integration by parts states that:
\int_{a}^{b} u(x)\, v'(x)\, dx = \Big[ u(x)\, v(x) \Big]_{a}^{b} - \int_{a}^{b} u'(x)\, v(x)\, dx
or more compactly:
\int u\, dv = uv - \int v\, du
Mathematician Brook Taylor discovered integration by parts, first publishing the idea in 1715.[62][63] More general formulations of integration by parts exist for the Riemann–Stieltjes and Lebesgue–Stieltjes integrals. The discrete analogue for sequences is called summation by parts.
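For example, taking u = x and dv = e^x dx gives ∫₀¹ x e^x dx = [x e^x]₀¹ − ∫₀¹ e^x dx = e − (e − 1) = 1; a short numerical check of this value (illustrative only):

import math

# Check integration by parts on the integral of x * e**x over [0, 1]:
# with u = x and dv = e**x dx the formula yields exactly 1.
def midpoint_integral(f, a, b, n=100000):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

numeric = midpoint_integral(lambda x: x * math.exp(x), 0.0, 1.0)
by_parts = (1 * math.e) - (math.e - 1)   # [x*e**x]_0^1 minus integral of e**x over [0, 1]
print(numeric, by_parts)                  # both close to 1.0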
integration by substitution
Also known as u-substitution, is a method for solving integrals. Using the fundamental theorem of calculus often requires finding an antiderivative. For this and other reasons, integration by substitution is an important tool in mathematics. It is the counterpart to the chain rule for differentiation.
intermediate value theorem
In mathematical analysis, the intermediate value theorem states that if a continuous function, f, with an interval, [a, b], as its domain, takes values f(a) and f(b) at each end of the interval, then it also takes any value between f(a) and f(b) at some point within the interval. This has two important corollaries:
  1. If a continuous function has values of opposite sign inside an interval, then it has a root in that interval (Bolzano's theorem).[64]
  2. The image of a continuous function over an interval is itself an interval. (A bisection sketch based on the first corollary follows below.)
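The first corollary underlies the bisection method for root finding; a minimal sketch (the tolerance and test function are illustrative):

# Bisection: if f(a) and f(b) have opposite signs, a root lies between them.
def bisect(f, a, b, tol=1e-10):
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "endpoints must have opposite signs"
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m                # root is in the left half
        else:
            a, fa = m, f(m)      # root is in the right half
    return (a + b) / 2

print(bisect(lambda x: x ** 2 - 2, 0.0, 2.0))   # approx sqrt(2) = 1.41421356...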
inverse trigonometric functions
(Also called arcus functions,[65][66][67][68][69] antitrigonometric functions[70] or cyclometric functions[71][72][73]) are the inverse functions of the trigonometric functions (with suitably restricted domains). Specifically, they are the inverses of the sine, cosine, tangent, cotangent, secant, and cosecant functions, and are used to obtain an angle from any of the angle's trigonometric ratios.
jump discontinuity
Consider the function
f(x) = \begin{cases} x^{2} & \text{for } x < 1 \\ 0 & \text{for } x = 1 \\ 2 - (x-1)^{2} & \text{for } x > 1 \end{cases}
Then, the point x₀ = 1 is a jump discontinuity. In this case, a single limit does not exist because the one-sided limits, L⁻ and L⁺, exist and are finite, but are not equal: since L⁻ ≠ L⁺, the limit L does not exist. Then, x₀ is called a jump discontinuity, step discontinuity, or discontinuity of the first kind. For this type of discontinuity, the function f may have any value at x₀.
Lebesgue integration
In mathematics, the integral of a non-negative function of a single variable can be regarded, in the simplest case, as the area between the graph of that function and the x-axis. The Lebesgue integral extends the integral to a larger class of functions. It also extends the domains on which these functions can be defined.
L'Hôpital's rule
L'Hôpital's rule or L'Hospital's rule uses derivatives to help evaluate limits involving indeterminate forms. Application (or repeated application) of the rule often converts an indeterminate form to an expression that can be evaluated by substitution, allowing easier evaluation of the limit. The rule is named after the 17th-century French mathematician Guillaume de l'Hôpital. Although the contribution of the rule is often attributed to L'Hôpital, the theorem was first introduced to L'Hôpital in 1694 by the Swiss mathematician Johann Bernoulli. L'Hôpital's rule states that for functions f and g which are differentiable on an open interval I except possibly at a point c contained in I, if lim_{x→c} f(x) = lim_{x→c} g(x) = 0 or ±∞, g′(x) ≠ 0 for all x in I with x ≠ c, and the limit lim_{x→c} f′(x)/g′(x) exists, then
\lim_{x \to c} \frac{f(x)}{g(x)} = \lim_{x \to c} \frac{f'(x)}{g'(x)}
The differentiation of the numerator and denominator often simplifies the quotient or converts it to a limit that can be evaluated directly.
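A standard worked example, added here for illustration: the indeterminate form 0/0 at x = 0 is resolved by applying the rule twice,
\lim_{x \to 0} \frac{1 - \cos x}{x^{2}} = \lim_{x \to 0} \frac{\sin x}{2x} = \lim_{x \to 0} \frac{\cos x}{2} = \frac{1}{2}.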
limit comparison test
The limit comparison test allows one to determine the convergence of one series based on the convergence of another.
limit of a function
The value that a function (or sequence) approaches as its input (or index) approaches some value; limits are used to define continuity, derivatives, and integrals.
limits of integration
The endpoints a and b of the interval over which a definite integral is taken; a is called the lower limit and b the upper limit of integration.
linear combination
In mathematics, a linear combination is an expression constructed from a set of terms by multiplying each term by a constant and adding the results (e.g. a linear combination of x and y would be any expression of the form ax + by, where a and b are constants).[74][75][76] The concept of linear combinations is central to linear algebra and related fields of mathematics.
linear equation
A linear equation is an equation relating two or more variables to each other in the form a₁x₁ + ⋯ + aₙxₙ + b = 0, with the highest power of each variable being 1.
linear system
A collection of linear equations involving the same set of variables; see system of linear equations.
list of integrals
A table of known antiderivatives of common functions, used as an aid when evaluating integrals.
logarithm
The inverse operation to exponentiation: the logarithm of a number x to a base b is the exponent to which b must be raised to produce x.
logarithmic differentiation
A technique in which the natural logarithm of a function is differentiated, using the identity (ln f)′ = f′/f, instead of the function itself; it is useful for products, quotients, and variable exponents.
lower bound
A value that is less than or equal to every element of a given set; compare upper bound.
mean value theorem
States that for a function f continuous on the closed interval [a, b] and differentiable on the open interval (a, b), there is some c in (a, b) with f′(c) = (f(b) − f(a)) / (b − a); geometrically, some tangent line is parallel to the secant line through the endpoints.
monotonic function
A function that is entirely non-increasing or entirely non-decreasing on its domain.
multiple integral
A definite integral of a function of more than one real variable; see double integral.
Multiplicative calculus
Any of several alternative calculi in which the difference quotients and sums of the classical derivative and integral are replaced by ratios and products; see non-Newtonian calculus.
multivariable calculus
The extension of calculus to functions of several variables, including partial differentiation and multiple integration.
natural logarithm
The natural logarithm of a number is its logarithm to the base of the mathematical constant e, where e is an irrational and transcendental number approximately equal to 2.718281828459. The natural logarithm of x is generally written as ln x, loge x, or sometimes, if the base e is implicit, simply log x.[77] Parentheses are sometimes added for clarity, giving ln(x), loge(x) or log(x). This is done in particular when the argument to the logarithm is not a single symbol, to prevent ambiguity.
non-Newtonian calculus
Any of a family of alternatives to the classical calculus of Newton and Leibniz, such as multiplicative calculus, built on operations other than ordinary addition and subtraction.
nonstandard calculus
The application of nonstandard analysis, in which infinitesimals are rigorous members of the hyperreal number system, to differential and integral calculus.
notation for differentiation
Any of the common notations for derivatives, including Leibniz's dy/dx, Lagrange's prime notation f′(x), Newton's dot notation, and Euler's operator notation Df.
numerical integration
The family of algorithms, such as the trapezoidal rule and Simpson's rule, for computing the numerical value of a definite integral; also called quadrature.
one-sided limit
Either of the two limits of a function f(x) as x approaches a specified point from the left (from below) or from the right (from above).
ordinary differential equation
A differential equation containing functions of a single independent variable and their derivatives, as distinguished from a partial differential equation.
Pappus's centroid theorem
(Also known as the Guldinus theorem, Pappus–Guldinus theorem or Pappus's theorem) is either of two related theorems dealing with the surface areas and volumes of surfaces and solids of revolution.
parabola
Is a plane curve that is mirror-symmetrical and is approximately U-shaped. It fits several superficially different other mathematical descriptions, which can all be proved to define exactly the same curves.
paraboloid
A quadric surface of the kind swept out by a parabola; the elliptic paraboloid (such as z = x² + y²) and the saddle-shaped hyperbolic paraboloid are the two types.
partial derivative
The derivative of a function of several variables with respect to one of those variables, with the others held constant; written, for example, ∂f/∂x.
partial differential equation
A differential equation that contains unknown functions of several variables and their partial derivatives.
partial fraction decomposition
The expression of a rational function as the sum of a polynomial and fractions with simpler denominators; it is frequently used to integrate rational functions.
particular solution
A single solution of a differential equation, obtained from the general solution by choosing specific values of the arbitrary constants, typically so as to satisfy given initial or boundary conditions.
piecewise-defined function
A function defined by multiple sub-functions that apply to certain intervals of the function's domain.
position vector
A vector from the origin of a coordinate system to a point, representing the position of that point.
power rule
The differentiation rule stating that the derivative of x^n is n·x^(n−1) for any real exponent n.
product integral
Any of several multiplicative analogues of the ordinary (additive) integral, in which values are multiplied rather than summed.
product rule
The rule for differentiating a product of functions: (fg)′ = f′g + fg′.
proper fraction
A common fraction whose absolute value is strictly less than one; see improper fraction.
proper rational function
A rational function in which the degree of the numerator is less than the degree of the denominator.
Pythagorean theorem
The relation among the three sides of a right triangle: the square of the hypotenuse equals the sum of the squares of the other two sides, a² + b² = c².
Pythagorean trigonometric identity
The identity sin²θ + cos²θ = 1, the expression of the Pythagorean theorem in terms of trigonometric functions.
quadratic function
In algebra, a quadratic function, a quadratic polynomial, a polynomial of degree 2, or simply a quadratic, is a polynomial function with one or more variables in which the highest-degree term is of the second degree. For example, a quadratic function in three variables x, y, and z contains exclusively terms x2, y2, z2, xy, xz, yz, x, y, z, and a constant:
f(x, y, z) = ax² + by² + cz² + dxy + exz + fyz + gx + hy + iz + j
with at least one of the coefficients a, b, c, d, e, or f of the second-degree terms being non-zero. A univariate (single-variable) quadratic function has the form[78]
f(x) = ax² + bx + c,  a ≠ 0
in the single variable x. The graph of a univariate quadratic function is a parabola whose axis of symmetry is parallel to the y-axis. If the quadratic function is set equal to zero, then the result is a quadratic equation. The solutions to the univariate equation are called the roots of the univariate function. The bivariate case in terms of variables x and y has the form
f(x, y) = ax² + by² + cxy + dx + ey + f
with at least one of a, b, c not equal to zero, and an equation setting this function equal to zero gives rise to a conic section (a circle or other ellipse, a parabola, or a hyperbola). In general there can be an arbitrarily large number of variables, in which case the resulting surface is called a quadric, but the highest degree term must be of degree 2, such as x2, xy, yz, etc.
quadratic polynomial
A polynomial of degree two; see quadratic function.
quotient rule
A formula for finding the derivative of a function that is the ratio of two functions: if h(x) = f(x)/g(x), then h′(x) = (f′(x)g(x) − f(x)g′(x)) / g(x)².
radian
Is the SI unit for measuring angles, and is the standard unit of angular measure used in many areas of mathematics. The length of an arc of a unit circle is numerically equal to the measurement in radians of the angle that it subtends; one radian is just under 57.3 degrees (expansion at OEIS A072097). The unit was formerly an SI supplementary unit, but this category was abolished in 1995 and the radian is now considered an SI derived unit.[79] Separately, the SI unit of solid angle measurement is the steradian.
ratio test
A convergence test based on the limit of |a_{n+1}/a_n|: the series Σ aₙ converges absolutely if this limit is less than 1 and diverges if it is greater than 1; the test is inconclusive when the limit equals 1.
reciprocal function
The function f(x) = 1/x, which maps each non-zero number to its multiplicative inverse.
reciprocal rule
The rule for differentiating the reciprocal of a function: (1/f)′ = −f′/f²; it is the special case of the quotient rule with constant numerator 1.
Riemann integral
The definition of the integral of a function over an interval as the limit of Riemann sums: sums of the form Σ f(xᵢ*) Δxᵢ taken over tagged partitions of the interval, which approach the integral as the partitions become finer.
removable discontinuity
A point at which a function is not continuous (or not defined) although its limit exists there; redefining the function at that single point to equal the limit removes the discontinuity.
Rolle's theorem
States that a function that is continuous on [a, b], differentiable on (a, b), and takes equal values at the endpoints has at least one point c in (a, b) where f′(c) = 0; it is a special case of the mean value theorem.
root test
A convergence test based on the limit superior of the nth root of |aₙ|: the series converges absolutely if this quantity is less than 1 and diverges if it is greater than 1.
scalar
A quantity described by a single number (an element of the underlying field, such as a real number), in contrast to a vector.
secant line
A line that intersects a curve in at least two points; the slope of the secant line through two points on the graph of a function equals the average rate of change of the function between them.
second-degree polynomial
A polynomial of degree two; see quadratic function.
second derivative
The derivative of the derivative of a function; its sign determines the concavity of the function's graph.
second derivative test
Classifies a critical point c of a twice-differentiable function by the sign of f′′(c): a local minimum if f′′(c) > 0, a local maximum if f′′(c) < 0, and inconclusive if f′′(c) = 0.
second-order differential equation
A differential equation in which the highest derivative of the unknown function that appears is the second derivative.
series
The sum of the terms of an infinite sequence; see convergent series and divergent series.
shell integration
Also known as the shell method, a means of calculating the volume of a solid of revolution by integrating along an axis perpendicular to the axis of revolution, summing the volumes of thin cylindrical shells.
Simpson's rule
A method of numerical integration that approximates a definite integral by replacing the integrand with interpolating parabolas over successive pairs of subintervals.
sine
The trigonometric function that, for an acute angle in a right triangle, equals the ratio of the length of the opposite side to the length of the hypotenuse.
sine wave
A curve describing smooth periodic oscillation, the graph of a function of the form y = A sin(ωt + φ).
slope field
A graphical representation of a first-order differential equation y′ = f(x, y) in which a short segment of slope f(x, y) is drawn at each point of a grid; also called a direction field.
squeeze theorem
States that if g(x) ≤ f(x) ≤ h(x) near a point and g and h have the same limit L at that point, then f also has limit L there; also called the sandwich theorem.
sum rule in differentiation
The rule stating that the derivative of a sum of functions is the sum of their derivatives: (f + g)′ = f′ + g′.
sum rule in integration
The rule stating that the integral of a sum of functions is the sum of their integrals.
summation
The addition of a sequence of numbers, commonly written with sigma notation, Σ.
supplementary angle
Either of two angles whose measures add up to 180 degrees (π radians).
surface area
The total area of the surface of a three-dimensional object; for surfaces of revolution and parametrized surfaces it can be computed as an integral.
system of linear equations
A collection of two or more linear equations involving the same set of variables, to be satisfied simultaneously.
table of integrals
A list of known antiderivatives of common functions, used as an aid in integration; see list of integrals.
Taylor series
The representation of a function as an infinite sum of terms computed from the values of its derivatives at a single point, f(x) = Σ f⁽ⁿ⁾(a)(x − a)ⁿ/n!; the case a = 0 is called a Maclaurin series.
Taylor's theorem
Gives an approximation of a k-times differentiable function around a point by a polynomial of degree k (the kth-order Taylor polynomial), together with an expression for the remainder (error) term.
tangent
The tangent line to a plane curve at a given point is the straight line that just touches the curve at that point, having the same slope as the curve there; also, the trigonometric function tan θ = sin θ / cos θ.
third-degree polynomial
A polynomial of degree three; also called a cubic polynomial.
third derivative
The derivative of the second derivative of a function; it measures the rate at which the second derivative changes.
toroid
A surface of revolution generated by revolving a closed plane curve about an axis that lies in the plane of the curve and does not intersect it; a torus is the case generated by a circle.
total differential
For a function of several variables, the sum of its partial derivatives each multiplied by the differential of the corresponding variable; for z = f(x, y), dz = (∂f/∂x) dx + (∂f/∂y) dy.
trigonometric functions
The functions of an angle, such as sine, cosine, and tangent, that relate the angles of a triangle to the lengths of its sides and that describe periodic phenomena.
trigonometric identities
Equalities involving trigonometric functions that hold for every value of the occurring variables, such as sin²θ + cos²θ = 1.
trigonometric integral
Any of a family of non-elementary special functions defined by integrals of trigonometric expressions, such as the sine integral Si(x) and the cosine integral Ci(x).
trigonometric substitution
The substitution of a trigonometric expression for another expression in an integrand, typically to eliminate square roots of the forms √(a² − x²), √(a² + x²), or √(x² − a²).
trigonometry
The branch of mathematics that studies relationships between side lengths and angles of triangles and the trigonometric functions derived from them.
triple integral
A definite integral of a function of three real variables over a region of R³; see double integral.
upper bound
A value that is greater than or equal to every element of a given set; compare lower bound.
variable
A symbol that represents a number which may vary or is not yet specified; in y = f(x), x is the independent variable and y the dependent variable.
vector
A quantity having both magnitude and direction, often represented by a directed line segment or by a tuple of components.
vector calculus
The branch of mathematics concerned with differentiation and integration of vector fields, chiefly in three-dimensional Euclidean space; its basic operations include the gradient, divergence, and curl.
washer
The ring-shaped region between two concentric circles (an annulus), which arises as a cross-section in the washer method for volumes of revolution.
washer method
A variant of disc integration for solids of revolution with a hole: each cross-section is a washer, whose area is the area of the outer disc minus the area of the inner disc.


References

  1. ^ Stewart, James (2008). Calculus: Early Transcendentals (6th ed.). Brooks/Cole. ISBN 978-0-495-01166-8.
  2. ^ Larson, Ron; Edwards, Bruce H. (2009). Calculus (9th ed.). Brooks/Cole. ISBN 978-0-547-16702-2.
  3. ^ "Asymptotes" by Louis A. Talman
  4. ^ Williamson, Benjamin (1899), "Asymptotes", An elementary treatise on the differential calculus
  5. ^ Nunemacher, Jeffrey (1999), "Asymptotes, Cubic Curves, and the Projective Plane", Mathematics Magazine, 72 (3): 183–192, CiteSeerX 10.1.1.502.72, doi:10.2307/2690881, JSTOR 2690881
  6. ^ Neidinger, Richard D. (2010). "Introduction to Automatic Differentiation and MATLAB Object-Oriented Programming" (PDF). SIAM Review. 52 (3): 545–563. doi:10.1137/080743627. S2CID 17134969.
  7. ^ Baydin, Atilim Gunes; Pearlmutter, Barak; Radul, Alexey Andreyevich; Siskind, Jeffrey (2018). "Automatic differentiation in machine learning: a survey". Journal of Machine Learning Research. 18: 1–43.
  8. ^ "Calculus". OxfordDictionaries. Archived from the original on April 30, 2013. Retrieved 15 September 2017.
  9. ^ Eves, Howard (March 1991). "Two Surprising Theorems on Cavalieri Congruence". The College Mathematics Journal. 22 (2): 118–124. doi:10.1080/07468342.1991.11973367.
  10. ^ Hall, Arthur Graham; Frink, Fred Goodrich (January 1909). "Chapter II. The Acute Angle [10] Functions of complementary angles". Written at Ann Arbor, Michigan, USA. Trigonometry. Vol. Part I: Plane Trigonometry. New York, USA: Henry Holt and Company / Norwood Press / J. S. Cushing Co. - Berwick & Smith Co., Norwood, Massachusetts, USA. pp. 11–12. Retrieved 2017-08-12.
  11. ^ Aufmann, Richard; Nation, Richard (2014). Algebra and Trigonometry (8 ed.). Cengage Learning. p. 528. ISBN 978-128596583-3. Retrieved 2017-07-28.
  12. ^ Bales, John W. (2012) [2001]. "5.1 The Elementary Identities". Precalculus. Archived from the original on 2017-07-30. Retrieved 2017-07-30.
  13. ^ Gunter, Edmund (1620). Canon triangulorum.
  14. ^ Roegel, Denis, ed. (2010-12-06). "A reconstruction of Gunter's Canon triangulorum (1620)" (Research report). HAL. inria-00543938. Archived from the original on 2017-07-28. Retrieved 2017-07-28.
  15. ^ Stewart, James (2008). Calculus: Early Transcendentals (6th ed.). Brooks/Cole. ISBN 978-0-495-01166-8.
  16. ^ Larson, Ron; Edwards, Bruce H. (2009). Calculus (9th ed.). Brooks/Cole. ISBN 978-0-547-16702-2.
  17. ^ Stalker, John (1998). Complex Analysis: Fundamentals of the Classical Theory of Functions. Springer. p. 77. ISBN 0-8176-4038-X.
  18. ^ Bak, Joseph; Newman, Donald J. (1997). "Chapters 11 & 12". Complex Analysis. Springer. pp. 130–156. ISBN 0-387-94756-6.
  19. ^ Krantz, Steven George (1999). "Chapter 2". Handbook of Complex Variables. Springer. ISBN 0-8176-4011-8.
  20. ^ "Lecture Notes 2" (PDF). www.stat.cmu.edu. Retrieved 3 March 2017.
  21. ^ Cramer, Gabriel (1750). "Introduction à l'Analyse des lignes Courbes algébriques" (in French). Geneva: Europeana. pp. 656–659. Retrieved 2012-05-18.
  22. ^ Kosinski, A. A. (2001). "Cramer's Rule is due to Cramer". Mathematics Magazine. 74 (4): 310–312. doi:10.2307/2691101. JSTOR 2691101.
  23. ^ MacLaurin, Colin (1748). A Treatise of Algebra, in Three Parts. Printed for A. Millar & J. Nourse.
  24. ^ Boyer, Carl B. (1968). A History of Mathematics (2nd ed.). Wiley. p. 431.
  25. ^ Katz, Victor (2004). A History of Mathematics (Brief ed.). Pearson Education. pp. 378–379.
  26. ^ Hedman, Bruce A. (1999). "An Earlier Date for "Cramer's Rule"" (PDF). Historia Mathematica. 26 (4): 365–368. doi:10.1006/hmat.1999.2247. S2CID 121056843.
  27. ^ Stewart, James (2008). Calculus: Early Transcendentals (6th ed.). Brooks/Cole. ISBN 978-0-495-01166-8.
  28. ^ Larson, Ron; Edwards, Bruce H. (2009). Calculus (9th ed.). Brooks/Cole. ISBN 978-0-547-16702-2.
  29. ^ Douglas C. Giancoli (2000). [Physics for Scientists and Engineers with Modern Physics (3rd Edition)]. Prentice Hall. ISBN 0-13-021517-1
  30. ^ "Definition of DIFFERENTIAL CALCULUS". www.merriam-webster.com. Retrieved 2018-09-26.
  31. ^ "Integral Calculus - Definition of Integral calculus by Merriam-Webster". www.merriam-webster.com. Retrieved 2018-05-01.
  32. ^ Démonstration d’un théorème d’Abel. Journal de mathématiques pures et appliquées 2nd series, tome 7 (1862), p. 253-255 Archived 2011-07-21 at the Wayback Machine.
  33. ^ Stewart, James (2008). Calculus: Early Transcendentals (6th ed.). Brooks Cole Cengage Learning. ISBN 978-0-495-01166-8.
  34. ^ Oxford English Dictionary, 2nd ed.: natural logarithm
  35. ^ Encyclopedic Dictionary of Mathematics 142.D
  36. ^ Butcher 2003, p. 45; Hairer, Nørsett & Wanner 1993, p. 35
  37. ^ Stewart, James (2008). Calculus: Early Transcendentals (6th ed.). Brooks/Cole. ISBN 978-0-495-01166-8.
  38. ^ Larson, Ron; Edwards, Bruce H. (2009). Calculus (9th ed.). Brooks/Cole. ISBN 978-0-547-16702-2.
  39. ^ Thomas, George B.; Weir, Maurice D.; Hass, Joel (2010). Thomas' Calculus: Early Transcendentals (12th ed.). Addison-Wesley. ISBN 978-0-321-58876-0.
  40. ^ (Arbogast 1800).
  41. ^ According to Craik (2005, pp. 120–122): see also the analysis of Arbogast's work by Johnson (2002, p. 230).
  42. ^ William F. Kern, James R. Bland, Solid Mensuration with proofs, 1938, p. 67
  43. ^ MacLane, Saunders; Birkhoff, Garrett (1967). Algebra (First ed.). New York: Macmillan. pp. 1–13.
  44. ^ Spivak, Michael (1980), Calculus (2nd ed.), Houston, Texas: Publish or Perish Inc.
  45. ^ Olver, Peter J. (2000). Applications of Lie Groups to Differential Equations. Springer. pp. 318–319. ISBN 9780387950006.
  46. ^ Stewart, James (2008). Calculus: Early Transcendentals (6th ed.). Brooks/Cole. ISBN 978-0-495-01166-8.
  47. ^ Larson, Ron; Edwards, Bruce H. (2009). Calculus (9th ed.). Brooks/Cole. ISBN 978-0-547-16702-2.
  48. ^ Thomas, George B.; Weir, Maurice D.; Hass, Joel (2010). Thomas' Calculus: Early Transcendentals (12th ed.). Addison-Wesley. ISBN 978-0-321-58876-0.
  49. ^ Stewart, James (2008). Calculus: Early Transcendentals (6th ed.). Brooks/Cole. ISBN 978-0-495-01166-8.
  50. ^ Larson, Ron; Edwards, Bruce H. (2009). Calculus (9th ed.). Brooks/Cole. ISBN 978-0-547-16702-2.
  51. ^ Thomas, George B.; Weir, Maurice D.; Hass, Joel (2010). Thomas' Calculus: Early Transcendentals (12th ed.). Addison-Wesley. ISBN 978-0-321-58876-0.
  52. ^ Chang, Yu-sung, "Golden Spiral Archived 2019-07-28 at the Wayback Machine", The Wolfram Demonstrations Project.
  53. ^ Erdős, P. (1932), "Egy Kürschák-féle elemi számelméleti tétel általánosítása" [Generalization of an elementary number-theoretic theorem of Kürschák] (PDF), Mat. Fiz. Lapok (in Hungarian), 39: 17–24. As cited by Graham, Ronald L. (2013), "Paul Erdős and Egyptian fractions", Erdős centennial, Bolyai Soc. Math. Stud., vol. 25, János Bolyai Math. Soc., Budapest, pp. 289–309, doi:10.1007/978-3-642-39286-3_9, ISBN 978-3-642-39285-6, MR 3203600.
  54. ^ Uno Ingard, K. (1988). "Chapter 2". Fundamentals of Waves and Oscillations. Cambridge University Press. p. 38. ISBN 0-521-33957-X.
  55. ^ Sinha, K.C. (2008). A Text Book of Mathematics Class XI (Second ed.). Rastogi Publications. p. 11.2. ISBN 978-81-7133-912-9.
  56. ^ Chiang, Alpha C. (1984). Fundamental Methods of Mathematical Economics (Third ed.). New York: McGraw-Hill. ISBN 0-07-010813-7.
  57. ^ "World Wide Words: Vulgar fractions". World Wide Words. Retrieved 2014-10-30.
  58. ^ Weisstein, Eric W. "Improper Fraction". MathWorld.
  59. ^ Laurel (31 March 2004). "Math Forum – Ask Dr. Math:Can Negative Fractions Also Be Proper or Improper?". Retrieved 2014-10-30.
  60. ^ "New England Compact Math Resources". Archived from the original on 2012-04-15. Retrieved 2019-06-16.
  61. ^ Greer, A. (1986). New comprehensive mathematics for 'O' level (2nd ed., reprinted. ed.). Cheltenham: Thornes. p. 5. ISBN 978-0-85950-159-0. Retrieved 2014-07-29.
  62. ^ "Brook Taylor". History.MCS.St-Andrews.ac.uk. Retrieved May 25, 2018.
  63. ^ "Brook Taylor". Stetson.edu. Archived from the original on January 3, 2018. Retrieved May 25, 2018.
  64. ^ Weisstein, Eric W. "Bolzano's Theorem". MathWorld.
  65. ^ Taczanowski, Stefan (October 1978). "On the optimization of some geometric parameters in 14 MeV neutron activation analysis". Nuclear Instruments and Methods. 155 (3): 543–546. Bibcode:1978NucIM.155..543T. doi:10.1016/0029-554X(78)90541-4.
  66. ^ Hazewinkel, Michiel (1994) [1987]. Encyclopaedia of Mathematics (unabridged reprint ed.). Kluwer Academic Publishers / Springer Science & Business Media. ISBN 978-155608010-4.[page needed]
  67. ^ Ebner, Dieter (2005-07-25). Preparatory Course in Mathematics (PDF) (6 ed.). Department of Physics, University of Konstanz. Archived (PDF) from the original on 2017-07-26. Retrieved 2017-07-26.[page needed]
  68. ^ Mejlbro, Leif (2010-11-11). Stability, Riemann Surfaces, Conformal Mappings - Complex Functions Theory (PDF) (1 ed.). Ventus Publishing ApS / Bookboon. ISBN 978-87-7681-702-2. Archived (PDF) from the original on 2017-07-26. Retrieved 2017-07-26.[page needed]
  69. ^ Durán, Mario (2012). Mathematical methods for wave propagation in science and engineering. 1: Fundamentals (1 ed.). Ediciones UC. p. 88. ISBN 978-956141314-6.[page needed]
  70. ^ Hall, Arthur Graham; Frink, Fred Goodrich (January 1909). "Chapter II. The Acute Angle [14] Inverse trigonometric functions". Written at Ann Arbor, Michigan, USA. Trigonometry. Part I: Plane Trigonometry. New York, USA: Henry Holt and Company / Norwood Press / J. S. Cushing Co. - Berwick & Smith Co., Norwood, Massachusetts, USA. p. 15. Retrieved 2017-08-12. […] α = arcsin m: It is frequently read "arc-sinem" or "anti-sine m," since two mutually inverse functions are said each to be the anti-function of the other. […] A similar symbolic relation holds for the other trigonometric functions. […] This notation is universally used in Europe and is fast gaining ground in this country. A less desirable symbol, α = sin-1m, is still found in English and American texts. The notation α = inv sin m is perhaps better still on account of its general applicability. […]
  71. ^ Klein, Christian Felix (1924) [1902]. Elementarmathematik vom höheren Standpunkt aus: Arithmetik, Algebra, Analysis (in German). 1 (3rd ed.). Berlin: J. Springer.
  72. ^ Klein, Christian Felix (2004) [1932]. Elementary Mathematics from an Advanced Standpoint: Arithmetic, Algebra, Analysis. Translated by Hedrick, E. R.; Noble, C. A. (Translation of 3rd German ed.). Dover Publications, Inc. / The Macmillan Company. ISBN 978-0-48643480-3. Retrieved 2017-08-13.
  73. ^ Dörrie, Heinrich (1965). Triumph der Mathematik. Translated by Antin, David. Dover Publications. p. 69. ISBN 978-0-486-61348-2.
  74. ^ Lay, David C. (2006). Linear Algebra and Its Applications (3rd ed.). Addison–Wesley. ISBN 0-321-28713-4.
  75. ^ Strang, Gilbert (2006). Linear Algebra and Its Applications (4th ed.). Brooks Cole. ISBN 0-03-010567-6.
  76. ^ Axler, Sheldon (2002). Linear Algebra Done Right (2nd ed.). Springer. ISBN 0-387-98258-2.
  77. ^ Mortimer, Robert G. (2005). Mathematics for physical chemistry (3rd ed.). Academic Press. p. 9. ISBN 0-12-508347-5. Extract of page 9
  78. ^ "Quadratic Equation -- from Wolfram MathWorld". Retrieved January 6, 2013.
  79. ^ "Resolution 8 of the CGPM at its 20th Meeting (1995)". Bureau International des Poids et Mesures. Retrieved 2014-09-23.


Notes

  1. ^ The term scalar product is often also used more generally to mean a symmetric bilinear form, for example for a pseudo-Euclidean space.[citation needed]
  2. ^ j is usually used in Engineering contexts where i has other meanings (such as electrical current)
  1. ^ Antiderivatives are also called general integrals, and sometimes integrals. The latter term is generic, and refers not only to indefinite integrals (antiderivatives), but also to definite integrals. When the word integral is used without additional specification, the reader is supposed to deduce from the context whether it refers to a definite or indefinite integral. Some authors define the indefinite integral of a function as the set of its infinitely many possible antiderivatives. Others define it as an arbitrarily selected element of that set. Wikipedia adopts the latter approach.[citation needed]
  2. ^ The symbol J is commonly used instead of the intuitive I in order to avoid confusion with other concepts identified by similar I–like glyphs, e.g. identities.